
Graphics::Toolkit::Color 2.0 feature overview

blogs.perl.org

Finally - GTC 2.0, an all-in-one color library, is released! This post will not rehash the (very) fine manual, but give you a sense of what you can achieve with this software and why it is better than any other library of that sort on CPAN. If you would like to look under the hood of GTC, please read my last post.

When I released GTC 1.0 in 2022, it had 4 major features:

1. computing color gradients between 2 colors in RGB
2. computing complementary colors in HSL
3. translating color names from internal constant set into RGB values
4. converting RGB to HSL and back

The HSL support allowed adding and subtracting lightness and saturation (making colors darker or lighter, more pale or more colorful). Add a very rudimentary distance computation and color blending, and we have reached the bottom of the barrel.

GTC 2.0 expanded in all areas manyfold. Going from 2 color spaces (RGB and HSL) to now 17 (soon ~25) has a large effect. Not only does reading and writing color values in 17 spaces make GTC much more useful; computing a gradient or measuring a distance in different spaces also gives you options. Some spaces are optimized for human perception (OKLAB or CIELUV), others you would choose out of technical necessity. OKLAB and OKHCL in particular have been the hype for a while, and GTC is the only module on CPAN supporting them. Almost all methods (besides ''name'' and ''complement'') let you choose the color space the method will be computed in, and it is always the named argument in that you do it with: " in => 'RGB' " just reads naturally.

And just to complete bullet point 1: gradient can now take a series of colors and a tilt factor as arguments to produce very expressive, custom gradients. The tilt factor also works for complements. If you use the special tilt values from the documentation you can also get the split complementary colors designers need, but the nice thing about GTC is that you can choose any other value to get exactly what you are looking for. Many libraries have one method for triadic colors and another for tetradic ones. To get them in GTC you just set the steps argument to 3 or 4, but again you can also choose any other number. Complements can be tilted in all 3 dimensions.

Besides gradient and complement there is also a new color set method: cluster. It computes a bunch of colors that are centered around a given one but have a given minimal dissimilarity. Also new is invert, often the fastest way to get a fitting fore-/background color, if the original color was not too bland.

The internal color name constants are still the same, but this feature block got 2 expansions. For one, you can now ask for the closest color name (closest_name) and select which standard this name has to come from (e.g. CSS). These constants are provided by the Graphics::ColorNames::* modules, and you can use them anywhere a color is expected as input. The nice green from the X11 standard would be just 'X:forestgreen'.
But since the CSS + X11 + Pantone report colors are already included, 'forestgreen' works too.

There are many more features coming in the next weeks. The most requested is probably a simulation of color impaired vision; more spaces are planned; a gamut checker is already implemented; gamma correction will be implemented this week; and much, much more. Just give it a try, and please send bug reports and feature requests.

PS: Yes, I also held a lightning talk about GTC in Berlin last week.

I need to move some chunks of text around in a file. I am partially successful, in the sense that I can move only the first chunk successfully.

The text in the file looks like this:

text regtext1 text regtext2 text regtextA regtextZ end

where text is some random text, and regtext1,2,3 are pieces of text conforming to some regular rules / patterns. All of them can contain pretty much any printable character, and a few more (diacritics, end-of-line, ...).

What I do now is something like this:

/(reg)(text\d+.*?)(regtext[A-Z]+)/$1$3$2/gs

the result being that regtextA is moved inside regtext1:

text regregtextAtext1 text regtext2 text regtextZ end

The issue is that after the replace, the search-and-replace continues at the position after regtextA, before regtextZ - if I understand the algorithm correctly.
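For what it's worth, the resume-after-match behavior is easy to reproduce outside Perl; here is a Python sketch (not from the original question) of the same substitution - re.sub/re.subn also resume scanning after the end of each match, so they show the same single replacement:

```python
import re

s = "text regtext1 text regtext2 text regtextA regtextZ end"

# Same pattern and swap as the s///g above. The lazy .*? in group 2
# swallows "text regtext2 text" on the way to the first regtext[A-Z]+,
# and the engine then resumes scanning AFTER the end of the whole match,
# so regtextZ is never paired with regtext2: only one replacement happens.
result, n = re.subn(r"(reg)(text\d+.*?)(regtext[A-Z]+)", r"\1\3\2", s)
print(n)       # 1
print(result)  # regtextA lands inside regtext1; regtextZ stays where it was
```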

How can I modify the search-and-replace expression in such way to do the same thing for regtext2...regtextZ, and all other such occurrences? The text in the end should look like:

text regregtextAtext1 text regregtextZtext2 text end

but it does not happen.

I might have to use the \G anchor, but I have no idea how. For debugging I use regex101.com.

Looking at a previous example, I tried the following code:

$s =~ s{(?:\G(?!\A)|)\K(reg)(text\d+.*?)(regtext[A-Z]+)}{"$1$3$2"}

but it also makes only one replacement - probably because I do not understand exactly how the original code (and \G) works.

I tried the correct version of the code suggested in the answer, but it takes an "infinity" of time (I forcefully stopped the execution after several minutes), just like in the previous example - even if I limit the execution to only one replacement. The presence of the "while" is "malefic". In the absence of the while, the one replacement happens "instantly".

Reschedule 'use VERSION' switch fatalisation to 5.46

We did say we'd do this for 5.44 but we forgot to make the change until
now, and it's a bit late in the cycle. We'll reschedule it for 5.46.

perlguts: Refer queries directly to P5P list

Currently, readers of this file who encounter problems have to scroll down
over 5000 lines to find the "author" to whom questions should be directed.
For nearly 30 years that "author" has been P5P, so let's tell the readers that
directly.

Make, Bash, and a scripting language of your choice


Creating AWS Resources…let me count the ways

You need to create an S3 bucket, an SQS queue, an IAM policy and a few other AWS resources. But how?…TIMTOWTDI

The Console

  • Pros: visual, immediate feedback, no tooling required, great for exploration
  • Cons: not repeatable, not version controllable, opaque, clickops doesn’t scale, “I swear I configured it the same way”

The AWS CLI

  • Pros: scriptable, composable, already installed, good for one-offs
  • Cons: not idempotent by default, no state management, error handling is manual, scripts can grow into monsters

CloudFormation

  • Pros: native AWS, state managed by AWS, rollback support, drift detection
  • Cons: YAML/JSON verbosity, slow feedback loop, stack update failures are painful, error messages are famously cryptic, proprietary to AWS, subject to change without notice

Terraform

  • Pros: multi-cloud, huge community, mature ecosystem, state management, plan before apply
  • Cons: state file complexity, backend configuration, provider versioning, HCL is yet another language to learn, overkill for small projects, often requires tricks & contortions

Pulumi

  • Pros: real programming languages, familiar abstractions, state management
  • Cons: even more complex than Terraform, another runtime to install and maintain

CDK

  • Pros: real programming languages, generates CloudFormation, good for large organizations
  • Cons: CloudFormation underneath means CloudFormation problems, Node.js dependency

…and the rest of the crew…

Ansible, AWS SAM, Serverless Framework - each with their own opinions, dependencies, and learning curves.

Every option beyond the CLI adds a layer of abstraction, a new language or DSL, a state management story, and a new thing to learn and maintain. For large teams managing hundreds of resources across multiple environments that overhead is justified. For a solo developer or small team managing a focused set of resources it can feel like overkill.

Even in large organizations, not every project should be folded into the corporate infrastructure IaC tool. Moreover, not every project gets the attention from the DevOps team necessary to create or support its application infrastructure.

What if you could get idempotent, repeatable, version-controlled infrastructure management using tools you already have? No new language, no state backend, no provider versioning. Just make, bash, a scripting language you’re comfortable with, and your cloud provider’s CLI.

And yes…my love affair with make is endless.

We’ll use AWS examples throughout, but the patterns apply equally to Google Cloud (gcloud) and Microsoft Azure (az). The CLI tools differ, the patterns don’t.


A word about the AWS CLI --query option

Before you reach for jq, perl, or python to parse CLI output, it’s worth knowing that most cloud CLIs have built-in query support. The AWS CLI’s --query flag implements JMESPath - a query language for JSON that handles the majority of filtering and extraction tasks without any additional tools:

# get a specific field
aws lambda get-function \
    --function-name my-function \
    --query 'Configuration.FunctionArn' \
    --output text

# filter a list
aws sqs list-queues \
    --query 'QueueUrls[?contains(@, `my-queue`)]|[0]' \
    --output text

--query is faster, requires no additional dependencies, and keeps your pipeline simple. Reach for it first. When it falls short - complex transformations, arithmetic, multi-value extraction - that’s when a one-liner earns its place:

# perl
aws lambda get-function --function-name my-function | \
    perl -MJSON -n0 -e '$l=decode_json($_); print $l->{Configuration}{FunctionArn}'

# python
aws lambda get-function --function-name my-function | \
    python3 -c "import json,sys; d=json.load(sys.stdin); print(d['Configuration']['FunctionArn'])"

Both get the job done. Use whichever lives in your shed.


What is Idempotency?

The word comes from mathematics - an operation is idempotent if applying it multiple times produces the same result as applying it once. Sort of like those ID10T errors…no matter how hard or how many times that user clicks on that button they get the same result.

In the context of infrastructure management it means this: running your resource creation script twice should have exactly the same outcome as running it once. The first run creates the resource. The second run detects it already exists and does nothing - no errors, no duplicates, no side effects.

This sounds simple but it’s surprisingly easy to get wrong. A naive script that just calls aws lambda create-function will fail on the second run with a ResourceConflictException. A slightly better script wraps that in error handling. A truly idempotent script never attempts to create a resource it knows already exists.
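A filesystem analogy (not AWS-specific, just an illustration) shows the same three levels - a naive create fails on the second run, while an idempotent one produces the same outcome every time:

```shell
#!/bin/sh
dir="$(mktemp -d)/demo"

# First run: the resource does not exist yet, so creation succeeds.
mkdir "$dir" && echo "first create: ok"

# Naive retry: plain mkdir is not idempotent - the second call errors out,
# just like aws lambda create-function on an existing function.
mkdir "$dir" 2>/dev/null || echo "second create: already exists"

# Idempotent version: mkdir -p gives the same outcome on every run, no error.
mkdir -p "$dir" && echo "idempotent create: ok"
```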

And it works in both directions. The idempotent bug - running a failing process repeatedly and getting the same error every time - is what happens when your failure path is idempotent too. Consistently wrong, no matter how many times you try. The patterns we’ll show are designed to ensure that success is idempotent while failure always leaves the door open for the next attempt.

Cloud APIs fall into four distinct behavioral categories when it comes to idempotency, and your tooling needs to handle each one differently:

Case 1 - The API is idempotent and produces output

Some APIs can be called repeatedly without error and return useful output each time. aws events put-rule is a good example - it returns the rule ARN whether the rule was just created or already existed. The pattern: call the read API first, capture the output, and call the write API only if the read returned nothing.

Case 2 - The API is idempotent but produces no output

Some write APIs succeed silently - they return nothing on success. aws s3api put-bucket-notification-configuration is a good example. It will happily overwrite an existing configuration without complaint, but returns no output to confirm success. The pattern: call the API, synthesize a value for your sentinel using && echo to capture something meaningful on success.

Case 3 - The API is not idempotent

Some APIs will fail with an error if you try to create a resource that already exists. aws lambda add-permission returns ResourceConflictException if the statement ID already exists. aws lambda create-function returns ResourceConflictException if the function already exists. These APIs give you no choice - you must query first and only call the write API if the resource is missing.

Case 4 - The API call fails

Any of the above can fail - network errors, permission problems, invalid parameters. When a call fails you must not leave behind a sentinel file that signals success. A stale sentinel is worse than no sentinel - it tells Make the resource exists when it doesn’t, and subsequent runs silently skip the creation step. The patterns: || rm -f $@ when writing directly, or else rm -f $@ when capturing to a variable first.


The Sentinel File

Before we look at the four patterns in detail, we need to introduce a concept that ties everything together: the sentinel file.

A sentinel file is simply a file whose existence signals that a task has been completed successfully. It contains no magic - it might hold the output of the API call that created the resource, or it might just be an empty file created with touch. What matters is that it exists when the task has succeeded and doesn't exist when it hasn't.

make has used this pattern since the 1970s. When you declare a target in a Makefile, make checks whether a file with that name exists before deciding whether to run the recipe. If the file exists and is newer than its dependencies, make skips the recipe entirely. If the file doesn’t exist, make runs the recipe to create it.

For infrastructure management this is exactly the behavior we want:

my-resource:
    @value="$$(aws some-service describe-resource \
            --name $(RESOURCE_NAME) 2>&1)"; \
    if [[ -z "$$value" || "$$value" = "ResourceNotFound" ]]; then \
        value="$$(aws some-service create-resource \
            --name $(RESOURCE_NAME))"; \
    fi; \
    test -e $@ || echo "$$value" > $@

The first time you run make my-resource the file doesn’t exist, the recipe runs, the resource is created, and the API response is written to the sentinel file my-resource. The second time you run it, make sees the file exists and skips the recipe entirely - zero API calls.

This brings us to the || rm -f $@ discipline. If the API call fails for any reason, the sentinel file is immediately removed. Without this a failed create leaves an empty or partial sentinel file. Make sees the file exists on the next run, skips the recipe, and the resource is never created. An idempotent bug - consistently broken, silently, forever.

One more pattern worth noting - test -e $@ || echo "$$value" > $@. This writes the sentinel only if it doesn’t already exist. Combined with the initial query this means we never rewrite a sentinel unnecessarily, avoiding redundant API calls on every make invocation. The sentinel is written exactly once - on the first successful run - and never touched again.


The Four Patterns

Armed with the sentinel file concept and an understanding of the four API behavioral categories, let’s look at concrete implementations of each pattern.


Pattern 1 - Idempotent API with output

The simplest case. Query the resource first - if it exists capture the output and write the sentinel. If it doesn’t exist, create it, capture the output, and write the sentinel. Either way you end up with a sentinel containing meaningful content.

The SQS queue creation is a good example:

sqs-queue:
    @queue="$$(aws sqs list-queues \
        --query 'QueueUrls[?contains(@, `$(QUEUE_NAME)`)]|[0]' \
        --output text --profile $(AWS_PROFILE) 2>&1)"; \
    if echo "$$queue" | grep -q 'error\|Error'; then \
        echo "ERROR: list-queues failed: $$queue" >&2; \
        exit 1; \
    elif [[ -z "$$queue" || "$$queue" = "None" ]]; then \
        queue="$(QUEUE_NAME)"; \
        aws sqs create-queue --queue-name $(QUEUE_NAME) \
            --profile $(AWS_PROFILE); \
    fi; \
    test -e $@ || echo "$$queue" > $@

Notice --query doing the filtering work before the output reaches the shell. No jq, no pipeline - the AWS CLI extracts exactly what we need. The result is either a queue URL or empty. If empty we create. Either way $$queue ends up with a value and the sentinel is written exactly once.

The EventBridge rule follows the same pattern:

lambda-eventbridge-rule:
    @rule="$$(aws events describe-rule \
            --name $(RULE_NAME) \
            --profile $(AWS_PROFILE) 2>&1)"; \
    if echo "$$rule" | grep -q 'ResourceNotFoundException'; then \
        rule="$$(aws events put-rule \
            --name $(RULE_NAME) \
            --schedule-expression "$(SCHEDULE_EXPRESSION)" \
            --state ENABLED \
            --profile $(AWS_PROFILE))"; \
    elif echo "$$rule" | grep -q 'error\|Error'; then \
        echo "ERROR: describe-rule failed: $$rule" >&2; \
        exit 1; \
    fi; \
    test -e $@ || echo "$$rule" > $@

Same shape - query, create if missing, write sentinel once.


Pattern 2 - Idempotent API with no output

Some APIs succeed silently. aws s3api put-bucket-notification-configuration is the canonical example - it happily overwrites an existing configuration and returns nothing. No output means nothing to write to the sentinel.

The solution is to synthesize a value using &&:

lambda-s3-trigger: lambda-s3-permission
    @function_arn=$$(cat lambda-function | perl -MJSON -n0 -e \
        '$$l=decode_json($$_); print $$l->{Configuration}->{FunctionArn}'); \
    config="{LambdaFunctionConfigurations => \
        [{LambdaFunctionArn => q{$$function_arn}, Events => [qw($(S3_EVENT))]}]}"; \
    config="$$(perl -MJSON -e "printf q{\"%s\"}, encode_json($$config)")"; \
    trigger="$$(aws s3api put-bucket-notification-configuration \
        --bucket $(BUCKET_NAME) \
        --notification-configuration $$config \
        --profile $(AWS_PROFILE) && echo $$config)"; \
    if [[ -n "$$trigger" ]]; then \
        test -e $@ || echo "$$trigger" > $@; \
    else \
        rm -f $@; \
    fi

The && echo $$config is the key. If the API call succeeds the && fires and $$trigger gets the config value - something meaningful to write to the sentinel. If the API call fails && doesn’t fire, $$trigger stays empty, and the else branch cleans up with rm -f $@.

This is also where a useful trick emerges for generating shell-safe JSON from a scripting language. The AWS CLI needs the JSON wrapped in double quotes as a single shell argument. Rather than fighting with shell escaping at the point of use, we bake the quotes into the generated value at the point of creation:

# perl
config="$(perl -MJSON -e "printf q{\"%s\"}, encode_json({...})")"

# python
config="$(python3 -c "import json; print('\"' + json.dumps({...}) + '\"')")"

TIMTOWTDI.


Pattern 3 - Non-idempotent API

Some APIs are not idempotent - they fail with a ResourceConflictException or similar if the resource already exists. aws lambda add-permission and aws lambda create-function are both in this category. There is no “create or update” variant - you must check existence first and only call the write API if the resource is missing.

The Lambda S3 permission target is a good example:

lambda-s3-permission: lambda-function s3-bucket
        @permission="$$(aws lambda get-policy \
                --function-name $(FUNCTION_NAME) \
                --profile $(AWS_PROFILE) 2>&1)"; \
        if echo "$$permission" | grep -q 'ResourceNotFoundException' || \
           ! echo "$$permission" | grep -q s3.amazonaws.com; then \
            permission="$$(aws lambda add-permission \
                --function-name $(FUNCTION_NAME) \
                --statement-id s3-trigger-$(BUCKET_NAME) \
                --action lambda:InvokeFunction \
                --principal s3.amazonaws.com \
                --source-arn arn:aws:s3:::$(BUCKET_NAME) \
                --profile $(AWS_PROFILE))"; \
        elif echo "$$permission" | grep -q 'error\|Error'; then \
            echo "ERROR: get-policy failed: $$permission" >&2; \
            exit 1; \
        fi; \
        if [[ -n "$$permission" ]]; then \
            test -e $@ || echo "$$permission" > $@; \
        else \
            rm -f $@; \
        fi

A few things worth noting here…

  • get-policy returns the full policy document which may contain multiple statements - we check for the presence of s3.amazonaws.com specifically using ! grep -q rather than just checking for an empty response. This handles the case where a policy exists but doesn’t yet have the S3 permission we need.
  • The sentinel is only written if $$permission is non-empty after the if block. This covers the case where get-policy returns nothing and add-permission also fails - the sentinel stays absent and the next make run will try again.
  • We capture stderr into our bash variable (2>&1) to distinguish "the resource does not exist" from other errors. When other failures are possible, 2>&1 combined with specific error string matching gives you both idempotency and visibility. Swallowing errors silently (2>/dev/null) is how idempotent bugs are born.
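The capture-and-match mechanic itself is plain shell; a toy version (hypothetical path, no AWS involved) looks like this:

```shell
#!/bin/sh
# 2>&1 folds stderr into the captured value, so the error text is
# available for string matching instead of being swallowed or lost.
out="$(ls /no/such/path 2>&1)" || true

if echo "$out" | grep -qi 'no such file'; then
    echo "resource missing - this is where we would create it"
elif echo "$out" | grep -qi 'error'; then
    echo "unexpected failure: $out" >&2
fi
```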

Pattern 4 - Failure handling

This isn’t a separate pattern so much as a discipline that applies to all three of the above. There are two mechanisms depending on how the sentinel is written.

When the sentinel is written directly by the command:

aws lambda create-function ... > $@ || rm -f $@

|| rm -f $@ ensures that if the command fails the partial or empty sentinel is immediately cleaned up. Without it Make sees the file on the next run and silently skips the recipe - an idempotent bug.

When the sentinel is written by capturing output to a variable first:

if [[ -n "$$value" ]]; then \
    test -e $@ || echo "$$value" > $@; \
else \
    rm -f $@; \
fi

The else rm -f $@ serves the same purpose. If the variable is empty - because the API call failed - the sentinel is removed. If the sentinel doesn’t exist yet nothing is written. Either way the next make run will try again.

In both cases the goal is the same: a sentinel file should only exist when the underlying resource exists. A stale sentinel is worse than no sentinel.

Note also that our Makefiles set .SHELLFLAGS := -ec which causes make to exit immediately if any command in a recipe fails. This means commands that don’t write to $@ - like aws sqs create-queue - don’t need explicit failure handling. make will die loudly and the sentinel won’t be written.
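The effect of -e is easy to demonstrate with plain sh - this is the same flag make hands to the shell via .SHELLFLAGS:

```shell
#!/bin/sh
# -e: stop at the first failing command; -c: take the recipe body as a string.
# With -e, `false` aborts the recipe before the echo ever runs:
sh -ec 'false; echo "never reached"' || echo "recipe aborted, sentinel never written"

# Without -e, the shell plods on past the failure:
sh -c 'false; echo "kept going without -e"'
```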


Conclusion

Creating AWS resources can be done using several different tools - all of them eventually call AWS APIs and process the returned payloads. Each of these tools has its place. Each adds something. Each also comes with its own complexity, dependencies, and learning curve.

For a small project or a focused set of resources - the kind a solo developer or small team manages for a specific application - you don't need tools with a high cognitive or resource load. You can use the tools you already have on your belt: make, bash, [insert favorite scripting language here], and aws. And you can leverage those same tools equally well with gcloud or az.

The four patterns we’ve covered handle every AWS API behavior you’ll encounter:

  • Query first, create only if missing, write a sentinel
  • Synthesize output when the API has none
  • Always check before calling a non-idempotent API
  • Clean up on failure with || rm -f $@

These aren’t new tricks - they’re straightforward applications of tools that have been around for decades. make has been managing file-based dependencies since 1976. The sentinel file pattern predates cloud computing entirely. We’re just applying them to a new problem.

One final thought. The idempotent bug - running a failing process repeatedly and getting the same error every time - is the mirror image of what we’ve built here. Our goal is idempotent success: run it once, it works. Run it again, it still works. Run it a hundred times, nothing changes. || rm -f $@ is what separates idempotent success from idempotent failure - it ensures that a bad run always leaves the door open for the next attempt rather than cementing the failure in place with a stale sentinel.

Your shed is already well stocked. Sometimes the right tool for the job is the one you’ve had hanging on the wall for thirty years.


Further Reading

  • “Advanced Bash-Scripting Guide” - https://tldp.org/LDP/abs/html/index.html
  • “GNU Make” - https://www.gnu.org/software/make/manual/html_node/index.html
  • Dave Oswald, “Perl One Liners for the Shell” (Perl conference presentation): https://www.slideshare.net/slideshow/perl-oneliners/77841913
  • Peteris Krumins, “Perl One-Liners” (No Starch Press): https://nostarch.com/perloneliners
  • Sundeep Agarwal, “Perl One-Liners Guide” (free online): https://learnbyexample.github.io/learn_perl_oneliners/
  • AWS CLI JMESPath query documentation: https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-filter.html

Happy 7th birthday TWC!

dev.to #perl

Weekly Challenge 366

It was seven years ago that Mohammad sent out the first challenge to Team PWC (as it was then known). Thank you very much for all your work over the seven years.

Each week Mohammad S. Anwar sends out The Weekly Challenge, a chance for all of us to come up with solutions to two weekly tasks. My solutions are written in Python first, and then converted to Perl. Unless otherwise stated, Copilot (and other AI tools) have NOT been used to generate the solution. It's a great way for us all to practice some coding.

Challenge, My solutions

Task 1: Count Prefixes

Task

You are given an array of words and a string (contains only lowercase English letters).

Write a script to return the number of words in the given array that are a prefix of the given string.

My solution

This is a one-liner in Python. It should be pretty self-explanatory. Given a list called array and a string called prefix, it counts the number of items in the list where the first letters of the prefix match the word.

def count_prefixes(array: list, prefix: str) -> int:
    return sum(1 for word in array if prefix[:len(word)] == word)

The Perl solution uses the grep function to perform the counting. In a scalar context, grep returns the number of matching items.

sub main (@array) {
    my $prefix = pop(@array);
    my $count  = grep { substr( $prefix, 0, length($_) ) eq $_ } @array;
    say $count;
}

Examples

$ ./ch-1.py a ap app apple banana apple
4

$ ./ch-1.py cat dog fish bird
0

$ ./ch-1.py hello he hello heaven he hello
4

$ ./ch-1.py "" code coding cod coding
3

$ ./ch-1.py p pr pro prog progr progra program program
7

Task 2: Valid Times

Task

You are given a time in the form HH:MM. The earliest possible time is 00:00 and the latest possible time is 23:59. In the string time, the digits represented by the ? symbol are unknown, and must be replaced with a digit from 0 to 9.

Write a script to return the count of different ways we can make it a valid time.

My solution

This is an interesting challenge, as the solution is not straightforward. There are a few approaches that can be taken. One option is to enumerate all 1440 minutes in a day and see which ones match the expected pattern.

The approach I took was to calculate the number of possible hours and the number of possible minutes, and multiply the two figures to get the result.
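That brute-force option is worth a few lines anyway, if only as a cross-check for the arithmetic below (a hypothetical helper, not part of my submission):

```python
def count_valid_times_brute_force(pattern: str) -> int:
    """Try all 1440 minutes of the day against the HH:MM pattern,
    where '?' matches any digit and other characters must match exactly."""
    return sum(
        all(p == '?' or p == c for p, c in zip(pattern, f"{h:02d}:{m:02d}"))
        for h in range(24)
        for m in range(60)
    )

print(count_valid_times_brute_force("?4:?0"))  # 12
print(count_valid_times_brute_force("??:??"))  # 1440
```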

I start by using a regular expression to check that the time is valid. As the question mark ? is within square brackets [ ], it is taken as a literal character.

def valid_times(input_string: str) -> int:
    if not re.search(r'^([0-1?][0-9?]|2[0-3?]):[0-5?][0-9?]$', input_string):
        raise ValueError("Input is not in the expected format (HH:MM)")

The next task is calculating the number of valid hours.

  1. If the hours part is ??, then there are 24 valid hours.
  2. If the first character is a question mark, there are 3 valid hours if the second digit is less than four (e.g. 02 12 22), or 2 if it is 4 or greater (e.g. 04 14).
  3. If the second character is a question mark, there are 4 valid hours if the first digit is 2, or 10 valid hours otherwise.
  4. If the hours have no question marks, there is only one valid hour.
    # Compute the hours
    if input_string[:2] == "??":
        hours = 24
    elif input_string[:1] == "?":
        hours = 3 if int(input_string[1:2]) < 4 else 2
    elif input_string[1:2] == "?":
        hours = 4 if input_string[:1] == "2" else 10
    else:
        hours = 1

Thankfully calculating the number of valid minutes is a little easier.

  1. If the minutes part is ??, then there are sixty valid minutes.
  2. If the first character is a question mark, then there are six valid minutes (e.g. 06 16 26 36 46 56).
  3. If the second character is a question mark, there are ten valid minutes (e.g. 50 51 ... 58 59).
  4. If the minutes have no question marks, there is only one valid minute.
    if input_string[3:] == "??":
        minutes = 60
    elif input_string[3:4] == "?":
        minutes = 6
    elif input_string[4:] == "?":
        minutes = 10
    else:
        minutes = 1

    return hours * minutes

The Perl solution follows the same logic as the Python solution.

Examples

$ ./ch-2.py ?2:34
3

$ ./ch-2.py ?4:?0
12

$ ./ch-2.py ??:??
1440

$ ./ch-2.py ?3:45
3

$ ./ch-2.py 2?:15
4

Perl Weekly #765 - Testing in Perl and AI

dev.to #perl

Originally published at Perl Weekly 765

Hi there!

I am sending this edition rather late, as I got into a frenzy of online courses that require a lot of preparation, and only now have I had time to work on the Perl Weekly. Sorry for that. On the other hand, this edition has a lot of excellent articles. What happened? Last time I hardly found any article and now there are a lot. I am not complaining at all, I was just really surprised. Keep up the blogging so we can share more content!

We have 3 grant reports, 2 reports from GPW, several articles about the use of AI with Perl, and many more. I think one of the keys is that several people have started to write series of articles, so they have a theme and explore it from various aspects.

I realized it too late, but as I have been stuck in Hungary for more than a month already, I could have visited the German Perl Workshop in Berlin. I thought about it too late. Anyway, at least there are the reports.

Personally I love testing. It is coding with very fast feedback that helps me stay sane. More or less :-)

Last week I taught a course on Testing in Python, but I thought one about Perl should be done as well. So a few days from now I am going to start teaching a multi-part course about Testing in Perl, on Zoom.

Course attendance is free of charge.

The presentations will be recorded and will be uploaded to the Code Maven Academy where they will be available to paying subscribers.

I hope I'll see many of you and your co-workers at the course. Register here!

Enjoy your week

--
Your editor: Gabor Szabo.

Articles

Perl, the Strange Language That Built the Early Web

A bit of nostalgia and a lot of good insights.

TPRC Talk Submission Deadline extended

The new deadline is April 21, 2026. Go and submit your talk proposal!

Still on the [b]leading edge

The story of a crazy bug. Somewhere. Not in my code. discuss

ANNOUNCE: Perl.Wiki V 1.42 & 2 CPAN::Meta* modules

Beautiful Perl feature: reusable subregexes

Stop Writing Release Notes: Accelerate with AI

Help testing DBD::Oracle

Discussion

Getting a 500 error on my website when running CGI script

Or, how to go from Perl v5.005 to Perl v5.32.1 in one step.

PetaPerl - reimplementation of perl

I have thought several times about trying to reimplement Perl in Rust, and every time I quickly convinced myself not to do it. First of all because it is way beyond my expertise, but also: what would be the value of it? As I understand it, there was a presentation about it at the German Perl Workshop covering the motivation as well. Very interesting. You can read the documentation and see the slides. I am rather excited!

Ambiguous use of ${x} resolved to $x

Code with winter clothes...

Perl and AI

Six Ways to Use AI Without Giving Up the Keys

The titles: 1. Unit Test Writing; 2. Documentation; 3. Release Notes; 4. Bug Triage; 5. Code Review; 6. Legacy Code Deciphering

experiments with claude, part ⅳ: dzilification of MIME-Lite

experiments with claude, part ⅴ: ClaudeLog

experiments with claude, part ⅲ: JMAP-Tester coverage

Grants

Maintaining Perl 5 Core (Dave Mitchell): February 2026

PEVANS Core Perl 5: Grant Report for February 2026

Maintaining Perl (Tony Cook) February 2026

Perl

This week in PSC (218) | 2026-03-16

The Weekly Challenge

The Weekly Challenge by Mohammad Sajid Anwar will help you step out of your comfort zone. You can even win a $50 prize by participating in the weekly challenge. We pick one champion at the end of the month from among all of that month's contributors, thanks to the sponsor Lance Wicks.

The Weekly Challenge - 366

Welcome to a new week with a couple of fun tasks, "Count Prefixes" and "Valid Times". If you are new to the weekly challenge, why not join us and have fun every week? For more information, please read the FAQ.

RECAP - The Weekly Challenge - 365

Enjoy a quick recap of last week's contributions by Team PWC dealing with the "Alphabet Index Digit Sum" and "Valid Token Counter" tasks in Perl and Raku. You will find plenty of solutions to keep you busy.

A Token Alphabet

An informative and thoughtful article which illustrates Raku's fantastic facilities for creating grammars and using tokens to model your own custom alphabet in a pleasing and expressive manner. A good balance of theory and practice; it makes uncommon parsing concepts readable while showcasing Raku's idiomatic implementation.

PWC365, Task 2 Valid Token Counter

This solution is implemented in a clean and organised manner. It shows excellent use of list processing in Raku, along with control flow that solves the problem effectively. The author clearly understands how the system works, as shown by the concise and logical reasoning in the code itself and the idiomatic way it is written.

Perl Weekly Challenge: Week 365

A clearly written and entertaining article that shows both Perl and Raku solutions in parallel, demonstrating the author's understanding of the idioms and strengths of both languages. It provides clear logic as well as practical examples of how to implement it, and is helpful in showing the differences and similarities between the two languages while remaining concise and easy to read.

Sum Tokens and Count Digits

This is an intelligently written article that succinctly outlines how to utilise an effective problem-solving methodology without sacrificing either code readability or idiomatic use of language. In addition, the article does a wonderful job of providing clarity as well as technical depth in order to enhance both continuity in reasoning and elegance/instructional value of the solution.

The Weekly Challenge 365

This well-written article provides structure to help readers understand how each Weekly Challenge solution was developed. It combines clear explanations with practical code examples to look at both how to approach a problem and how to solve it. The author demonstrates an understanding of the problem and the specific requirements a solution must satisfy to be considered valid, while also giving the reader a fun place to explore various styles of programming in Perl and other languages.

regexps to rule them all!

An organised, well-articulated post that illustrates your consistent, orderly method for completing each week's Challenge with great success in diverse languages, demonstrating your problem-solving capabilities as well as your versatility. The explanations are descriptive, practical, and applicable across all languages. By providing side-by-side examples of the implementations in different programming languages, you have created meaningful comparisons that illustrate each language's distinctive characteristics.

Perl Weekly Challenge 365

A well-structured write-up, written in an engaging way that makes solving both parts of the Weekly Challenge clear and fun, with well-organised solutions and Perl/Raku examples. The examples are easy to read, written clearly and concisely, and demonstrate logic that can be followed by readers of varying abilities.

Are Post Alphabits a Token Breakfast Cereal?

The post is full of energy and fun. It presents a practical, hands-on approach to completing the Weekly Challenge with appropriate justification and effective use of Perl programming constructs. The solutions demonstrate an excellent understanding of the basics of programming (particularly list and string handling), and their implementation is both approachable and educational for the reader.

Splitting and Summing and Checking and Counting

A concise README that is thoughtfully organised, with clear explanations and idiomatic code, that makes it easy to replicate your approach. You have demonstrated excellent problem solving and a high level of attention to clarity in your write-up; you have also successfully managed to balance the level of detail and technical depth for other people to follow.

I'll be the smartest bird the world has ever seen!

This is a creative solution that is fun and playful, uses a literary reference to solve a technical problem, and shows clarity of thought and personality. The implementation is brief and uses idiomatic Perl, whose strengths make the story clear and memorable.

Lots of counting

This is a good example of a solid engineering solution. It shows a structured and clear thinking process, as well as good use of Perl's basic features to accomplish the task at hand. The implementation is both concise and expressive, demonstrating a mastery of decomposing problems into their components and of clean, idiomatic coding.

The Weekly Challenge - 365: Alphabet Index Digit Sum

This document has been created in a deliberate and orderly way which shows a good understanding of the problem at hand as well as the logic behind arriving at the answer; it also includes attention to detail when implementing the solution. The solution is practically designed as well as creatively developed and uses Perl features thoughtfully to create an efficient and effective answer.

The Weekly Challenge - 365: Valid Token Counter

It is a clear and well-thought-out solution that uses a sound problem-solving method, reasons clearly, and contains clean, idiomatic Perl code. The approach is easy to implement and efficient, and demonstrates the author's understanding of the problem and their attention to edge cases in the implementation.

The Weekly Challenge #365

The post gives a comprehensive introduction to how to use Perl, as well as examples of its many capabilities. Each task has been addressed thoroughly by providing clear explanations and well‑structured code, illustrating the effective and creative use of Perl idiomatic patterns. All of these characteristics make this post an excellent resource for both learning Perl and using Perl as a reference.

Alphabet Digit Counter Token

This post presents a clear, thorough examination of the problem and explains the solution through logical analysis. Roger has created a detailed description of the proposed solution, with small, clear explanations and code, so that all readers, whether interested in Perl or in token-based parsing methods, can easily understand how to apply these methods in their own code.

Counting the index

A concise write-up which clearly illustrates the two parts of the Weekly Challenge: counting an index by transforming alphabet positions into repeated digit sums, and validating tokens via concise logical expressions. Solutions are given in both Python and Perl, with clear explanations and practical examples of problem solving and implementation.

Weekly collections

NICEPERL's lists

Great CPAN modules released last week;
MetaCPAN weekly report;
StackOverflow Perl report.

Event reports

28th German Perl Workshop (2026, Berlin)

It sounds like the German Perl Workshop has become a replacement for the mostly defunct YAPC::EU.

German Perl Workshop 2026 in Berlin

The usual very detailed review by domm.

Events

Perl Maven online: Testing in Perl - part 1

March 26, 2026

Perl Toolchain Summit 2026

April 23-26, 2026

The Perl and Raku Conference 2026

June 26-29, 2026, Greenville, SC, USA

You joined the Perl Weekly to get weekly e-mails about the Perl programming language and related topics.

Want to see more? See the archives of all the issues.

Not yet subscribed to the newsletter? Join us free of charge!

(C) Copyright Gabor Szabo
The articles are copyright the respective authors.

update Module CoreList

Perl commits on GitHub

see: bce42ab11583f8e120361de2fd7b341cd0c9fc3e

bisect-runner.pl: Add example

Perl commits on GitHub

Demonstration program uses locally installed module.

perlsyn: remove reference to "do SUB" syntax

This (long deprecated) syntax was removed in v5.20 (commit
8c74b4142557).

28th German Perl Workshop (2026, Berlin)

blogs.perl.org

Cross-posted from my blog

Last week, the Perl community came together for the 28th German Perl Workshop. This year, it was held at the Heilandskirche in Berlin Moabit. Excitingly, we had the nave for the presentations.

While the name is still German Perl Workshop, we now draw attendees from all over the globe. Presenters came from India, the US and various European countries. Maybe it is time to announce it as a more international conference again.

Bringing the infrastructure to a Perl Workshop means a lot of additional hardware that we hopefully won't need: looong HDMI cables, various adapters to HDMI, a bundle of extension cords and duct tape of the non-Perl variant. Lee also brought the EPO recording set for recording the presentations. The set came back with me from Berlin, as its main use nowadays is recording the talks at a German Perl Workshop for later publication.

Organizing a conference usually means that my attention is divided between running the event, chatting with attendees and giving a presentation or two. Luckily other members of Frankfurt.pm and other long-time attendees are always there to lend a hand.

Over the years, we have organized the German Perl Workshop many times. Local organizers for 2027 have already stepped up: next year, we aim for the city of Hannover. We don't have a venue contract signed yet, so watch https://www.perl-workshop.de/news for announcements.

Such an event can't happen without the sponsors who support us financially. Let me quickly show their logos here:

(sponsor logos: Otobo, Cosmoshop)

I'm currently on a train from Berlin to Strasbourg, and then onward to Marseille, traveling from the 28th(!) German Perl Workshop to the Koha Hackfest. I spent a few days after the Perl Workshop in Berlin with friends from school who moved there during/after university, hanging around their homes and neighborhoods, visiting museums, professional industrial kitchens and other nice and foody places. But I want to review the Perl Workshop, so:

German Perl Workshop

It seems the last time I attended a German Perl Workshop was in 2020 (literally days before the world shut down...), so I've missed a bunch of nice events and chances to meet up with old Perl friends. But even after this longish break it felt a bit like returning home :-)

I traveled to Berlin by sleeper train (which worked without a problem), arriving on Monday morning a few hours before the workshop started. I went to a friend's place (where I'm staying for the week), dumped my stuff, got a bike, and did a nice morning cycle through Tiergarten to the venue. Which was an actual church! And not even a secularized one.

Day 1

After a short introduction and welcome by Max Maischein (starting with a "Willkommen, liebe Gemeinde" fitting the location), he started the workshop with a talk on Claude Code and coding agents. I only recently started to play around a bit with similar tools, so I could relate to a lot of the topics mentioned. And I (again?) need to point out the blog post I Sold Out for $20 a Month and All I Got Was This Perfectly Generated Terraform, which sums up my feelings and experiences with LLMs much better than I could.

Abigail then shared a nice story on how they (Booking.com) sharded a database, twice, using some "interesting" tricks to move the data around while still getting reads from the correct replicas, all with nearly no downtime. Fun, but as "my" projects usually operate on a much smaller scale than Booking, I will probably not try to recreate their solution.

For lunch I met Michael at a nearby market hall for some Vietnamese food and some planning for the upcoming Perl Toolchain Summit in Vienna.

Lars Dieckow then talked about data types in databases, or rather the lack of more complex types in databases, and how one could still implement such types in SQL. Looks interesting, but probably a bit too hackish for me to actually use. I guess I have to continue handling such cases in code (which of course feels ugly, especially as I've learned to move more and more logic into the DB using CTEs and window functions).

Next Flavio S. Glock showed his very impressive progress with PerlOnJava, a Perl distribution for the JVM. Cool, but probably not something I will use (mostly because I don't run Java anywhere, so adding it to our stack would make things more complex).

Then Lars showed us some of his beloved tools in Aus dem Nähkästchen, continuing a tradition started by Sven Guckes (RIP). I am already using some of the tools (realias, fzf, zoxide, htop, ripgrep) but now plan to finally clean up my dotfiles using xdg-ninja.

Now it was time for my first talk at this workshop, on Using class, the new-ish feature (available in Perl since 5.38) providing native keywords for object-oriented programming. I also sneaked in some bibliographic data structures (MAB2 and MARCXML) to share my pain with the attendees. I was a tiny bit (more) nervous, as this was the first time I was using my current laptop (a Framework running Sway/Wayland) with an external projector, but wl-present worked like a charm. After the talk, Wolfram Schneider showed me his MAB2->MARC online converter, which could maybe have been a basis for our tool, but then writing our own was a "fun" way to learn about MAB2.

The last talk of the day was Lee Johnson with I Bought A Scanner, showing us how he got an old (ancient?) high-res photo scanner working again to scan his various film projects. Fun and interesting!

Between the end of the talks and the social event I went for some coffee with Paul Cochrane, and we were joined by Sawyer X and Flavio and some vegan tiramisu. Paul and I then cycled to the Indian restaurant through some light drizzle and along the Spree, and only then did I realize that Paul had cycled all the way from Hannover to Berlin. I was a bit envious (even though I did in fact cycle to Berlin 16 years ago (oh my, so long ago...)). Dinner was nice, but I did not stay too long.

Day 2

Tuesday started with Richard Jelinek first showing us his rather impressive off-grid house (or "A technocrat's house - 2050s standard") and the software used to automate it, before moving on to the actual topic of his talk, Perl mit AI, which turned out to be about a Perl implementation in Rust called pperl, developed with massive LLM support. It seems to be rather fast. As with PerlOnJava, I'm not sure I really want to use an alternative implementation (and of course pperl is currently marked as "Research Preview — WORK IN PROGRESS — please do not use in production environments"), but maybe I will give it a try when it's more stable. Especially since we now have containers, which make setting up experimental environments much easier.

Then Alexander Thurow shared his Thoughts on (Modern?) Software Development: lots of inspirational (or depressing) quotes and some LLM criticism that had been lacking at the workshop (until now..)

Next up was Lars (again) with a talk on Hierarchien in SQL, where we got a very nice derivation of how to get from handcrafted SQL to recursive CTEs for querying hierarchical graph data (DAGs). I have used (and even talked about) recursive CTEs a few times, but this was by far the best explanation I've ever seen. And we got to see some geizhals internals :-)

Sören Laird Sörries informed us about Digitale Souveränität und Made in Europe, and I'm quite proud to say that I'm already using a lot of the services he showed (mailbox, Hetzner, fairphone, ..), though we could still do better (e.g. one project is still using a bunch of Google services).

Then Salve J. Nilsen (whose name I promise not to mangle anymore) showed us his thoughts on What might a CPAN Steward organization look like?. We already talked about this topic a few weeks ago (in preparation for the Perl Toolchain Summit), so I was not paying a lot of attention (and instead hacked up a few short slides for a lightning talk) - sorry. But in the discussion afterwards Salve clarified that the Cyber Resilience Act applies to all "CE-marked products", and that even a Perl API backend that powers a mobile app running on a smartphone counts as part of a "CE-marked product". Before that I was under the assumption that only software running on actual physical products needs the attestation. So we should really get this Steward organization going and hopefully even profit from it!

The last slot of the day was filled with the Lightning Talks, hosted by R Geoffrey Avery and his gong. I submitted two and got a "double domm" slot, where I hurried through my microblog pipeline (on POSSE and getting not-twitter-tweets from my command line via some gitolite to my self-hosted microblog and then on to Mastodon), followed by taking up Lars' challenge to show stuff from my own "Nähkästchen": in my case gopass and tofi (and some bash pipes) for an easy password manager.

We had the usual mixture of fun and/or informative short talks, but the highlight for me was Sebastian Gamaga, who gave his first talk at a Perl event, on How I learned about the problem differentiating a Hash from a HashRef. Good slides, well executed, and showing a problem that I'm quite sure everybody encountered when first learning Perl (I have to admit I also sometimes mix up hash/ref and regular/curly braces when setting up a hash). Looking forward to a "proper" talk by Sebastian next year :-)

This evening I skipped having dinner with the Perl people, because I had to finish some slides for Wednesday and wanted to hang out with my non-Perl friends. But I've heard that a bunch of people had fun bouldering!

Day 3

I had a job call at 10:00 and (unfortunately) a bug to fix, so I missed the three talks of the morning session and only arrived at the venue during the lunch break, in time for Paul Cochrane talking about Getting FIT in Perl (and fit he did get, too!). I've only recently started to collect exercise data (I got a sports watch for my birthday), and being able to extract and analyze the data using my own software is indeed something I plan to do.

Next up was Julien Fiegehenn on Turning humans into SysAdmins, where he showed us how he used LLMs to adapt his developer mentorship framework to also work for sysadmins, and how he got them (the LLMs, not the fresh sysadmins) to differentiate between Julian and Julien (among other things..)

For the final talk it was my turn again: Deploying Perl apps using Podman, make & gitlab. I'm not too happy with the slides, as I had to rush a bit to finish them and did not properly highlight all the important points. But it still went well (enough), and it seemed a few people found one of the main points (using bash/make in GitLab CI instead of specifying all the steps directly in .gitlab-ci.yml) useful.

Then Max spoke the closing words and announced the location of next year's German Perl Workshop, which will take place in Hannover! Nice, I've never been there and plan to attend (and maybe join Paul on a bike ride there?)

Summary

As usual, a lot of thanks to the sponsors, the speakers, the organizers and the attendees. Thanks for making this nice event possible!

(dxcii) 18 great CPAN modules released last week

r/perl
Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater than or equal to 12.

  1. App::cpanminus - get, unpack, build and install modules from CPAN
    • Version: 1.7049 on 2026-03-17, with 286 votes
    • Previous CPAN version: 1.7048 was 1 year, 4 months, 18 days before
    • Author: MIYAGAWA
  2. App::HTTPThis - Export the current directory over HTTP
    • Version: v0.11.1 on 2026-03-16, with 25 votes
    • Previous CPAN version: v0.11.0 was 2 days before
    • Author: DAVECROSS
  3. CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
    • Version: 20260318.001 on 2026-03-18, with 25 votes
    • Previous CPAN version: 20260315.002 was 3 days before
    • Author: BRIANDFOY
  4. Crypt::Passphrase - A module for managing passwords in a cryptographically agile manner
    • Version: 0.022 on 2026-03-21, with 17 votes
    • Previous CPAN version: 0.021 was 1 year, 1 month, 17 days before
    • Author: LEONT
  5. DBD::Pg - DBI PostgreSQL interface
    • Version: 3.20.0 on 2026-03-19, with 103 votes
    • Previous CPAN version: 3.19.0 was 4 days before
    • Author: TURNSTEP
  6. Git::CPAN::Patch - Patch CPAN modules using Git
    • Version: 2.5.2 on 2026-03-18, with 45 votes
    • Previous CPAN version: 2.5.1
    • Author: YANICK
  7. JSON - JSON (JavaScript Object Notation) encoder/decoder
    • Version: 4.11 on 2026-03-22, with 109 votes
    • Previous CPAN version: 4.10 was 3 years, 5 months, 13 days before
    • Author: ISHIGAKI
  8. JSON::PP - JSON::XS compatible pure-Perl module.
    • Version: 4.18 on 2026-03-20, with 22 votes
    • Previous CPAN version: 4.17_01 was 2 years, 7 months, 21 days before
    • Author: ISHIGAKI
  9. Log::Any - Bringing loggers and listeners together
    • Version: 1.719 on 2026-03-16, with 69 votes
    • Previous CPAN version: 1.718 was 9 months, 14 days before
    • Author: PREACTION
  10. MetaCPAN::API - (DEPRECATED) A comprehensive, DWIM-featured API to MetaCPAN
    • Version: 0.52 on 2026-03-16, with 26 votes
    • Previous CPAN version: 0.51 was 8 years, 9 months, 9 days before
    • Author: HAARG
  11. Module::CoreList - what modules shipped with versions of perl
    • Version: 5.20260320 on 2026-03-20, with 44 votes
    • Previous CPAN version: 5.20260308 was 11 days before
    • Author: BINGOS
  12. Net::SSLeay - Perl bindings for OpenSSL and LibreSSL
    • Version: 1.96 on 2026-03-21, with 27 votes
    • Previous CPAN version: 1.95_03 
    • Author: CHRISN
  13. OpenGL - Perl bindings to the OpenGL API, GLU, and GLUT/FreeGLUT
    • Version: 0.7009 on 2026-03-19, with 15 votes
    • Previous CPAN version: 0.7008
    • Author: ETJ
  14. SPVM - The SPVM Language
    • Version: 0.990150 on 2026-03-19, with 36 votes
    • Previous CPAN version: 0.990149
    • Author: KIMOTO
  15. Syntax::Construct - Explicitly state which non-feature constructs are used in the code.
    • Version: 1.045 on 2026-03-19, with 14 votes
    • Previous CPAN version: 1.044 was 10 days before
    • Author: CHOROBA
  16. TimeDate - Date and time formatting subroutines
    • Version: 2.35 on 2026-03-21, with 28 votes
    • Previous CPAN version: 2.34_03 was 1 day before
    • Author: ATOOMIC
  17. Unicode::UTF8 - Encoding and decoding of UTF-8 encoding form
    • Version: 0.70 on 2026-03-19, with 20 votes
    • Previous CPAN version: 0.69
    • Author: CHANSEN
  18. YAML::Syck - Fast, lightweight YAML loader and dumper
    • Version: 1.39 on 2026-03-21, with 18 votes
    • Previous CPAN version: 1.38
    • Author: TODDR

TPRC Talk Submission Deadline extended

Perl Foundation News

We are re-opening the talk submissions with a new deadline of April 21, 2026. Please submit your 20-minute and 50-minute talks at https://tprc.us/. Let us know if you need help with your submission or your talk development; we have mentors who can listen to your ideas and guide you.

We are also taking submissions for interactive sessions. These are sessions that have a theme, but invite maximum audience participation; sessions which take advantage of the gathering of community members that have a wide range of experience and ideas to share. You would introduce the theme and moderate the session. If you have ideas for interactive sessions, but don’t want to moderate them yourself, please go to our wiki to enter your ideas, and maybe someone else will pick up the ball!

As you know, The Weekly Challenge primarily focuses on Perl and Raku. During Week #018, we received solutions to The Weekly Challenge - 018 from Orestis Zekai in Python. It was a pleasant surprise to receive solutions in something other than Perl and Raku. Ever since, regular team members have also been contributing in other languages like Ada, APL, Awk, BASIC, Bash, Bc, Befunge-93, Bourne Shell, BQN, Brainfuck, C3, C, CESIL, Chef, COBOL, Coconut, C Shell, C++, Clojure, Crystal, CUDA, D, Dart, Dc, Elixir, Elm, Emacs Lisp, Erlang, Excel VBA, F#, Factor, Fennel, Fish, Forth, Fortran, Gembase, Gleam, GNAT, Go, GP, Groovy, Haskell, Haxe, HTML, Hy, Idris, IO, J, Janet, Java, JavaScript, Julia, K, Kap, Korn Shell, Kotlin, Lisp, Logo, Lua, M4, Maxima, Miranda, Modula 3, MMIX, Mumps, Myrddin, Nelua, Nim, Nix, Node.js, Nuweb, Oberon, Octave, OCaml, Odin, Ook, Pascal, PHP, PicoLisp, Python, PostgreSQL, Postscript, PowerShell, Prolog, R, Racket, Rexx, Ring, Roc, Ruby, Rust, Scala, Scheme, Sed, Smalltalk, SQL, Standard ML, SVG, Swift, Tcl, TypeScript, Typst, Uiua, V, Visual BASIC, WebAssembly, Wolfram, XSLT, YaBasic and Zig.

Getting a 500 error on my website when running CGI script

r/perl

I am just curious if anyone can suggest anything else I might try to resolve an issue.

Since the scheduled server maintenance on the 19th, none of my CGI scripts work. I have confirmed the code is fine (even though I have not made any changes in 5+ years). If I copy my site files to a XAMPP server, they run fine. But tech support is unable to find anything wrong and is just throwing it back at me because it is third-party software.

I have asked them to confirm that my owner permissions are valid and that the perl library is intact, but have not heard back yet. When I attempt to run any of my CGI scripts, the server generates a 500 error. I have checked everything I can think of on my end. I have 755 permissions set. My files have all been uploaded in ASCII FTP mode. All of my HTML pages load. I have confirmed all of my shebang lines are correct (even though I have not edited them recently).

I am really just wondering if there is anything else I can do to attempt to resolve the issue?

submitted by /u/DaddyRobA

Still on the [b]leading edge

Perl Hacks

About eighteen months ago, I wrote a post called On the Bleading Edge about my decision to start using Perl’s new class feature in real code. I knew I was getting ahead of parts of the ecosystem. I knew there would be occasional pain. I decided the benefits were worth it.

I still think that’s true.

But every now and then, the bleading edge reminds you why it’s called that.

Recently, I lost a couple of days to a bug that turned out not to be in my code, not in the module I was installing, and not even in the module that module depended on — but in the installer’s understanding of modern Perl syntax.

This is the story.

The Symptom

I was building a Docker image for Aphra. As part of the build, I needed to install App::HTTPThis, which depends on Plack::App::DirectoryIndex, which depends on WebServer::DirIndex.

The Docker build failed with this error:

#13 45.66 --> Working on WebServer::DirIndex
#13 45.66 Fetching https://www.cpan.org/authors/id/D/DA/DAVECROSS/WebServer-DirIndex-0.1.3.tar.gz ... OK
#13 45.83 Configuring WebServer-DirIndex-v0.1.3 ... OK
#13 46.21 Building WebServer-DirIndex-v0.1.3 ... OK
#13 46.75 Successfully installed WebServer-DirIndex-v0.1.3
#13 46.84 ! Installing the dependencies failed: Installed version (undef) of WebServer::DirIndex is not in range 'v0.1.0'
#13 46.84 ! Bailing out the installation for Plack-App-DirectoryIndex-v0.2.1.

Now, that’s a deeply confusing error message.

It clearly says that WebServer::DirIndex was successfully installed. And then immediately says that the installed version is undef and not in the required range.

At this point you start wondering if you’ve somehow broken version numbering, or if there’s a packaging error, or if the dependency chain is wrong.

But the version number in WebServer::DirIndex was fine. The module built. The tests passed. Everything looked normal.

So why did the installer think the version was undef?

When This Bug Appears

This only shows up in a fairly specific situation:

  • A module uses modern Perl class syntax
  • The module defines a $VERSION
  • Another module declares a prerequisite with a specific version requirement
  • The installer tries to check the installed version without loading the module
  • It uses Module::Metadata to extract $VERSION
  • And the version of Module::Metadata it is using doesn’t properly understand class

If you don’t specify a version requirement, you’ll probably never see this. Which is why I hadn’t seen it before. I don’t often pin minimum versions of my own modules, but in this case, the modules are more tightly coupled than I’d like, and specific versions are required.

So this bug only appears when you combine:

modern Perl syntax + version checks + older toolchain

Which is pretty much the definition of “bleading edge”.

The Real Culprit

The problem turned out to be an older version of Module::Metadata that had been fatpacked into cpanm.

cpanm uses Module::Metadata to inspect modules and extract $VERSION without loading the module. But the older Module::Metadata didn’t correctly understand the class keyword, so it couldn’t work out which package the $VERSION belonged to.

So when it checked the installed version, it found… nothing.

Hence:

Installed version (undef) of WebServer::DirIndex is not in range ‘v0.1.0’

The version wasn’t wrong. The installer just couldn’t see it.

As an aside, you may find it amusing to hear an anecdote from my attempts to debug this problem.

I spun up a new Ubuntu Docker container, installed cpanm and tried to install Plack::App::DirectoryIndex. Initially, this gave the same error message. At least the problem was easily reproducible.

I then ran code that was very similar to the code cpanm uses to work out what a module’s version is.

$ perl -MModule::Metadata -E'say Module::Metadata->new_from_module("WebServer::DirIndex")->version'

This displayed an empty string. I was really onto something here. Module::Metadata couldn’t find the version.

I was using Module::Metadata version 1.000037 and, looking at the change log on CPAN, I saw this:

1.000038  2023-04-28 11:25:40Z
  - detects "class" syntax

I installed 1.000038 and reran my command:
$ perl -MModule::Metadata -E'say Module::Metadata->new_from_module("WebServer::DirIndex")->version'
0.1.3

That seemed conclusive. Excitedly, I reran the Docker build.

It failed again.

You’ve probably worked out why. But it took me a frustrating half an hour to work it out.

cpanm doesn’t use the installed version of Module::Metadata. It uses its own, fatpacked version. Updating Module::Metadata wouldn’t fix my problem.

The Workaround

I found a workaround. That was to add a redundant package declaration alongside the class declaration, so older versions of Module::Metadata can still identify the package that owns $VERSION.

So instead of just this:

class WebServer::DirIndex {
  our $VERSION = '0.1.3';
  ...
}

I now have this:

package WebServer::DirIndex;

class WebServer::DirIndex {
  our $VERSION = '0.1.3';
  ...
}

It looks unnecessary. And in a perfect world, it would be unnecessary.

But it allows older tooling to work out the version correctly, and everything installs cleanly again.

The Proper Fix

Of course, the real fix was to update the toolchain.

So I raised an issue against App::cpanminus, pointing out that the fatpacked Module::Metadata was too old to cope properly with modules that use class.

Tatsuhiko Miyagawa responded very quickly, and a new release of cpanm appeared with an updated version of Module::Metadata.

This is one of the nice things about the Perl ecosystem. Sometimes you report a problem and the right person fixes it almost immediately.

When Do I Remove the Workaround?

This leaves me with an interesting question.

The correct fix is “use a recent cpanm”.

But the workaround is “add a redundant package line so older tooling doesn’t get confused”.

So when do I remove the workaround?

The answer is probably: not yet.

Because although a fixed cpanm exists, that doesn’t mean everyone is using it. Old Docker base images, CI environments, bootstrap scripts, and long-lived servers can all have surprisingly ancient versions of cpanm lurking in them.

And the workaround is harmless. It just offends my sense of neatness slightly.

So for now, the redundant package line stays. Not because modern Perl needs it, but because parts of the world around modern Perl are still catching up.

Life on the Bleading Edge

This is what life on the bleading edge actually looks like.

Not dramatic crashes. Not language bugs. Not catastrophic failures.

Just a tool, somewhere in the install chain, that looks at perfectly valid modern Perl code and quietly decides that your module doesn’t have a version number.

And then you lose two days proving that you are not, in fact, going mad.

But I’m still using class. And I’m still happy I am.

You just have to keep an eye on the whole toolchain — not just the language — when you decide to live a little closer to the future than everyone else.

The post Still on the [b]leading edge first appeared on Perl Hacks.

I am currently re-visiting the documentation for Perl's CGI module. In the section about the param() method, there is a warning about using that method in a list context; see here. The warning literally reads:

Warning - calling param() in list context can lead to vulnerabilities if you do not sanitise user input as it is possible to inject other param keys and values into your code. [...]

Then there is an example of what we should not do:

my %user_info = (
  id   => 1,
  name => $q->param('name'),
);

I have understood the warning and the code except one thing:

How can calling param() in list context inject other "param keys" (as the citation calls it) into my code? Could somebody please give an example of a query string or of POST data that lets me reproduce this?

The question is specifically about parameter keys, not about possible multiple values for the same key.
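For context, my current understanding (which may be wrong, hence the question) is that extra values supplied for name get flattened into the hash as new key/value pairs. Here is a hypothetical Python analogue of that flattening, since a Perl hash is built from a flat list:

```python
# Hypothetical Python analogue of Perl's list flattening.
# Suppose the query string is: ?name=foo&name=is_admin&name=1
# In list context, $q->param('name') would return all three values.
param_name = ["foo", "is_admin", "1"]

# my %user_info = (id => 1, name => $q->param('name'));
# flattens to the list: ("id", 1, "name", "foo", "is_admin", "1")
flat = ["id", 1, "name"] + param_name

# Perl pairs up that flat list to build the hash:
user_info = dict(zip(flat[0::2], flat[1::2]))
print(user_info)  # {'id': 1, 'name': 'foo', 'is_admin': '1'}
```

If that is the mechanism the warning means, the attacker-controlled extra values become new keys (here is_admin) rather than just extra values.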

Abstract

Even if you’re skeptical about AI writing your code, you’re leaving time on the table.

Many developers have been slow to adopt AI in their workflows, and that’s understandable. As AI coding assistants become more capable, the anxiety is real - nobody wants to feel like they’re training their replacement. But we’re not there yet. Skilled developers who understand logic, mathematics, business needs and user experience will be essential to guide application development for the foreseeable future.

The smarter play is to let AI handle the parts of the job you never liked anyway - the documentation, the release notes, the boilerplate tests - while you stay focused on the work that actually requires your experience and judgment. You don’t need to go all in on day one. Here are six places to start.


1. Unit Test Writing

Writing unit tests is one of those tasks most developers know they should do more of and few enjoy doing. It’s methodical, time-consuming, and the worst time to write them is when the code reviewer asks if they pass.

TDD is a fine theory. In practice, writing tests before you’ve vetted your design means rewriting your tests every time the design evolves - which is often. Most experienced developers write tests after the design has settled, and that’s a perfectly reasonable approach.

The important thing is that they get written at all. Even a test that simply validates use_ok(qw(Foo::Bar)) puts scaffolding in place that can be expanded when new features are added or behavior changes. A placeholder is infinitely more useful than nothing.

This is where AI earns its keep. Feed it a function or a module and it will identify the code paths that need coverage - the happy path, the edge cases, the boundary conditions, the error handling. It will suggest appropriate test data sets including the inputs most likely to expose bugs: empty strings, nulls, negative numbers, off-by-one values - the things a tired developer skips.

You review it, adjust it, own it. AI did the mechanical work of thinking through the permutations. You make sure it reflects how your code is actually used in the real world.
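As a small illustration (a hypothetical function and tests, not from any particular codebase), the kind of edge-case checks an assistant will typically propose look like this:

```python
# Hypothetical example: a small function plus the edge-case tests
# an AI assistant will typically suggest for it.
def normalize_username(raw):
    """Trim whitespace and lowercase a username; reject empty input."""
    if raw is None or not raw.strip():
        raise ValueError("empty username")
    return raw.strip().lower()

# Happy path
assert normalize_username("Alice") == "alice"
# Boundary inputs a tired developer might skip
assert normalize_username("  Bob  ") == "bob"
assert normalize_username("X") == "x"
# Error handling: None, empty, and whitespace-only input should raise
for bad in (None, "", "   "):
    try:
        normalize_username(bad)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

None of these cases are hard to think of; the point is that the assistant enumerates them mechanically so you only have to review them.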


2. Documentation

“Documentation is like sex: when it’s good, it’s very, very good; and when it’s bad, it’s better than nothing.” - said someone somewhere.

Of course, there are developers who justify their disdain for writing documentation with one of two arguments (or both):

  1. The code is the documentation
  2. Documentation is wrong the moment it is written

It is true that the single source of truth for what code actually does is the code itself. But documentation should be about what the code is supposed to do. When the two diverge, it’s either a defect in the software or a misunderstanding of the business requirement captured in the documentation.

Code that changes rapidly is difficult to document, but the intent of the code is not. Especially now with AI. It is trivial to ask AI to review the current documentation and align it with the code, negating point #2.

Feed AI a module and ask it to generate POD. It will describe what the code does. Your job is to verify that what it does is what it should do - which is a much faster review than writing from scratch.


3. Release Notes

If you’ve read this far you may have noticed the irony - this post was written by someone who just published a blog post about automating release notes with AI. So consider this section field-tested.

Release notes sit at the intersection of everything developers dislike: writing prose, summarizing work they’ve already mentally moved on from, and doing it with enough clarity that non-developers can understand what changed and why it matters. It’s the last thing standing between you and shipping.

The problem with feeding a git log to AI is that git logs are written for developers in the moment, not for readers after the fact. “Fix the thing” and “WIP” are not useful release note fodder.

The better approach is to give AI real context - a unified diff, a file manifest, and the actual source of the changed files. With those three inputs AI can identify the primary themes of a release, group related changes, and produce structured notes that actually reflect the architecture rather than just the line changes.

A simple make release-notes target can generate all three assets automatically from your last git tag. Upload them, prompt for your preferred format, and you have a first draft in seconds rather than thirty minutes. Here’s how I built it.
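As a sketch of what such a target could look like (the file names and tag convention here are my own assumptions, not the author’s actual implementation):

```make
# Hypothetical sketch: collect the three inputs for AI-drafted release notes.
LAST_TAG := $(shell git describe --tags --abbrev=0)

release-notes:
	git diff $(LAST_TAG)..HEAD > release.diff                  # unified diff
	git diff --name-only $(LAST_TAG)..HEAD > manifest.txt      # file manifest
	git diff --name-only $(LAST_TAG)..HEAD | xargs cat > changed-sources.txt  # source of changed files
```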

You still edit it. You add color, context, and the business rationale that only you know. But the mechanical work of reading every diff and turning it into coherent prose? Delegated.


4. Bug Triage

Debugging can be the most frustrating and the most rewarding experience for a developer. Most developers are predisposed to love a puzzle, and there is nothing more puzzling than a race condition or a dangling pointer. Even though books and posters have been written about debugging, it is sometimes difficult to know exactly where to start.

Describe the symptoms, share the relevant code, toss your theory at it. AI will validate or repudiate without ego - no colleague awkwardly telling you you’re wrong. It will suggest where to look, what telemetry to add, and before you know it you’re instrumenting the code that should have been instrumented from the start.

AI may not find your bug, but it will be a fantastic bug buddy.


5. Code Review

Since I started using AI, I’ve found that one of the most valuable things I can do with it is give it my first draft of a piece of code. Anything more than a dozen or so lines is fair game.

Don’t waste your time polishing a piece of lava that just spewed from your noggin. There’s probably some gold in there and there’s definitely some ash. That’s ok. You created the framework for a discussion on design and implementation. Before you know it you have settled on a path.

AI’s strength is pattern recognition. It will recognize when your code needs to adopt a different pattern or when you nailed it. Get feedback. Push back. It’s not a one-way conversation. Question the approach, flag the inconsistencies that don’t feel right - your input into that review process is critical in evolving the molten rock into a solid foundation.


6. Legacy Code Deciphering

What defines “legacy code”? It’s a great question and a hard one to answer. And not to get too racy again, but as has been said of pornography: I can’t exactly define it, but I know it when I see it.

Fortunately (and yes, I do mean fortunately) I have been involved in maintaining legacy code since the day I started working for a family-run business in 1998. The code I maintained there was born in the late ’70s and still, to this day, generates millions of dollars. You will never learn more about coding than by maintaining legacy code.

These are the major characteristics of legacy code from my experience (in order of visibility):

  1. It generates so much money for a company they could not possibly think of it being unavailable.
  2. It is monolithic and may in fact consist of modules in multiple languages.
  3. It has grown organically over the decades.
  4. It is more than 10 years old.
  5. The business rules are undocumented, opaque, and can only be discerned by a careful reading of the software. Product managers and users think they know what the software does, but probably do not have the entire picture.
  6. It cannot easily be re-written (by humans) because of #5.
  7. It contains as much dead code, no longer serving any useful purpose, as it does useful code.

I once maintained a C program that searched an ISAM database of legal judgments. The code had been ported from a proprietary in-memory binary tree implementation and was likely older than most of the developers reading this post. The business model was straightforward and terrifying - miss a judgment and we indemnify the client. Every change had to be essentially idempotent. You weren’t fixing code, you were performing surgery on a patient who would sue you if the scar was in the wrong place.

I was fortunate - there were no paydays for a client on my watch. But I wish I’d had AI back then. Not to write the code. To help me read it.

Now, where does AI come in? Points 5, 6, and definitely 7.

Throw a jabberwocky of a function at AI and ask it what it does. Not what it should do - what it actually does. The variable names are cryptic, the comments are either missing or lying, and the original author left the company during the Clinton administration. AI doesn’t care. It reads the code without preconception and gives you a plain English explanation of the logic, the assumptions baked in, and the side effects you never knew existed.

That explanation becomes your documentation. Those assumptions become your unit tests. Those side effects become the bug reports you never filed because you didn’t know they were bugs.

Dead code is where AI particularly shines. Show it a module and ask what’s unreachable. Ask what’s duplicated. Ask what hasn’t been touched in a decade but sits there quietly terrifying anyone who considers deleting it. AI will give you a map of the minefield so you can walk through it rather than around it forever.

Along the way AI will flag security vulnerabilities you never knew were there - input validation gaps, unsafe string handling, authentication assumptions that made sense in 1998 and are a liability today. It will also suggest where instrumentation is missing, the logging and telemetry that would have made every debugging session for the last twenty years shorter. You can’t go back and add it to history, but you can add it now before the next incident.

The irony of legacy code is that the skills required to understand it - patience, pattern recognition, the ability to hold an entire system in your head - are exactly the skills AI complements rather than replaces. You still need to understand the business. AI just helps you read the hieroglyphics.


Conclusion

None of the six items on this list require you to hand over the keys. You are still the architect, the decision maker, the person who understands the business and the user. AI is the tireless assistant who handles the parts of the job that drain your energy without advancing your craft.

The developers who thrive in the next decade won’t be the ones who resisted AI the longest. They’ll be the ones who figured out earliest how to delegate the tedious, the mechanical, and the repetitive - and spent the time they saved on the work that actually requires a human.

You don’t have to go all in. Start with a unit test. Paste some legacy code and ask AI to explain it or document it. Think of AI as that senior developer you go to with the tough problems - the one who has seen everything, judges nothing, and is available at 3am when the production system is on fire.

Only this one never sighs when you knock on the door.


Thank you Team PWC for your continuous support and encouragement.

ANNOUNCE: Perl.Wiki V 1.42 & 2 CPAN::Meta* modules

blogs.perl.org

Available now from my Wiki Haven: Perl.Wiki.html V 1.42 & the JSTree version.

Also, I've uploaded to CPAN 2 modules:

1: CPAN::MetaCurator V 1.13

2: CPAN::MetaPackager V 1.00

Q & A:

1: What is the relationship between these 2 modules?
CPAN::MetaPackager's scripts/build.db.sh inputs
a recent version of the Perl file 02packages.details.txt,
and outputs an SQLite file called cpan.metapackager.sqlite (15Mb).
The latter ships with the module.

Future versions of this module will use the differences between the db
and newer versions of 02packages.details.txt to do the usual thing of
add/change/delete entries in cpan.metapackager.sqlite.

2: CPAN::MetaCurator's scripts/build.db.sh inputs
a JSON export from Perl.Wiki.html called tiddlers.json,
and outputs an SQLite file called cpan.metacurator.sqlite (15Mb).
The latter ships with the module.

Then scripts/export.tree.sh outputs a file called cpan.metacurator.tree.html.
This latter file is a JSTree version of Perl.Wiki.html, as mentioned above.

Note: By setting the env var INCLUDE_PACKAGES to 1 before you run export.tree.sh,
the code will read the cpan.metapackager.sqlite table 'packages', which changes
the output tree a bit, since the code then knows the names of modules released
to CPAN.

Is all my 2026 blogging going to be about doing more random stuff with Claude? No, I promise. But I’m still working through my backlog of “and the next thing I tried to put it through its paces”, so here it comes. This time, more drudgery.

Project Four: MIME-Lite to Dist::Zilla

Here’s a quick one before I post beefy Project Five tomorrow.

A couple weeks ago, I got a pull request against MIME-Lite. That’s a pretty popular Perl email library that I maintain, and which Fastmail uses at least a little. The contributor had supplied a patch to use GitHub Actions to run the tests on every PR. Great!

Thing was, I already have a standard way of doing that, and I didn’t really want to pick up another way. The other thing was, my standard way relies on the distribution being built with Dist::Zilla. It’s not hard to convert from not using Dist::Zilla to using it, but it’s not trivial. And I don’t love MIME::Lite, so I wasn’t keen to spend a bunch of time on it.

I fired up Claude, pretty certain that I was going to get a crappy result.

rjbs: Convert this distribution to building with Dist::Zilla, in the fashion of most RJBS-authored distributions, like Email::MIME.

claude: I’ll explore both the current MIME-Lite distribution and Email::MIME as a reference for the Dist::Zilla conversion pattern.

In about 20 minutes, it was done. Now, when I say “it”, I mean Claude. Claude wrote a big commit that wasn’t good enough. MIME-Lite had a bunch of little quirks, like optional dependencies and a slightly not-rjbs-standard documentation structure. I spent just a few minutes polishing things up and it was good to go.

This was a good example of needing some expert involvement. Understanding exactly what to do was going to require way too much context that couldn’t be found in the repository or in “understanding” of published copies of things. But some exploration of MIME-Lite and Email-MIME let Claude do a bunch of the work so I didn’t have to grumble my way through it. As I’ve seen a few times, it also did work I would not have bothered with, like converting the changelog to a more consistent format.

One funny nit… the original changelog had:

=item Version 1.101   (1997/03/01)

Baseline code.
Originally created: 11 December 1996.  Ho ho ho.

The new one:

1.101     1997-03-01
        - Baseline code. Originally created: 11 December 1996.

I guess Claude is part of the war on Christmas.

Claude also hallucinated author names weirdly and screwed up the Copyright. You can’t just let it do whatever it wants with no oversight! It was pretty bad, and I can definitely see somebody getting into hot water this way. This was probably the worst Claude experience I had that didn’t end in scrapping things. I think I probably could’ve done this one myself faster and with fewer errors. As the world’s leading Dist::Zilla expert… probably shouldn’t be a surprise.

With the conversion done, I dropped the standard CI workflow in place and… found problems in my conversion. Just what CI is for. Now it’s all sorted and happy and any new PRs will get test runs.

You can read my transcript of this conversation, but it’s a bit odd. I stopped and resumed the session at one point, and Claude waited at a permission prompt for 12 hours while I slept or something.

Originally, this was going to be the last in my series of stuff I did with Claude that I found compelling, but… the news, good or bad, is that I’ll be posting at least one more soon. This one, though, is definitely the piece of work that convinced me that Claude was useful for more than mechanical transformation.

Project Five: ClaudeLog

In some of my previous posts, I posted links to transcripts of my chats with Claude, including its internal monologue, its tools used, and other stuff that you won’t see if you just copy text out of the viewer. Those transcripts were generated by a tool that I built with Claude, which I’m going to talk about, now.

I think that the experience of using Claude in its current form (as opposed to what we saw from coding agents a year ago) is fairly stunning. It sometimes screws up, it sometimes takes four passes at doing things that any normal person would routinely do in one… but on the other hand, it can generate large, complex, readable, correct pieces of software from a standing start. Saying “it’s real good, folks” is not sufficient. I think that to understand what it can do, you should see for yourself. This is not to say that there are no reasons to hesitate or to worry about the serious impact that LLMs are having, and will continue to have. But one criticism I continue to see is “these things spew out unmaintainable nonsense”, which is not a claim that really stands up to much real use.

Also, one friend recently said to me, “I want to be a programmer, not a copy editor.” I had to explain that while, yes, you do need to read and think about possible errors in agent-generated code, the experience is much more one of design and construction than of proofreading.

Since not everybody is going to say, “Hey, Rik, let’s pair and look at Claude,” and because I am not going to pair with every rando who might ask, I thought it would be good to provide a thorough transcript. I knew that Claude keeps a detailed log of its sessions (even though, amusingly, Claude claimed it knew of no such thing).

I had looked through the JSONL files in which sessions were stored, and the data looked a bit messy, but probably sufficient. Without giving it too much investigation, I opened up Claude Code and said…

I want to provide transcripts of my interactions with Claude to colleagues who are learning to use the system. I understand that my ~/.claude directory contains transcripts of the chat sessions. Please build a TUI application that:

  1. lets me drill down through projects to individual sessions, showing previews of the sessions as I go
  2. lets me select a session to be transcribed for my use
  3. produces a Markdown summary of the session that I can embed in a document

While your general instructions say to prefer Perl, in this case I would like you to use whatever language you believe will achieve the results most successfully with a pleasant terminal interface.

You can, of course, read the transcription of this whole conversation, produced by the program that the conversation eventually led to. There’s a lot of experimentation, a fair bit of stumbling, and overall you can see how I discover what it is I want while I’m building it. This is normal development, but…

In normal development with such a significant “discovery” aspect, it’s very common to spend a lot of time upshifting and downshifting. That is, first I imagine the general structure of what I want. I write a pseudo-code outline of the high-level code. Then I begin converting individual pieces into real code. Almost continuously, I’m shifting between design thinking and implementation. These are (for me, and I suspect for others) distinct ways of thinking, and while it’s not “context switching”, there is, I think, an analogous overhead.

Using Claude, I am really focusing on one of those angles at a time. I started with “here is a very rough outline” and within 20 minutes, I had a working program. I never, ever had to switch into implementation thinking, to get there. Then I had many tight, quick exchanges in the form, “This is wrong, change it” or “I’m ready to describe the next feature”.

At the top of the transcript, you’ll see this line:

Duration: 7h 50m (3h active)

This means that from the first to last event in the transcript, about eight hours passed on the clock, but I was only engaged in the chat for about three. Probably I took a long lunch in there, or maybe worked on something more pressing for a while. Or I just stopped and thought about it, or spent time reading transcripts and thinking about what could be better.

By the end of the day, I had a really useful program. The HTML it was generating was definitely fit for purpose. On the other hand, I made the mistake of looking at the code…

Look, it wasn’t absolutely impenetrable. It just felt very, very amateurish. It got the job done, and I could read it, but there was very little abstraction. There was nearly no separation of concerns. There were no real “layers” to speak of. And when layers did exist, they often duplicated work.

Hardly surprising: if you imagine Claude as often analogous to a junior programmer (not a great analogy, but often useful), and you imagine me as the lousy boss who kept saying, “Implement this feature immediately and I don’t care about code quality”, of course there was a ton of debt. And of course it racked up fast, because I was racking up features fast. The question I needed to answer was Will Claude’s code always be unmaintainable after a short while?

I decided to answer that the next day. Instead of adding more features, I’d just look at fixing the code. It was already clear to me how the code should’ve been structured. I just didn’t tell Claude about it, because I was focused on features. Why didn’t Claude architect it well from the beginning? Because nobody asked it to. This seems sort of stupid, but it’s often how this stuff works.

So, the next day, I started a new session. There were two key parts of this:

I am very happy with the output of this program. I would now like to begin some serious refactoring for maintainability. The first step should be fairly simple. I would like you to split out the code that takes a session JSONL file and emits HTML. Put that in its own module, so that the “build HTML from JSONL” can be called without the session browser being involved.

…and…

Next, I would like to separate out the generation of HTML from the interpretation of the JSONL file. I propose the following:

  • a SessionFile object that represents the log itself, and is created with a static method like session_file = SessionFile.from_path(filepath)
  • a SessionLog object that represents a sequence of SessionTurn
  • a SessionFile has a SessionLog, which is computed lazily when requested
  • a SessionTurn is either a UserTurn, a ClaudeTurn, a SystemTurn, or an IdleTurn
  • non-IdleTurn turns have a sequence of TurnEvents, which are things like the chat, the tool use, the thinking, and so on
  • the HTML generator is passed an already-computed SessionLog, which it then just formats, rather than formatting and event-aggregating all at once

This is a significant change. Make a plan, think hard, and ask me any questions you need to ask up front.
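That proposed model could be sketched roughly like this (my own hypothetical Python rendering; the tool’s actual language and types may differ):

```python
# Hypothetical sketch of the proposed object model, assuming Python.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TurnEvent:
    kind: str        # e.g. "chat", "tool_use", "thinking"
    payload: dict

@dataclass
class SessionTurn:
    events: List[TurnEvent] = field(default_factory=list)

class UserTurn(SessionTurn): pass
class ClaudeTurn(SessionTurn): pass
class SystemTurn(SessionTurn): pass
class IdleTurn(SessionTurn): pass   # carries no events

@dataclass
class SessionLog:
    turns: List[SessionTurn] = field(default_factory=list)

class SessionFile:
    """Represents the JSONL log file; its SessionLog is computed lazily."""
    def __init__(self, path):
        self.path = path
        self._log = None

    @classmethod
    def from_path(cls, filepath):
        return cls(filepath)

    @property
    def log(self):
        if self._log is None:
            self._log = SessionLog()   # real code would parse the JSONL here
        return self._log
```

The HTML generator would then take an already-built SessionLog and only format it, which is the separation of concerns the prompt asks for.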

There’s a bunch of other stuff that went on, but these are the big ones. You can read the whole refactoring transcript.

Claude thought for a while, then asked me some (very reasonable) questions. I answered them, and Claude produced a plan. Pretty soon, it all worked. The program still did just what it did before, but now the code made sense, and it was much easier to add some new features, because they could be added as small changes to a few layers, instead of changes to the one big ball of mud.

Part of what made this easy was that up front I said, “save a copy of input and output now, so that while you refactor, you can keep checking you didn’t change the output”. Claude did this, checking the output repeatedly as it worked.

Again, a lot of other stuff went on, but I think this is an accurate summary.

One way of looking at this as a two-stage process is “Rik should’ve made sure Claude did it right to start”, and that’s an okay position, but it brings back some of that upshifting and downshifting that I mentioned.

Another option is, “Rik should’ve given Claude standing instructions to pay more attention to architecture.” Yes, probably that would be useful, but also then I’d probably be spending more time thinking about its architecture, because it would be talking about its architecture.

I think the best view might be: because fixing the design later is at least sometimes not as costly (in time, not to mention morale), the human is given more freedom to choose the approach that they will find more satisfying or pleasant. In the end, we want the code to be correct, reliable, readable, and maintainable. The path we take to get there isn’t necessarily all that relevant. (But please leave a good git history.)

I’ll publish the code for ClaudeLog at some point.

Weekly Challenge: Counting the index

dev.to #perl

Weekly Challenge 365

Each week Mohammad S. Anwar sends out The Weekly Challenge, a chance for all of us to come up with solutions to two weekly tasks. My solutions are written in Python first, and then converted to Perl. Unless otherwise stated, Copilot (and other AI tools) have NOT been used to generate the solution. It's a great way for us all to practice some coding.

Challenge, My solutions

Task 1: Alphabet Index Digit Sum

Task

You are given a string $str consisting of lowercase English letters, and an integer $k.

Write a script to convert a lowercase string into numbers using alphabet positions (a=1 … z=26), concatenate them to form an integer, then compute the sum of its digits repeatedly $k times, returning the final value.

My solution

This is a task of two parts. The first is to take the letters from input_string (str itself is a built-in name in Python, so best avoided) and turn them into a number. For this I use string.ascii_lowercase.index(letter)+1 for each letter and append the result to the digits variable. The +1 is because alphabet positions start at 1, while index() starts at 0.

import string

def aid_sum(input_string: str, k: int) -> int:
    digits = ''
    for letter in input_string:
        try:
            digits += str(string.ascii_lowercase.index(letter)+1)
        except ValueError:
            raise ValueError(
                f"The character '{letter}' does not appear to be a lower case letter"
            )

The second part is to compute the sum of the digits the specified number of times, which is done in a loop. It's a little clunky, as Python treats strings and integers differently. If I'm down to a single digit, I exit the loop early, as further repetitions won't change the result.

    for _ in range(k):
        digits = str(sum(int(i) for i in digits))
        if len(digits) == 1:
            break

    return int(digits)

As Perl doesn't care about strings vs integers (with a few exceptions), the code is more straightforward. The index function is used to find the position of the letter in the alphabet.

use List::Util qw( sum );

sub main ( $input_string, $k ) {
    my $alphabet = join( "", "a" .. "z" );
    my $digits   = '';

    foreach my $letter ( split //, $input_string ) {
        my $idx = index( $alphabet, $letter );
        if ( $idx == -1 ) {
            die
              "The character '$letter' does not appear to be a lower case letter\n";
        }

        $digits .= $idx + 1;
    }

    foreach ( 1 .. $k ) {
        $digits = sum( split //, $digits );
    }

    say $digits;
}

Examples

$ ./ch-1.py abc 1
6

$ ./ch-1.py az 2
9

$ ./ch-1.py cat 1
6

$ ./ch-1.py dog 2
8

$ ./ch-1.py perl 3
6

Task 2: Valid Token Counter

Task

You are given a sentence.

Write a script to split the given sentence into space-separated tokens and count how many are valid words. A token is valid if it contains no digits, has at most one hyphen surrounded by lowercase letters, and at most one punctuation mark (!, ., ,) appearing only at the end.

My solution

This is a challenge where a regular expression can be used to solve the problem. In both the Python and Perl solutions, the regular expression used is ^[a-z]+(\-[a-z]+)?[!,\.]?$.

Breaking each part down:

  • ^ indicates the start of the string.
  • [a-z]+ means one or more lowercase letters.
  • (\-[a-z]+)? means optionally (the question mark) a hyphen and one or more lowercase letters.
  • [!,\.]? means optionally an exclamation mark, comma or full stop.
  • $ means the end of the string.

This is a one-liner in Python:

def valid_token_counter(input_string: str) -> int:
    return sum(
        1 for word in input_string.split()
        if re.search(r'^[a-z]+(\-[a-z]+)?[!,\.]?$', word)
    )

The Perl solution is also one line (and an extra one to display the answer). The grep function returns the number of matches in a scalar context.

sub main ($input_string) {
    my $count = grep { /^[a-z]+(\-[a-z]+)?[!,\.]?$/ } split /\s+/,
      $input_string;
    say $count;
}

Examples

$ ./ch-2.py "cat and dog"
3

$ ./ch-2.py "a-b c! d,e"
2

$ ./ch-2.py "hello-world! this is fun"
4

$ ./ch-2.py "ab- cd-ef gh- ij!"
2

$ ./ch-2.py "wow! a-b-c nice."
2

Answer

You can configure grub in several ways: to use a specific kernel, to always use the latest one, or to pick one from a selection.

One specific kernel

If you inspect /boot/grub/grub.cfg you’ll see entries like this:

# The backslashes are mine; these entries are usually one long line,
# but I've split them for blog purposes.
menuentry 'Debian GNU/Linux GNU/Linux, with Linux 6.12.8-amd64' --class debian \
 --class gnu-linux --class gnu --class os $menuentry_id_option \
 'gnulinux-6.12.8-amd64-advanced-5522bbcf-dc03-4d36-a3fe-2902be938ed4' {

You can use two identifiers to configure grub; you can use 'Debian GNU/Linux GNU/Linux, with Linux 6.12.8-amd64' or you can use the $menuentry_id_option with gnulinux-6.12.8-amd64-advanced-5522bbcf-dc03-4d36-a3fe-2902be938ed4.
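To pin GRUB to a specific entry, the usual approach is to set GRUB_DEFAULT in /etc/default/grub and regenerate the config. A sketch, assuming a Debian-style setup (adapt the entry title or id to your own grub.cfg; an entry nested inside a submenu needs the "submenu-id>entry-id" form):

```shell
# In /etc/default/grub -- select the entry by its title ...
GRUB_DEFAULT="Debian GNU/Linux GNU/Linux, with Linux 6.12.8-amd64"
# ... or, more robustly, by its $menuentry_id_option identifier:
# GRUB_DEFAULT="gnulinux-6.12.8-amd64-advanced-5522bbcf-dc03-4d36-a3fe-2902be938ed4"
```

After editing, run `sudo update-grub` (Debian's wrapper around grub-mkconfig) so the change takes effect.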

This week in PSC (218) | 2026-03-16

blogs.perl.org

All three of us attended this long meeting, covering quite a bit of ground:

  • CVE-2026-3381 obliges us to cut a 5.42.2 point release with an updated Compress::Raw::Zlib.

  • We accepted Philippe’s and Eric’s offer to handle the last dev releases of the cycle.

  • Olaf Alders requested more explicit EOL notices and has updated perlpolicy.pod and the release manager guide accordingly. We agreed that the release announcement mails for the final dev release and the stable release should also contain a brief note about the perl version which is falling out of support, and filed an issue to make this happen.

  • We sent mail to kick off the voting process for some new core team member candidates.

  • We discussed the state of Devel::PPPort. It has been outdated for some time and needs to be unstuck.

  • We would like to get customize.dat down to the only entry that cannot be removed (for version.pm). We will try to coordinate with maintainers.

  • We noticed that we missed the deprecation of multiple use VERSION declarations in the same scope, which was supposed to be fatalized in 5.44. It is too late now to do that in this dev cycle, so the warning will have to change to 5.46 and the deprecation revisited next cycle.

  • Further on the topic of overlooked deprecations, we considered how to prevent this from continuing to happen. We decided that some kind of documentation of recurring PSC obligations during a cycle is needed, which would also include things like the contentious changes freeze and release blocker triage.

  • There was not much time left for release blocker triage, so we only did a little, which surfaced no candidate blockers so far. (A few already-definite blockers have been spotted and marked outside of triage.)

[P5P posting of this summary]

Beautiful Perl feature: reusable subregexes

dev.to #perl

Beautiful Perl series

This post is part of the beautiful Perl features series.
See the introduction post for general explanations about the series.

Perl is famous for its regular expressions (in short: regexes): this technology had been known for a long time, but Perl was probably the first general-purpose programming language to integrate them into the core. Perl also augmented the domain-specific sublanguage of regular expressions with a large collection of extended patterns; some of these were later adopted by many other languages or products under the name "Perl-compatible regular expressions".

The whole territory of regular expressions is a vast topic; today we will merely focus on one very specific mechanism, namely the ability to define reusable subregexes within one regex. This powerful feature is an extended pattern not adopted yet in other programming languages, except those that rely on the PCRE library, a C library meant to be used outside of Perl, but with a regex dialect very close to Perl regexes. PHP and R are examples of such languages.

A glimpse at Perl extended patterns

Among the extended patterns of Perl regular expressions are:

  • recursive subpatterns. The matching process can recurse, so it becomes possible to match nested structures, like parentheses nested at several levels. You may have read previously in several places (even in Perl's own FAQ documentation!) that regular expressions cannot parse HTML or XML ... but with recursive patterns this is no longer true!
  • conditional expressions, where the result of a subpattern can determine where to branch for the rest of the match.

These mechanisms are extremely powerful, but quite hard to master; therefore they are seldom written directly by Perl programmers. The syntax is a bit awkward, due to the fact that when extended expressions were introduced, the syntax for new additional constructs had to be carefully chosen so as to avoid any conflict with existing constructs. Fortunately, some CPAN modules like Regexp::Common help to generate such regular expressions. Probably the most advanced of those is Damian Conway's Regexp::Grammars, an impressive tour de force able to compile recursive-descent grammars into Perl regular expressions! But grammars can also be written without any helper module: an example of a hand-written grammar can be seen in the perldata documentation, describing how Perl identifiers are parsed.

The DEFINE keyword

For this article we will narrow down to a specific construct at the intersection of recursive subpatterns and conditional expressions: the DEFINE keyword for defining named subpatterns. Just as you would split a complex algorithm into subroutines, here you can split a complex regular expression into subpatterns! The syntax is (?(DEFINE)(?<name>pattern)...). A named subpattern is invoked as (?&name), and an invocation may appear before its definition. Indeed, the good practice recommended by perlre is to start the regex with the main pattern, including its references to subpatterns, and to put the DEFINE part with the subpattern definitions at the end.

The following example, borrowed from perlretut, illustrates the use of named subpatterns for parsing floating point numbers:

/^ (?&osg)\ * ( (?&int)(?&dec)? | (?&dec) )
   (?: [eE](?&osg)(?&int) )?
 $
 (?(DEFINE)
   (?<osg>[-+]?)         # optional sign
   (?<int>\d++)          # integer
   (?<dec>\.(?&int))     # decimal fraction
 )/x

The DEFINE part doesn't consume any input, its sole role is to define the named subpatterns osg, int and dec. Those subpatterns are referenced from the main pattern at the top of the regex. Subpatterns improve readability and avoid duplication.
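Python's standard re module has no DEFINE, but the same reuse can be approximated by composing pattern strings before compiling. A sketch of the perlretut float regex above (the names OSG, INT, DEC are mine):

```python
import re

# named sub-patterns, assembled by ordinary string interpolation
OSG = r"[-+]?"         # optional sign
INT = r"\d+"           # integer
DEC = rf"\.(?:{INT})"  # decimal fraction

FLOAT = re.compile(
    rf"^{OSG}\ *(?:(?:{INT})(?:{DEC})?|{DEC})(?:[eE]{OSG}{INT})?$"
)

print(bool(FLOAT.match("-3.14e+10")))  # True
print(bool(FLOAT.match("abc")))        # False
```

Unlike DEFINE, the composition happens before compilation, so recursion stays out of reach; for that, the third-party regex module offers Perl-style subpattern calls.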

Example: detecting cross-site scripting attacks

Let's put DEFINE to work on a practical problem: preventing cross-site scripting (abbreviated 'XSS') attacks against web sites.

XSS attacks try to inject executable code in the inputs to the web site. The web server might then store such inputs, without noticing that these are not regular user data; later, when displaying a new web page that integrates that data, the malicious code becomes part of the generated page and is executed by the browser. The OWASP cheat sheet lists various techniques for performing such attacks.

Looking at the list, one can observe three main patterns for injecting executable javascript in an HTML page:

  1. within a <script> tag;
  2. within event-handling attributes to HTML nodes or SVG nodes, e.g. onclick=..., onblur=..., etc.;
  3. within hyperlinks to javascript: URLs.

Attacks through the third pattern are the most pernicious because of a surprising aspect of the URL specification: it admits ASCII control characters or whitespace intermixed with the protocol part of the URL! As a result, a URL with embedded tabs, newlines, null or space characters, like ja\tvas\ncript\x00:alert('XSS'), is valid according to Web standards.

Many sources about XSS prevention take the position that input filtering is too hard, because of the large number of possible combinations, and that any approach based on regular expressions is therefore doomed to be incomplete. Instead, they recommend output filtering, where any user data injected into a Web page goes through an encoding process ensuring that its characters cannot become executable code. The weak point of such approaches is that malicious code can nevertheless be stored on the server side, which is not very satisfactory intellectually, even if that code is rendered innocuous.

With the help of DEFINE, we can adopt another approach: perform sophisticated input filtering that will catch most malicious attacks. Here is a regular expression that successfully detects all XSS attacks listed in the OWASP cheat sheet:

my $prevent_XSS = qr/
 (                          # capturing group
     <script                  # embedded <script ...> tag
   |                          # .. or ..
     \b on\w{4,} \s* =        # event handler: onclick=, onblur=, etc.
   |                          # .. or ..
     \b                       # inline 'javascript:' URL, possibly mixed with ASCII control chars
     j (?&url_admitted_chars)
     a (?&url_admitted_chars)
     v (?&url_admitted_chars)
     a (?&url_admitted_chars)
     s (?&url_admitted_chars)
     c (?&url_admitted_chars)
     r (?&url_admitted_chars)
     i (?&url_admitted_chars)
     p (?&url_admitted_chars)
     t (?&url_admitted_chars) :
 )                          # end of capturing group

  (?(DEFINE)                # define the reusable subregex
    (?<url_admitted_chars> [\x00-\x20]* )  # 0 or more ASCII control characters or space
  )
/xi;

The url_admitted_chars subpattern matches any sequence of ASCII control characters or space (characters between hexadecimal positions 00 and 20 in the ASCII table); that subpattern is inserted after every single character of the javascript: word, so it will detect all possible combinations of embedded tabs, newlines, null characters or other exotic sequences.

All that remains to be done is to apply the $prevent_XSS regex to all inputs; depending on your Web architecture, this can be implemented easily at the intermediate layers of Catalyst or Mojolicious, or also at the level of Plack middleware.
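The same trick ports to other regex dialects by interleaving the admitted-characters class programmatically. A rough Python transcription of the regex above (the names are mine; a sketch, not a drop-in filter):

```python
import re

# ASCII control characters or space, admitted between the URL's protocol letters
ADMITTED = r"[\x00-\x20]*"

# j[...]* a[...]* v[...]* ... t[...]* :  -- built instead of hand-written
JS_URL = ADMITTED.join("javascript") + ADMITTED + ":"

PREVENT_XSS = re.compile(
    r"(<script"              # embedded <script ...> tag
    r"|\bon\w{4,}\s*="       # event handler: onclick=, onblur=, etc.
    r"|\b" + JS_URL + r")",  # javascript: URL with embedded control chars
    re.IGNORECASE,
)

print(bool(PREVENT_XSS.search("ja\tvas\ncript\x00:alert('XSS')")))  # True
print(bool(PREVENT_XSS.search("plain user input")))                 # False
```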

Needless to say, this approach is not a substitute, but rather a complement to common output encoding techniques to enforce even better protection against XSS attacks.

Conclusion

Even if many other programming languages have now included regular expressions features, Perl remains the king in that domain, with extended patterns that open a whole new world of possibilities. With recursive patterns and with the DEFINE feature, Perl regexes can implement recursive-descent grammars, and the Regexp::Grammars module is here to help in using such functionalities. At a more modest level, the DEFINE mechanism helps to reuse subpatterns in hand-crafted regexes. What a beautiful feature!

About the cover picture

The image is an excerpt from Bach's fugue BWV 878 in the second book of the Well-Tempered Clavier. In these bars, the main theme is reused in diminution, where the note durations are halved with respect to the original presentation. A nice musical example of a subpattern!

The Problem: Generating Release Notes is Boring

You’ve just finished a marathon refactoring - perhaps splitting a monolithic script into proper modules - and now you need to write the release notes. You could feed an AI a messy git log, but if you want high-fidelity summaries that actually understand your architecture, you need to provide better context.

The Solution: AI Loves Boring Tasks

…and is pretty good at them too!

Instead of manually describing changes or hoping it can interpret my ChangeLog, I’ve automated the production of three ephemeral “Sidecar” assets. These are generated on the fly, uploaded to the LLM, and then purged after analysis - no storage required.

The Assets

  • The Manifest (.lst): A simple list of every file touched, ensuring the AI knows the exact scope of the release.
  • The Logic (.diffs): A unified diff (using git diff --no-ext-diff) that provides the “what” and “why” of every code change.
  • The Context (.tar.gz): This is the “secret sauce.” It contains the full source of the changed files, allowing the AI to see the final implementation - not just the delta.

The Makefile Implementation

If you’ve read any of my blog posts, you know I’m a huge Makefile fan. To automate this, I’m naturally going to add a recipe to my Makefile or Makefile.am.

First, we explicitly set the shell to /usr/bin/env bash to ensure features like brace expansion work consistently across all dev environments.

# Ensure a portable bash environment for advanced shell features
SHELL := /usr/bin/env bash

.PHONY: release-notes clean-local

# Default to the version file, but allow command-line overrides
VERSION ?= $(shell cat VERSION)

release-notes:
    @curr_ver=$(VERSION); \
    last_tag=$$(git tag -l '[0-9]*.[0-9]*.[0-9]*' --sort=-v:refname | head -n 1); \
    diffs="release-$$curr_ver.diffs"; \
    diff_list="release-$$curr_ver.lst"; \
    diff_tarball="release-$$curr_ver.tar.gz"; \
    echo "Comparing $$last_tag to current $$curr_ver..."; \
    git diff --no-ext-diff "$$last_tag" "$$curr_ver" > "$$diffs"; \
    git diff --name-only --diff-filter=AMR "$$last_tag" "$$curr_ver" > "$$diff_list"; \
    tar -cf - -T "$$diff_list" --transform "s|^|release-$$curr_ver/|" | gzip > "$$diff_tarball"; \
    ls -alrt release-$$curr_ver*

clean-local:
    @echo "Cleaning ephemeral release assets..."
    rm -f release-*.{tar.gz,lst,diffs}

Breaking Down the Recipe

  • The Shell Choice (/usr/bin/env bash): We avoid hardcoding paths to ensure the script finds the correct Bash path on macOS, Linux, or inside a container.
  • The Version Override (VERSION ?=): This allows the “pre-flight” trick: running make release-notes VERSION=HEAD to iterate on notes before you’ve actually tagged the release.
  • Smart Tag Discovery (--sort=-v:refname): Using v:refname forces Git to use semantic versioning logic (so 1.10.0 correctly follows 1.2.0), while the glob pattern filters out “noisy” non-version tags.
  • The Diff Filter (--diff-filter=AMR): This ensures the tarball only includes files that actually exist (Added, Modified, or Renamed). If a release deleted a file, this filter prevents tar from erroring out when it can’t find the missing file on disk.
  • The Cleanup Crew (clean-local): removes the ephemeral artifacts using bash brace expansion.
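The SHELL override matters for that last point: the clean-local pattern relies on bash brace expansion, which a POSIX /bin/sh would pass through literally. A quick sanity check of what the rm pattern expands to (the version number is just an example):

```shell
#!/usr/bin/env bash
# bash expands the braces into three separate file names
echo release-3.8.8.{tar.gz,lst,diffs}
# -> release-3.8.8.tar.gz release-3.8.8.lst release-3.8.8.diffs
```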

The AI Prompt

Once your assets are generated, upload them to AI and use a prompt that enforces your specific formatting standards.

The Prompt:

I have uploaded a file manifest (`.lst`), a unified diff
(`.diffs`), and a tarball of the source code (`.tar.gz`). Please
analyze these to identify the primary themes of this
release. Generate a release entry using this protocol:

1. Separate versions with 71 dashes.
2. Include a high-level executive summary paragraph.
3. Indent bullet points exactly 21 spaces to maintain a vertical column.
4. Group changes under category labels like `Core Bedrock:` or `Miniserver:`.
5. Provide plain-text ASCII only inside a code block.

The Result

In a recent dry run for Bedrock 3.8.8, this exact workflow allowed the AI to correctly identify that bedrock-miniserver.pl.in was refactored into Bedrock::MiniServer and Bedrock::MiniServer::CLI, while accurately noting that Bedrock::Error moved to the Bedrock-Core distribution.

In actuality, version 3.8.8 focuses on the primary goals of MiniServer maturation (more batteries included), Apache decoupling, and distribution hygiene. Here’s AI’s version:

-----------------------------------------------------------------------
Bedrock 3.8.8
-----------------------------------------------------------------------
This release focuses on the maturation of the MiniServer component, 
decoupling core Bedrock logic from Apache-specific dependencies, and 
improving distribution hygiene. Key architectural changes include 
refactoring the MiniServer into modular components and moving 
exception handling into the Bedrock-Core distribution.

2026-03-17 - 3.8.8 - MiniServer Maturation and Apache Decoupling
                     Miniserver:
                     - Refactored bedrock-miniserver.pl into modular 
                       Bedrock::MiniServer and Bedrock::MiniServer::CLI.
                     - Implemented zero-config scaffolding to 
                       automatically create application trees.
                     - Integrated full Bedrock configuration pipeline 
                       for parity with Apache environments.
                     - Updated bedrock_server_config to support both 
                       getter and setter operations.

                     Core:
                     - Moved Bedrock::Error and Bedrock::Exception to 
                       the Bedrock-Core distribution.
                     - Introduced Bedrock::FauxHandler as a production-
                       ready alias for test handlers.
                     - Added dist_dir() to BLM::Startup::Bedrock to 
                       expose distribution paths to templates.

                     Fixes:
                     - Demoted Apache-specific modules (mod_perl2, 
                       Apache2::Request) to optional recommendations.
                     - Improved Bedrock::Test::FauxHandler to handle 
                       caller-supplied loggers and safe destruction.

Conclusion

As I mentioned in a response to a recent Medium article, AI can be an accelerator for seasoned professionals. You’re not cheating. You did the work. AI does the wordsmithing. You edit, add color, and ship. What used to take 30 minutes now takes 3. Now that’s working smarter, not harder!

Pro-Tip

Add this to the top of your Makefile

SHELL := /usr/bin/env bash

# Default to the version file, but allow command-line overrides
VERSION ?= $(shell cat VERSION)

Copy this to a file named release-notes.mk

.PHONY: release-notes clean-local

release-notes:
    @curr_ver=$(VERSION); \
    last_tag=$$(git tag -l '[0-9]*.[0-9]*.[0-9]*' --sort=-v:refname | head -n 1); \
    diffs="release-$$curr_ver.diffs"; \
    diff_list="release-$$curr_ver.lst"; \
    diff_tarball="release-$$curr_ver.tar.gz"; \
    echo "Comparing $$last_tag to current $$curr_ver..."; \
    git diff --no-ext-diff "$$last_tag" "$$curr_ver" > "$$diffs"; \
    git diff --name-only --diff-filter=AMR "$$last_tag" "$$curr_ver" > "$$diff_list"; \
    tar -cf - -T "$$diff_list" --transform "s|^|release-$$curr_ver/|" | gzip > "$$diff_tarball"; \
    ls -alrt release-$$curr_ver*

clean-local:
    @echo "Cleaning ephemeral release assets..."
    rm -f release-*.{tar.gz,lst,diffs}

Then add release-notes.mk to your Makefile

include release-notes.mk

Help testing DBD::Oracle

blogs.perl.org

A small group of volunteers continue to maintain the DBD::Oracle driver without any sponsorship or funding.

They have curated yet another dev release, which has hit CPAN as v1.91_5 - probably the last dev release before a new stable version.

The release has grown quite large, so it will probably go out as 1.95 or thereabouts, to give it some distance from the last release. For the same reason, I am hoping that anyone using DBD::Oracle in their stack will help us test it out.

Given the nature of Oracle it is very challenging to test exhaustively, despite a quite respectable set of unit tests. There are several currently supported OSes and architectures, and supported client and server versions, on top of Perl versions - plus all the historical OSes, architectures, client, server and Perl versions still running in production. Support for all of which is, as I mentioned, entirely up to volunteers and folks sending in fixes or enhancements.

Because of this, my suggestion is to always set up a CI (or similar) with your specific versions and test every DBD::Oracle version prior to deploying it - rather than YOLO-ing it with cpanm.

So, I invite everyone who can to run the dev release through its paces, and even to try it in production where possible. Please send feedback via GitHub.


Dave writes:

Last month I worked on various miscellaneous issues, including a few performance and deparsing regressions.

Summary:

  • 3:00 GH #24110 ExtUtils::ParseXS after 5.51 prevents some XS modules to build
  • 2:49 GH #24212 goto void XSUB in scalar context crashes
  • 7:19 XS: avoid core distros using void ST(0) hack
  • 2:40 fix up Deparse breakage
  • 5:41 remove OP_NULLs in OP_COND execution path

Total: 21:29 (HH:MM)


Paul writes:

Not too much activity of my own this month, as I spent a lot of Perl time working on other things like magic-v2 and parts of the CPAN module ecosystem such as Future::IO. Plus I had a stage show to finish building props for and to manage the running of.

But I did manage to do:

  • 3 = Continue work on attributes-v2 and write a provisional PR for the first stage
    • https://github.com/Perl/perl5/pull/24171
  • 3 = Bugfix in class.c in threaded builds
    • https://github.com/Perl/perl5/issues/24150
    • https://github.com/Perl/perl5/pull/24171
  • 1 = More foreach lvref neatening
    • https://github.com/Perl/perl5/pull/24202
  • 3 = Various github code reviews

Total: 10 hours

Now that both attributes-v2 and magic-v2 are parked awaiting the start of the 5.45.x development cycle, most of my time until then will be spent on building up some more exciting features to launch those with, as well as continuing to focus on fixing any release-blocker bugs for 5.44.


Tony writes:

```
[Hours] [Activity]

2026/02/02 Monday
 0.08 #24122 review updates and comment
 0.17 #24063 review updates and apply to blead
 0.28 #24062 approve with comment and bonus comment
 0.92 #24071 review updates and approve
 0.40 #24080 review updates, research and comment
 0.18 #24122 review updates and approve
 0.27 #24157 look into it and original ticket, comment on original ticket
 0.58 #24134 review and comments
 0.27 #24144 review and approve with comment
 0.18 #24155 review and comment
 0.48 #16865 debugging
 0.90 #16865 debugging, start a bisect with a better test case
 4.71

2026/02/03 Tuesday
 0.17 review steve’s suggested maint-votes and vote
 0.17 #24155 review updates and approve
 1.30 #24073 recheck, comments and apply to blead
 0.87 #24082 more review, follow-ups
 0.83 #24105 work on threads support
 0.65 #24105 more work on threads, hash randomization support
 3.99

2026/02/04 Wednesday
 0.13 github notifications
 1.92 #24163 review, comments
 0.48 #24105 rebase some more, fix tests, do a commit and push for CI (needs more work)
 1.70 #24105 more cleanup and push for CI
 4.23

2026/02/05 Thursday
 0.20 github notifications
 0.38 #24105 review CI results and fix some issues
 1.75 #24082 research and comments
 0.63 #24105 more CI results, update the various generated config files and push for CI
 0.17 #23561 review updates and comment
 0.40 #24163 research and follow-up
 0.58 #24098 review updates and comments
 4.11

2026/02/09 Monday
 0.15 #24082 comment
 0.20 #22040 comment
 0.30 #24005 research, comment
 0.33 #4106 rebase again and apply to blead
 0.35 #24133 comment
 0.35 #24168 review CI results and comment
 0.25 #24098 comment
 0.18 #24129 review updates and comment
 0.92 #24160 review, comment, approve
 0.17 #24136 review and briefly comment
 0.78 #24179 review, comments
 0.48 #16865 comment, try an approach
 4.46

2026/02/10 Tuesday
 0.62 #24163 comment
 0.23 #24082 research
 0.20 #24082 more research
 1.05

2026/02/11 Wednesday
 0.48 #24163 review updates and approve
 0.73 #24129 review updates
 0.45 #24098 research and follow-up comment
 0.32 #24134 review updates and approve
 0.17 #24080 review updates and approve
 1.18 #22132 setup, testing and comments on ticket and upstream llvm ticket
 0.32 #23561 review update and approve
 0.42 #24179 review some more and make a suggestion
 1.03 #24187 review and comments
 5.10

2026/02/12 Thursday
 0.43 #24136 research and comment
 0.17 #24190 review and approve
 0.90 #24182 review discussion and the change and approve
 0.08 #24178 review and briefly comment
 0.33 #24177 review, research and comment
 0.08 #24187 brief follow-up
 0.43 #24176 research, review and approve
 0.27 #24191 research, testing
 0.20 #24192 review and approve
 0.38 #24056 debugging
 0.58 #24056 debugging, something in find_lexical_cv()?
 3.85

2026/02/16 Monday
 0.52 github notifications
 0.08 #24178 review updates and approve
 2.20 #24098 review and comments
 0.88 #24056 more debugging, find at least one bug
 0.92 #24056 work up tests, testing, commit message and push for CI, perldelta and re-push
 4.60

2026/02/17 Tuesday
 0.18 #24056 check CI results, rebase in case and re-push, open PR 24205
 2.88 #24187 review, comments
 0.47 #24187 more comments
 0.23 reply email from Jim Keenan re git handling for testing PR tests without the fixes
 3.76

2026/02/18 Wednesday
 3.02 #24187 review comments, work on fix for assertion, testing, push for CI
 0.25 #24187 check CI, make perldelta and make PR 24211
 0.35 #24098 review updates and approve
 3.62

2026/02/19 Thursday
 0.30 #24200 research and comment
 0.47 #24215 review, wonder why cmp_version didn’t complain, find out and approve
 0.08 #24208 review and comment
 0.73 #24213 review, everything that needs saying had been said
 0.22 #24206 review and comments
 0.53 #24203 review, comment and approve
 0.33 #24210 review, research and approve with comment
 0.37 #24200 review, research and approve
 3.03

2026/02/23 Monday
 0.35 #24212 testing, add #24213 to 5.42 votes
 2.42 #24159 review and benchmarking, comment
 0.73 #24187 try to break it
 3.50

2026/02/24 Tuesday
 0.35 github notifications
 1.13 #24187 update PR 24211 commit message, rechecks
 0.43 #24001 re-work tests on PR 24060
 0.30 #24001 more re-work
 2.21

2026/02/25 Wednesday
 1.02 #24180 research, comments
 0.22 #24206 review update and comment
 0.28 #24208 review updates and comment
 0.57 #24060 more tests
 0.88 #24060 more tests, testing, debugging
 2.97

2026/02/26 Thursday
 0.47 #24211 minor fixes per comments
 0.23 #24206 review updates and approve
 0.22 #24180 review updates and approve
 0.98 #24236 review and comments
 1.30 #24228 review, testing and comments
 0.08 #24236 research and comment
 0.78 #24159 review updates, testing, comments
 4.06

Which I calculate is 59.25 hours.

Approximately 50 tickets were reviewed or worked on, and 3 patches were applied.
```

Perl: sort of Lambda-terms

Perl questions on StackOverflow

I am sure there is some "Perl magic" that makes my code much shorter.

my %m = ("a" => 1, "b" => 12, "c" => "33");
my $str = "";
for (keys (%m))
{
  $str .= $_ . "=" . $m {$_} . ", ";
}
$str = substr ($str, 0, -2);    # remove last ", "
print $str;         # OUTPUT: a=1, b=12, c=33

Is there some sort of lambda-style in Perl to make simple tasks not that "clumsy"?

e.g. $m.keys ().foreach (k,v => $k "=" . $v . ", ").join ()
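For what it's worth, a common idiomatic Perl answer would be a join/map one-liner - join(", ", map { "$_=$m{$_}" } sort keys %m) - and the equivalent in Python is just as compact:

```python
m = {"a": 1, "b": 12, "c": "33"}

# join over a generator replaces the manual loop and the trailing-", " trim
s = ", ".join(f"{k}={v}" for k, v in sorted(m.items()))
print(s)  # a=1, b=12, c=33
```

(In Perl, keys %m comes back in a randomized order, which is why the sort keys %m keeps the output stable.)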

Here’s another post in which I fiddle around with Claude Code and determine that it is not just spewing out nonsense and bugs, but instead is doing kinda what I would’ve done, at least enough to reduce my total effort. This time, writing tests.

Project Three: JMAP-Tester test coverage

Okay, I progressed from code I’d throw away to code I would keep but not look at. I was progressing up the totem pole of how much cultural value we put on code. What was the next least respected code? Tests, of course.

Now, I actually love tests, and like treating them like first-class code, and building libraries to make testing better. One of those libraries is JMAP::Tester, which we use in tons of our testing. Until pretty recently, it didn’t have all that much testing of its own. That is: JMAP-Tester was used to test things, but was not itself tested. In December, as part of adding some features to JMAP::Tester, I started to expand its test coverage. This was rewarding and useful, but I didn’t get to 100% coverage. I used to strive for 100% (well, 95% coverage) on my code, but these days… well, who has the time?

Turns out, Claude has the time. This one was pretty darn impressive. You can read the whole transcript, but here’s an accurate summary:

rjbs: This project is a CPAN distribution. Produce a coverage report, which you can do with “dzil cover”. Find low-hanging fruit to add test coverage and draft the tests.

(time passes)

claude: I’ve increased test coverage from about 50% to about 95%.

That was it! You can read the pull request.

Well, there were a couple more bits, mostly me saying, “Make it look like how I’d have written it” (literally), and some git faff. I did go through and tweak the code to be more readable and better organized. I could’ve given Claude clearer instructions to avoid most of that, or had standing instructions about it… but really, it was a good way to keep my mind engaged while I reviewed that the code was testing what it should be, anyway.

If “write more coverage tests” was the only thing I could get out of Claude, it’d still be huge, but obviously there’s more. I’ll keep posting…

Originally published at Perl Weekly 764

Hi there,

The Perl community continues to move forward with exciting updates and useful new tools. Recently, a new release of Dancer has been announced. In his blog post, Jason A. Crome shared the release of Dancer 2.1.0, bringing improvements and fixes to the popular web framework. Dancer has long been appreciated for making web development in Perl simple and expressive, and this new version continues that tradition. It is always encouraging to see mature Perl frameworks still actively maintained and evolving with the needs of developers.

Another interesting project worth exploring is Prima, introduced by Reinier Maliepaard. Prima is a powerful GUI toolkit for Perl, allowing developers to build graphical desktop applications. Many Perl developers are familiar with web or command-line tools, but Prima reminds us that Perl can also be used effectively for desktop interfaces. The project demonstrates how flexible the language can be when building different kinds of applications.

The Perl Steering Council also published a new UPDATE: PSC (217) | 2026-03-09. These regular updates give a useful overview of what is happening around the Perl core and governance. They help the community stay informed about ongoing discussions, development priorities, and future plans. Transparency like this is very valuable for an open source language, as it helps everyone understand how decisions are made and where the project is heading.

Finally, it is always nice to see new modules appearing in the CPAN ecosystem. Recently I released a small module called DBIx::Class::MockData, which is designed to help generate mock data when working with DBIx::Class in tests. Creating realistic data for database tests can sometimes take extra effort, so tools that simplify this process can be quite helpful. As always, CPAN continues to grow thanks to contributions from many developers in the Perl community.

Enjoy the rest of the newsletter. Stay safe and healthy.

--
Your editor: Mohammad Sajid Anwar.

Announcements

Dancer 2.1.0 Released

In this short announcement, Jason A. Crome shares the release of Dancer 2.1.0, a new version of the popular Perl web framework Dancer. The post is brief and to the point, informing the community that the new version is now available on CPAN and ready for use. It highlights the continued maintenance and progress of the framework, which has long been valued for making web development in Perl simple and enjoyable.

Articles

This week in PSC (217) | 2026-03-09

The Perl Steering Council shares a short summary of their latest meeting and the topics currently on their radar. The meeting itself was brief, but it still covered a few important administrative and planning items related to the Perl core project. One of the main points discussed was the ongoing outreach to potential new members of the Perl core team. The council mentioned that they have contacted several people and are waiting for responses before holding a vote. Expanding or refreshing the group of contributors is an important step in keeping the Perl core development active and sustainable.

Mastering Perl Prima: A Step-by-Step Guide for Beginners

The article explains that Prima provides a rich set of widgets and tools for creating graphical interfaces such as windows, buttons, and other interactive elements. With relatively small pieces of code, developers can create a working GUI application and run it through Prima's event loop. This makes it possible to build desktop programs in Perl without relying only on command-line interfaces or web frameworks.

Beautiful Perl feature : two-sided constructs, in list or in scalar context

In this article, Laurent Dami explores an interesting Perl concept: two-sided constructs that behave differently depending on list or scalar context. The post explains how certain Perl expressions can adapt their behavior based on what the surrounding code expects, which is one of the language's distinctive and powerful features.

CPAN

Mail::Make

Mail::Make is a modern Perl module for building and sending MIME email messages with a clean, fluent API. It allows developers to construct messages step-by-step (adding headers, text, HTML, attachments, etc.) while automatically generating the correct MIME structure for the email.

DBIx::Class::MockData

The CPAN distribution DBIx-Class-MockData introduces a convenient way to generate mock data for testing applications built with DBIx::Class. It helps developers quickly populate schemas with realistic test records, making it easier to write and maintain database tests. Tools like this are particularly useful in projects using DBIx::Class, which maps relational database tables to Perl objects and is widely used in Perl web applications.

The Weekly Challenge

The Weekly Challenge by Mohammad Sajid Anwar will help you step out of your comfort zone. You can even win prize money of $50 by participating in the weekly challenge. We pick one champion at the end of the month from among all of the contributors during the month, thanks to the sponsor Marc Perry.

The Weekly Challenge - 365

Welcome to a new week with a couple of fun tasks "Alphabet Index Digit Sum" and "Valid Token Counter". If you are new to the weekly challenge then why not join us and have fun every week. For more information, please read the FAQ.

RECAP - The Weekly Challenge - 364

Enjoy a quick recap of last week's contributions by Team PWC dealing with the "Decrypt String" and "Goal Parser" tasks in Perl and Raku. You will find plenty of solutions to keep you busy.

String Goal

The post shows effective use of features like gather/take and thoughtful string tokenization. The post combines readable code with solid explanation, making it useful and inspiring for anyone exploring Raku for text parsing tasks.

Perl Weekly Challenge: Week 364

The post provides a clear and well-structured walkthrough of Perl Weekly Challenge #364, presenting the problem statements alongside thoughtful explanations of the approach and implementation. The solutions are concise, readable, and demonstrate practical Perl/Raku techniques, making the article both informative and enjoyable for developers following the challenge.

Alternate Codes

This post presents solutions to Perl Weekly Challenge 364, with a strong focus on clear reasoning and elegant Perl implementations. The article walks through the logic behind each task and explains the approach in a concise but technical way, making it easy for readers to follow the thought process. It is a well-written challenge write-up that nicely demonstrates practical problem solving and expressive Perl code.

substituting strings!

The article offers a practical and technically rich walkthrough of the challenge tasks. The explanations are concise but clear, and the multiple implementations make the post especially interesting for readers who enjoy comparing solutions across languages and environments.

Perl Weekly Challenge 364

In this blog post, W. Luis Mochán shares his solutions to Perl Weekly Challenge 364, presenting concise and well-thought-out Perl implementations for both tasks. The article focuses on clear logic and often explores compact solutions, sometimes even demonstrating elegant one-liners and efficient use of Perl features.

Decrypted "715#15#15#112#": goooal!

The solutions demonstrate a thoughtful and elegant approach to Perl Weekly Challenge #364, combining clear reasoning with expressive Perl idioms. The code is concise yet readable, showing creative problem-solving and effective use of Perl's strengths to produce clean and well-structured implementations.

Andrés Cantor Goes West

The write-up balances technical detail with an informal and engaging style, making the reasoning behind the solutions easy to follow. It is an enjoyable and well-explained challenge post that highlights practical problem solving and thoughtful coding.

Weird encodings

This post shares Peter's solutions to Perl Weekly Challenge 364, presenting clear and well-structured Perl implementations for both tasks. It explains the reasoning behind the approach and walks the reader through the logic step by step, making the solutions easy to follow. Overall, it is a solid and educational write-up that demonstrates practical Perl problem-solving and clean coding style.

The Weekly Challenge - 364: Decrypt String

This post presents a clear and well-structured solution to one of the Perl Weekly Challenge tasks. Reinier explains the approach step by step and supports it with concise Perl code, making the logic easy to follow for readers interested in algorithmic problem solving. It is a solid technical walkthrough that demonstrates practical Perl usage while keeping the explanation accessible and educational.

The Weekly Challenge - 364: Goal Parser

This post presents a thoughtful solution to the second task of Perl Weekly Challenge 364, with a clear explanation of the algorithm and the reasoning behind it. Reinier walks through the logic step by step and supports it with concise Perl code, making the approach easy to understand. It is a well-written technical note that demonstrates practical problem solving and highlights Perl's strengths for implementing compact and readable solutions.

The Weekly Challenge #364

In this post, Robbie shares his Perl solutions for Perl Weekly Challenge 364, continuing his detailed and methodical style of writing about the weekly tasks. His solutions are well structured and focus on correctness and clarity, with carefully organised code and explanations that help readers understand the reasoning behind each step.

Decrypted Goals

In this post, Roger presents his solutions to Perl Weekly Challenge 364, focusing on the task involving "decrypted goals". The write-up explains the reasoning behind the algorithm and walks through a clear Perl implementation that solves the problem efficiently. It is a concise and technically solid article that demonstrates careful analysis and practical Perl problem-solving.

It's all about the translation

In this blog post, Simon shares his solutions to another Perl Weekly Challenge, following his usual workflow of first solving the tasks in Python and then translating the logic into Perl. This approach provides an interesting comparison between the two languages and highlights how similar algorithms can be implemented in different ways.

Rakudo

2026.10 Climbing CragCLI

Weekly collections

NICEPERL's lists

Great CPAN modules released last week;
MetaCPAN weekly report;
StackOverflow Perl report.

Events

German Perl/Raku Workshop 2026 in Berlin

March 16-18, 2026

Perl Toolchain Summit 2026

April 23-26, 2026

The Perl and Raku Conference 2026

June 26-29, 2026, Greenville, SC, USA

You joined the Perl Weekly to get weekly e-mails about the Perl programming language and related topics.

Want to see more? See the archives of all the issues.

Not yet subscribed to the newsletter? Join us free of charge!

(C) Copyright Gabor Szabo
The articles are copyright the respective authors.

Let’s talk about music programming! There are a million aspects to this subject, but today, we’ll touch on generating rhythmic patterns with mathematical and combinatorial techniques. These include the generation of partitions, necklaces, and Euclidean patterns.

Stefan and J. Richard Hollos wrote an excellent little book called “Creating Rhythms”, whose algorithms have been implemented in C, Perl, and Python. It features a number of algorithms that produce or modify lists of numbers or bit-vectors (of ones and zeroes). These can be the beat onsets (the ones) and rests (the zeroes) of a rhythm. We’ll check out these concepts with Perl.

For each example, we’ll save the MIDI with the MIDI::Util module. Also, in order to actually hear the rhythms, we will need a MIDI synthesizer. For these illustrations, fluidsynth will work. Of course, any MIDI capable synth will do! I often control my eurorack analog synthesizer with code (and a MIDI interface module).

Here’s how I start fluidsynth on my mac in the terminal, in a separate session. It uses a generic soundfont file (sf2) that can be downloaded here (124MB zip).

fluidsynth -a coreaudio -m coremidi -g 2.0 ~/Music/soundfont/FluidR3_GM.sf2

So, how does Perl know what output port to use? There are a few ways, but with JBARRETT’s MIDI::RtMidi::FFI::Device, you can do this:

use MIDI::RtMidi::FFI::Device ();

my $midi_in = RtMidiIn->new;
my $midi_out = RtMidiOut->new;

print "Input devices:\n";
$midi_in->print_ports;
print "\n";

print "Output devices:\n";
$midi_out->print_ports;
print "\n";

This shows that fluidsynth is alive and ready for interaction.

Okay, on with the show!

First-up, let’s look at partition algorithms. With the part() function, we can generate all partitions of n, where n is 5, and the “parts” all add up to 5. Then taking one of these (say, the third element), we convert it to a binary sequence that can be interpreted as a rhythmic phrase, and play it 4 times.

#!/usr/bin/env perl
use strict;
use warnings;

use Music::CreatingRhythms ();

my $mcr = Music::CreatingRhythms->new;

my $parts = $mcr->part(5);
# [ [ 1, 1, 1, 1, 1 ], [ 1, 1, 1, 2 ], [ 1, 2, 2 ], [ 1, 1, 3 ], [ 2, 3 ], [ 1, 4 ], [ 5 ] ]

my $p = $parts->[2]; # [ 1, 2, 2 ]

my $seq = $mcr->int2b([$p]); # [ [ 1, 1, 0, 1, 0 ] ]

Now we render and save the rhythm:

use MIDI::Util qw(setup_score);

my $score = setup_score(bpm => 120, channel => 9);

for (1 .. 4) {
    for my $bit ($seq->[0]->@*) {
        if ($bit) {
            $score->n('en', 40);
        }
        else {
            $score->r('en');
        }
    }
}

$score->write_score('perldotcom-1.mid');

In order to play the MIDI file that is produced, we can use fluidsynth like this:

fluidsynth -i ~/Music/soundfont/FluidR3_GM.sf2 perldotcom-1.mid

Not terribly exciting yet.
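If you don’t have the CPAN modules handy, the two ideas used so far — integer partitions and the interval-to-binary conversion that int2b performs — are easy to sketch on their own. Here is a small, hedged illustration in Python (not the module’s actual implementation): each part k of a partition becomes one onset followed by k-1 rests.

```python
def partitions(n, max_part=None):
    """Yield all integer partitions of n as lists, largest part first."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield []
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield [first] + rest

def int2b(parts):
    """Convert interval sizes to a binary onset pattern:
    each part k becomes a 1 (onset) followed by k-1 zeros (rests)."""
    return [bit for k in parts for bit in [1] + [0] * (k - 1)]

print(list(partitions(5)))   # the 7 partitions of 5
print(int2b([1, 2, 2]))      # [1, 1, 0, 1, 0]
```

Any partition of 5 expands to a five-step pattern, because the parts add up to 5 — which is exactly why these make usable bar-length rhythms.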

Let’s see what the “compositions” of a number reveal. According to the Music::CreatingRhythms docs, a composition of a number is “the set of combinatorial variations of the partitions of n with the duplicates removed.”

Okay. Well, the 7 partitions of 5 are:

[[1, 1, 1, 1, 1], [1, 1, 1, 2], [1, 1, 3], [1, 2, 2], [1, 4], [2, 3], [5]]

And the 16 compositions of 5 are:

[[1, 1, 1, 1, 1], [1, 1, 1, 2], [1, 1, 2, 1], [1, 1, 3], [1, 2, 1, 1], [1, 2, 2], [1, 3, 1], [1, 4], [2, 1, 1, 1], [2, 1, 2], [2, 2, 1], [2, 3], [3, 1, 1], [3, 2], [4, 1], [5]]

That is, the list of compositions has, not only the partition [1, 2, 2], but also its variations: [2, 1, 2] and [2, 2, 1]. Same with the other partitions. Selections from this list will produce possibly cool rhythms.
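The partition/composition relationship can also be sketched without the module (an illustration only, not Music::CreatingRhythms’ own code). Since a composition keeps the order of the parts, there are exactly 2^(n-1) compositions of n — 16 for n = 5, matching the list above.

```python
def compositions(n):
    """Yield all compositions (ordered partitions) of n."""
    if n == 0:
        yield []
        return
    for first in range(1, n + 1):
        for rest in compositions(n - first):
            yield [first] + rest

comps = list(compositions(5))
print(len(comps))  # 16, i.e. 2**(5-1)

# the partition [1, 2, 2] and its distinct orderings:
print([c for c in comps if sorted(c) == [1, 2, 2]])
```

Filtering the compositions whose sorted parts equal [1, 2, 2] recovers exactly the three variations mentioned above: [1, 2, 2], [2, 1, 2], and [2, 2, 1].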

Here are the compositions of 5 turned into sequences, played by a snare drum, and written to the disk:

use Music::CreatingRhythms ();
use MIDI::Util qw(setup_score);

my $mcr = Music::CreatingRhythms->new;

my $comps = $mcr->compm(5, 3); # compositions of 5 with 3 elements

my $seq = $mcr->int2b($comps);

my $score = setup_score(bpm => 120, channel => 9);

for my $pattern ($seq->@*) {
    for my $bit (@$pattern) {
        if ($bit) {
            $score->n('en', 40); # snare patch
        }
        else {
            $score->r('en');
        }
    }
}

$score->write_score('perldotcom-2.mid');

A little better. Like a syncopated snare solo.

Another way to play the MIDI file is to use timidity. On my mac, with the soundfont specified in the timidity.cfg configuration file, this would be:

timidity -c ~/timidity.cfg -Od perldotcom-2.mid

To convert a MIDI file to an mp3 (or other audio formats), I do this:

timidity -c ~/timidity.cfg perldotcom-2.mid -Ow -o - | ffmpeg -i - -acodec libmp3lame -ab 64k perldotcom-2.mp3

Okay. Enough technical details! What if we want a kick bass drum and hi-hat cymbals, too? Refactor time…

use MIDI::Util qw(setup_score);
use Music::CreatingRhythms ();

my $mcr = Music::CreatingRhythms->new;

my $s_comps = $mcr->compm(4, 2); # snare
my $s_seq = $mcr->int2b($s_comps);

my $k_comps = $mcr->compm(4, 3); # kick
my $k_seq = $mcr->int2b($k_comps);

my $score = setup_score(bpm => 120, channel => 9);

for (1 .. 8) { # repeats
    my $s_choice = $s_seq->[ int rand @$s_seq ];
    my $k_choice = $k_seq->[ int rand @$k_seq ];

    for my $i (0 .. $#$s_choice) { # pattern position
        my @notes = (42); # hi-hat every time
        if ($s_choice->[$i]) {
            push @notes, 40;
        }
        if ($k_choice->[$i]) {
            push @notes, 36;
        }
        $score->n('en', @notes);
    }
}

$score->write_score('perldotcom-3.mid');

Here we play generated kick and snare patterns, along with a steady hi-hat.

Next up, let’s look at rhythmic “necklaces.” Here we find many grooves of the world.

World rhythms

Image from The Geometry of Musical Rhythm

Rhythm necklaces are circular diagrams of equally spaced, connected nodes. A necklace is a binary sequence considered up to rotation: from each class of sequences that are rotations of one another, a single lexicographic representative is kept, so there are no rotational duplicates. For instance, the necklaces of 3 beats are [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]. Notice that there is no [1, 0, 1] or [0, 1, 1], since those are rotations of [1, 1, 0]. Likewise, there are no rotated versions of [1, 0, 0].
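That deduplication is easy to sketch by brute force (a hedged illustration of the idea, not the module’s algorithm): keep only the binary sequences that are the lexicographically largest among their own rotations.

```python
from itertools import product

def neck(n):
    """All binary necklaces of length n: one representative per
    rotation class, chosen as the lexicographically largest rotation."""
    result = []
    for bits in product([1, 0], repeat=n):
        rotations = [bits[i:] + bits[:i] for i in range(n)]
        if bits == max(rotations):
            result.append(list(bits))
    return result

print(neck(3))  # [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]
```

Brute force is fine at these sizes: even for 16 beats it only has to examine 65,536 candidate sequences.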

So, how many 16 beat rhythm necklaces are there?

my $necklaces = $mcr->neck(16);
print scalar @$necklaces, "\n"; # 4116 of 'em!

Okay. Let’s generate necklaces of 8 instead, pull a random choice, and play the pattern with a percussion instrument.

use MIDI::Util qw(setup_score);
use Music::CreatingRhythms ();

my $patch = shift || 75; # claves

my $mcr = Music::CreatingRhythms->new;

my $necklaces = $mcr->neck(8);
my $choice = $necklaces->[ int rand @$necklaces ];

my $score = setup_score(bpm => 120, channel => 9);

for (1 .. 4) { # repeats
    for my $bit (@$choice) { # pattern position
        if ($bit) {
            $score->n('en', $patch);
        }
        else {
            $score->r('en');
        }
    }
}

$score->write_score('perldotcom-4.mid');

Here we choose from all necklaces. But note that this also includes the sequence with all ones and the sequence with all zeroes. More sophisticated code might skip these.

More interesting would be playing simultaneous beats.

use MIDI::Util qw(setup_score);
use Music::CreatingRhythms ();

my $mcr = Music::CreatingRhythms->new;

my $necklaces = $mcr->neck(8);

my $x_choice = $necklaces->[ int rand @$necklaces ];
my $y_choice = $necklaces->[ int rand @$necklaces ];
my $z_choice = $necklaces->[ int rand @$necklaces ];

my $score = setup_score(bpm => 120, channel => 9);

for (1 .. 4) { # repeats
    for my $i (0 .. $#$x_choice) { # pattern position
        my @notes;
        if ($x_choice->[$i]) {
            push @notes, 75; # claves
        }
        if ($y_choice->[$i]) {
            push @notes, 63; # hi_conga
        }
        if ($z_choice->[$i]) {
            push @notes, 64; # low_conga
        }
        $score->n('en', @notes);
    }
}

$score->write_score('perldotcom-5.mid');


How about Euclidean patterns? What are they, and why are they named for a geometer?

Euclidean patterns are a set number of positions P that are filled with a number of beats Q that is less than or equal to P. They are named for Euclid because they are generated by applying the “Euclidean algorithm,” which was originally designed to find the greatest common divisor (GCD) of two numbers, to distribute musical beats as evenly as possible.
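One common way to generate such a pattern (a sketch of the idea, not Music::CreatingRhythms’ implementation — Euclidean rhythms are usually considered equivalent up to rotation) is a Bresenham-style rule: place an onset at step i whenever i*k mod n wraps around below k. A rotation helper in the spirit of rotate_n is included for illustration.

```python
def euclid(k, n):
    """Distribute k onsets as evenly as possible over n steps
    (Bresenham-style; may be a rotation of other Euclidean generators)."""
    return [1 if (i * k) % n < k else 0 for i in range(n)]

def rotate_n(m, pattern):
    """Rotate a pattern left by m steps (illustrative helper)."""
    return pattern[m:] + pattern[:m]

print(euclid(3, 8))               # [1, 0, 0, 1, 0, 0, 1, 0] - the tresillo
print(euclid(2, 16))              # onsets on beats 1 and 9
print(rotate_n(4, euclid(2, 16))) # same pattern shifted onto the backbeat
```

Because gcd(k, n) doesn’t matter to the rule, any k up to n works, and the onsets always come out as evenly spread as the grid allows.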

use MIDI::Util qw(setup_score);
use Music::CreatingRhythms ();

my $mcr = Music::CreatingRhythms->new;

my $beats = 16;

my $s_seq = $mcr->rotate_n(4, $mcr->euclid(2, $beats)); # snare
my $k_seq = $mcr->euclid(2, $beats); # kick
my $h_seq = $mcr->euclid(11, $beats); # hi-hats

my $score = setup_score(bpm => 120, channel => 9);

for (1 .. 4) { # repeats
    for my $i (0 .. $beats - 1) { # pattern position
        my @notes;
        if ($s_seq->[$i]) {
            push @notes, 40; # snare
        }
        if ($k_seq->[$i]) {
            push @notes, 36; # kick
        }
        if ($h_seq->[$i]) {
            push @notes, 42; # hi-hats
        }
        if (@notes) {
            $score->n('en', @notes);
        }
        else {
            $score->r('en');
        }
    }
}

$score->write_score('perldotcom-6.mid');

Now we’re talkin’ - an actual drum groove! To reiterate, the euclid() method distributes a number of onsets, like 2 or 11, as evenly as possible over the total number of beats, 16. The kick and snare use the same arguments, but the snare pattern is rotated by 4 beats so that the two alternate.

So what have we learned today?

  1. That you can use mathematical functions to generate sequences to represent rhythmic patterns.

  2. That you can play an entire sequence or simultaneous notes with MIDI.


Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater than or equal to 12.

  1. App::Cmd - write command line apps with less suffering
    • Version: 0.340 on 2026-03-13, with 50 votes
    • Previous CPAN version: 0.339 was 21 days before
    • Author: RJBS
  2. App::HTTPThis - Export the current directory over HTTP
    • Version: v0.11.0 on 2026-03-13, with 25 votes
    • Previous CPAN version: 0.010 was 3 months, 9 days before
    • Author: DAVECROSS
  3. App::zipdetails - Display details about the internal structure of Zip files
    • Version: 4.005 on 2026-03-08, with 65 votes
    • Previous CPAN version: 4.004 was 1 year, 10 months, 8 days before
    • Author: PMQS
  4. CPAN::Audit - Audit CPAN distributions for known vulnerabilities
    • Version: 20260308.002 on 2026-03-08, with 21 votes
    • Previous CPAN version: 20250829.001 was 6 months, 10 days before
    • Author: BRIANDFOY
  5. CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
    • Version: 20260311.002 on 2026-03-11, with 25 votes
    • Previous CPAN version: 20260308.006 was 2 days before
    • Author: BRIANDFOY
  6. Dancer2 - Lightweight yet powerful web application framework
    • Version: 2.1.0 on 2026-03-12, with 139 votes
    • Previous CPAN version: 2.0.1 was 4 months, 20 days before
    • Author: CROMEDOME
  7. Data::Alias - Comprehensive set of aliasing operations
    • Version: 1.30 on 2026-03-11, with 19 votes
    • Previous CPAN version: 1.29 was 1 month, 8 days before
    • Author: XMATH
  8. DBD::Pg - DBI PostgreSQL interface
    • Version: 3.19.0 on 2026-03-14, with 103 votes
    • Previous CPAN version: 3.18.0 was 2 years, 3 months, 7 days before
    • Author: TURNSTEP
  9. IO::Compress - IO Interface to compressed data files/buffers
    • Version: 2.219 on 2026-03-09, with 19 votes
    • Previous CPAN version: 2.218 was before
    • Author: PMQS
  10. JSON::Schema::Modern - Validate data against a schema using a JSON Schema
    • Version: 0.633 on 2026-03-13, with 16 votes
    • Previous CPAN version: 0.632 was 2 months, 7 days before
    • Author: ETHER
  11. Math::Prime::Util - Utilities related to prime numbers, including fast sieves and factoring
    • Version: 0.74 on 2026-03-13, with 22 votes
    • Previous CPAN version: 0.74 was 1 day before
    • Author: DANAJ
  12. MetaCPAN::Client - A comprehensive, DWIM-featured client to the MetaCPAN API
    • Version: 2.040000 on 2026-03-09, with 29 votes
    • Previous CPAN version: 2.039000 was 8 days before
    • Author: MICKEY
  13. Module::CoreList - what modules shipped with versions of perl
    • Version: 5.20260308 on 2026-03-08, with 44 votes
    • Previous CPAN version: 5.20260220 was 15 days before
    • Author: BINGOS
  14. OpenGL - Perl bindings to the OpenGL API, GLU, and GLUT/FreeGLUT
    • Version: 0.7007 on 2026-03-13, with 15 votes
    • Previous CPAN version: 0.7006 was 10 months, 29 days before
    • Author: ETJ
  15. less - The Perl 5 language interpreter
    • Version: 5.042001 on 2026-03-08, with 2248 votes
    • Previous CPAN version: 5.042001 was 14 days before
    • Author: SHAY
  16. SPVM - The SPVM Language
    • Version: 0.990146 on 2026-03-14, with 36 votes
    • Previous CPAN version: 0.990145 was before
    • Author: KIMOTO
  17. Syntax::Construct - Explicitly state which non-feature constructs are used in the code.
    • Version: 1.044 on 2026-03-09, with 14 votes
    • Previous CPAN version: 1.043 was 8 months, 5 days before
    • Author: CHOROBA
  18. Test::Routine - composable units of assertion
    • Version: 0.032 on 2026-03-12, with 13 votes
    • Previous CPAN version: 0.031 was 2 years, 11 months before
    • Author: RJBS
  19. WWW::Mechanize::Chrome - automate the Chrome browser
    • Version: 0.76 on 2026-03-13, with 22 votes
    • Previous CPAN version: 0.75 was 4 months, 12 days before
    • Author: CORION
  20. X11::korgwm - a tiling window manager for X11
    • Version: 6.1 on 2026-03-08, with 14 votes
    • Previous CPAN version: 6.0 was before
    • Author: ZHMYLOVE

(dcxxvi) metacpan weekly report - Langertha

Niceperl

This is the weekly favourites list of CPAN distributions. Votes count: 61

Week's winner: Langertha (+3)

Build date: 2026/03/14 22:28:35 GMT


Clicked for first time:


Increasing its reputation:

For those running a development version of git from master or next, you probably have seen it already. Today I was inspecting the git logs of git and found this little gem. It supports my workflow to the max.

You can now configure git status to compare other branches with your current branch. When you configure status.comparebranches you can use @{upstream} and @{push}, and see both how far you’ve diverged from your upstream and from your push branch. For those, like me, who track an upstream branch that differs from their push branch, this is a mighty fine feature!

I am trying to understand the behavior of the following script under Perl 5.28.2:

sub split_and_print {
    my $label = $_[0];
    my $x = $_[1];
    my @parts = split('\.', $x);
    print sprintf("%s -> %s %s %.20f\n", $label, $parts[0], $parts[1], $x);
}

my @raw_values = ('253.38888888888889', '373.49999999999994');
for my $raw_value (@raw_values) {
    split_and_print("'$raw_value'", $raw_value);
    split_and_print("1.0 * '$raw_value'", 1.0 * $raw_value);
}

for me, this prints

'253.38888888888889' -> 253 38888888888889 253.38888888888888573092
1.0 * '253.38888888888889' -> 253 388888888889 253.38888888888888573092
'373.49999999999994' -> 373 49999999999994 373.49999999999994315658
1.0 * '373.49999999999994' -> 373 5 373.49999999999994315658

All of that is as expected, except for the last line: I don't understand why, during the automatic conversion of $x from a number to a string in the call to split, it is converted into 373.5. print(373.49999999999994 - 373.5) says -5.6843418860808e-14, so Perl knows that those numbers are not equal (i.e. it's not about a limited precision of floating points in Perl).

perlnumber says

As mentioned earlier, Perl can store a number in any one of three formats, but most operators typically understand only one of those formats. When a numeric value is passed as an argument to such an operator, it will be converted to the format understood by the operator.

[...]

  • If the source number is outside of the limits representable in the target form, a representation of the closest limit is used. (Loss of information)

  • If the source number is between two numbers representable in the target form, a representation of one of these numbers is used. (Loss of information)

But '373.5' doesn't seem to be the "closest limit" of representing 373.49999999999994 as a string -- that would be '373.49999999999994', or some other decimal representation that, when converted back to a number yields the original value.

Also: what is different about 253.38888888888889?

I am looking for a definitive reference that explains how exactly the automatic conversion of numbers to strings works in Perl.
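A hedged illustration of what is likely going on (assuming a perl built with standard 8-byte doubles): Perl’s default NV-to-string conversion behaves like printf’s %.15g — 15 significant digits, with %g trimming trailing zeros. Python’s printf-style formatting can mimic this:

```python
# Both string literals numify to doubles that are NOT exactly the
# decimal shown; the question is how each double prints back.
x = 253.38888888888889   # stored as roughly 253.38888888888888573
y = 373.49999999999994   # stored as roughly 373.49999999999994316

# 15 significant digits, %g-style (what Perl's default numeric
# stringification effectively does on common double builds):
print("%.15g" % x)  # 253.388888888889
print("%.15g" % y)  # 373.5  (rounds up at the 15th significant digit)

# 17 significant digits are enough to round-trip a double exactly:
print("%.17g" % y)  # 373.49999999999994
```

Under that reading, nothing special happens to 253.38888888888889: rounded to 15 significant digits it stays 253.388888888889, whereas 373.49999999999994 rounds up to 373.500000000000, which %g prints as 373.5.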

I’ve been trying more things with Claude Code, and I want to keep some notes on how it went. This is the second post in a probably-short series about early adventures with the tool.

Project Two: rjbs.cloud/talks

I felt good about how the Cassandane subroutine signature replacement went, and I wanted to do some more experimentation. This time, I wanted to use Claude to generate code that I wouldn’t just go delete. I wanted something low stakes, where I wouldn’t have any reason to feel annoyed by Claude’s choices in implementation. I also wanted to be able to say “yeah good enough” when I thought it was good enough, without asking anybody else to accept that the work was good enough for the repo. That is: I wanted an isolated test case where only I was taking on any risk. Easy: my personal web page!

Sure, in theory this was using company resources (my Claude subscription) for personal use, but I already got Fastmail’s $20 worth out of Claude in Cassandane. And it was professional development. Yup, felt like a great idea!

Did I mention I was in Belgium at the time? I was. I was at FOSDEM, I’d just given a talk on JMAP, and I was going to put it on my “talks I’ve given” page, but that page of my site was still on my old website, where it was manually written HTML. My current site is Jekyll, built by GitHub Pages, the definition of “good enough”. By hand, I got the talks page into Jekyll, as a horrible hybrid of Markdown and the old HTML. You can read the state I got it into in the last commit before I invited Claude to the party.

I prompted Claude with this:

The file talks/index.md contains two lists of conference presentations. One is a Markdown bullet list of links, generally to YouTube. The other is HTML with thumbnail images linked to SpeakerDeck. Convert this to a data-driven grid rendered by Jekyll, presumably using a collection. Each entry should be able to contain a text description, title, and image asset.

If you want, you can read the entire, highly-detailed transcript of this session with Claude. The short version is that I spent about 20 minutes engaged with Claude. (There are some hours-long waits in there while I did things like “go to dinner”.) Claude very quickly updated my repo to do just what I asked. It looked pretty good, too, and produced a pretty straightforward commit. The commit message is crap: I wrote it, with an audience of “myself”.

The commit adds a bunch of CSS (SCSS), which very plausibly contains pointless crap or is doing things in a goofy way. I don’t know because (a) I’m not a CSS expert and (b) I could not care less. This kind of “could not care less” echoes my feelings about the construction of the throwaway subroutine-signature-adder. This code (the CSS) would stick around, but I wasn’t going to have to think about it. The real content was the YAML file, and if the CSS got too complicated later, I could throw it all away, start over, or whatever. It was not for human consumption, and posed roughly no risk. (I did read it, because no LLM-generated code is obviously safe and correct.)

With that done, things got more interesting, in that there was more faffing about. For a talk with slides and an image, I was going to have to do this:

/_data/talks.yaml   # <- add an entry
/asset/talk-name/preview.jpg
/asset/talk-name/slides.pdf

Ugh, there was too much chance of anomaly there. I asked for a fix, Claude said “What if we use a collection?” and I said, “That’s what I suggested to start with!” Still, Claude went ahead and converted the talks.yaml file into a bunch of individual Markdown files for me, so all was forgiven. Then there was a big back and forth about how to make the preview image discoverable, where Claude was definitely confused about how things worked.

rjbs: How sure are you that files from a collection are part of static_files?

claude: (thinking to itself) The user is asking me to be honest about my certainty level again. And they’re right to push - I’ve been making assumptions. I’m not actually certain that static files within a collection directory appear in site.static_files. They might be in a different collection-specific variable, or might need to be accessed differently.

This was really frustrating, but only about 70% because of Claude. The Jekyll docs are beautiful but lousy. They sort of vaguely describe how things work in many cases, but they’re hardly comprehensive. We got there in the end, but it was tedious: I took over, made changes by hand, pushed, and checked what happened.

Project Two ½: rjbs.cloud/talks refactoring

After the previous day’s mix of success and annoyance, I went back for round two. I did a few things. I cleaned up some old posts, I reworked how the talk source files were laid out… but the main thing was I got a local build running. This was never going to be hard, because I’d done it before, but I didn’t want to futz around. I asked Claude how I could get a local build going and it said “like this” and dumped out a little shell script. I didn’t like it and asked for a few improvements. Within minutes, I had a local build-and-serve script committed.

You can read the whole transcript, boring bits and all. Also, of course the commits are all in my repo.

Was this safe to run on my laptop? Well, I think so.

Claude’s general tool-running security is pretty lousy. It says it won’t touch stuff outside the project directory, and tries to ask for permission, but it’s pretty unintelligent. When in doubt, throw it all in a container. Anthropic offers some. But there’s also sandbox mode. In sandbox mode, Claude uses bubblewrap or Seatbelt, which are effectively process-sandboxing tools. In sandbox mode, you can tell Claude to go ahead and try whatever it wants. If the kernel prevents it, then it will ask permission. This isn’t a foolproof system, but it’s suitable for “hackin’ on my personal website”.

Project Two ¾: rjbs.cloud hacking on the web

Last bit about using Claude for my personal site!

My friend David is much more of a power user of Claude than I am. The other day, he said he was kicking off some bugfixes from a taxi. I said, “How does that work?” It’s like this: Just like you can chat with Claude in your browser and ask it to explain how an optocoupler works, you can code with Claude and ask it to make changes to your software. It’s like an IDE, but there’s no human-operable editor. (Kind of like emacs, heyooooooo!)

You give Claude permission to access your GitHub repos, and it runs its edits and other tool use inside Anthropic’s compute cloud. When it’s done, it can create PRs for you.

I am not enthusiastic about this setup, although “fire off a worker to get started on a problem” is not a bad idea. It’s mostly that Claude Code on the Web is pretty limited and I don’t think it will be suitable for my primary work any time soon. Also, I’m less clear on the entire security model of delegated GitHub access through Claude. I need to really read the docs, here.

On the other hand, I fixed a bit of styling on my blog from a chat on my tablet. This all fell under “pretty neat, worked great, not interested”.

Claude Code on the Web

I’ve been slowly ramping up my use of Claude for coding issues. I’ve been meaning to write a bit more about how I use it, and had been putting that off until I finished a few things. With some of those done, I thought I’d finally write up some notes on how it’s gone. Over the next little while, I’ll post some actual work I’ve done. Later, I’ll try to write some more general thoughts: other things I might try, what general tactics have felt useful, places where I think things are particularly problematic, and so on.

I started out fairly negative on “agentic coding”, and I still have a lot of opinions, but they now include that (a) coding agents are not going anywhere and (b) the resulting code can be of sufficient quality to be worth using in real work.

Project One: Cassandane Signatures

I work on Cyrus IMAP, an open-source JMAP, IMAP, CalDAV and CardDAV server. Cassandane is the Cyrus test suite’s largest component. It’s a big pile of Perl, around 200k LOC. In general, each test is a separate subroutine stored in its own file. The whole thing has upsides and downsides. One of the smaller, but noticeable, downsides: basically none of that code used subroutine signatures. I try to always use subroutine signatures in new Perl code. I’d begun using them in some new Cassandane code, but it was just a drop in the ocean. I wanted them everywhere, and to be the clear default. My existing “convert subs to use signatures” code-munging program, which I had lying around, didn’t cut it, for a variety of boring reasons, including that it didn’t cope with Perl subroutine attributes, which Cassandane uses extensively.

I wanted to, in one swoop, convert all of Cassandane’s tests to use subroutine signatures. I considered futzing with my old code for this, but then I thought, “This seems like a nice simple job to test out Claude”. I gave Anthropic $20, installed Claude Code, and fired it up.

Claude’s strategy was a lot like mine: rather than go edit every file, it wrote a program that would edit all the files. It was sort of terrible, around 300 lines of code. Later I tried to write my own version. It never quite worked (after five to ten minutes of work, anyway), but it was close, and under 50 lines. But the good news is that Claude’s worked, and then I could delete it. If I were building a program to use and maintain, I would never have accepted that thing. But I didn’t need to. I could run the program and look at the git diff. There wasn’t even a security concern: it all lived in a container.
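The tactic itself, stripped of everything that made the real job hard, looks something like this. This is a toy illustration, not the program Claude wrote: the regex is deliberately naive (no attributes, no prototypes, no multi-line declarations), and the file name is invented.

```shell
# Toy sketch of "write a program that edits all the files": run a one-shot
# in-place rewrite over every .pm file, then review the result as a git diff.
cd "$(mktemp -d)"
printf 'sub new {\n    my ($class) = @_;\n}\n' > Example.pm

# Naively give every parameterless sub an empty signature.
find . -name '*.pm' -print0 \
  | xargs -0 perl -i -pe 's/\bsub (\w+)\s*\{/sub $1 () {/'

head -n 1 Example.pm   # -> sub new () {
```

The throwaway nature of the program is the point: it only has to be correct once, because the diff, not the program, is the reviewable artifact.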

Claude needed help. Its first go was so-so. Claude couldn’t check its own work because it didn’t know how to use the Docker-driven build-and-test system I use for Cyrus, and so Claude couldn’t run the tests. It could compile-test the tests, though, which went a long way. It iterated for an hour or so. Sometimes I’d hop in and tell it what it was doing wrong, or that it could stop worrying about some issue.

When it was done, I had a diff that was thousands of lines long and touched 1,500+ files. I spent a long time (several shifts of 15 minutes each) reviewing the diff. The diff was so close to perfectly uniform as to be mind-numbing. But it was my job to make sure I wasn’t sending bogus changes to a colleague for review without vetting them first. (After all, had I written my own code-transforming program entirely by hand and run that, I wouldn’t have sent its output along for code review without a careful reading!)

I found some minor bugs and fixed them in separate commits. You can read the whole changeset if you want. You’ll see it’s six commits by me, one by Claude.

If this was the only value I got out of the $20, it would’ve been well worth it, but I went on to get a lot more done with that $20. I’ll write more about some other, more interesting work, over the next few days.

TL;DR

I didn’t like how the default zsh prompt truncation works. My solution, used in my own hand-rolled prompt (fully supported by promptinit), uses a precmd hook to dynamically determine the terminal’s available width.

Instead of blindly chopping, my custom logic ensures human-readable truncation by following simple rules: it always preserves the home directory (~) and the current directory name, only removing or shortening non-critical segments in the middle to keep the PS1 clean, contextual, and strictly single-line. This is done via a so-called “zig-zag” pattern: string splitting on certain delimiters.
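The rules above could be sketched roughly like this. This is my own illustration of the idea, not the author’s prompt code; the function name and the shortening rule (first letter of each middle segment) are assumptions.

```shell
# Hypothetical sketch of the truncation rules: keep "~" (or the root) and
# the final directory name intact; shorten every middle segment to its
# first letter.
shorten_path() {
  local p=$1 out='' seg rest last
  last=${p##*/}               # current directory name: always preserved
  rest=${p%/*}
  if [ "$rest" = "$p" ]; then # no "/" at all, nothing to shorten
    printf '%s\n' "$p"
    return
  fi
  local IFS=/
  for seg in $rest; do
    case $seg in
      '~'|'') out+="$seg/" ;;      # keep "~" and the leading root "/" as-is
      *)      out+="${seg:0:1}/" ;;
    esac
  done
  printf '%s\n' "$out$last"
}

shorten_path '~/src/projects/website'   # -> ~/s/p/website
```

In the real prompt, a precmd hook would presumably apply something like this only when $PWD would overflow the width it measured, keeping the PS1 on one line.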

TPRC Presentation Coaches Available

Perl Foundation News

The deadline for talks looms large, but assistance awaits!

This year, we have coaches available to help write your talk description, and to support you in developing the talk.

If you have a talk you would like to give, but cannot flesh out the idea before the deadline (March 15th; 6 days from now!), you should submit your bare-bones idea and check "Yes" on "Do you need assistance in developing this talk?".

We have more schedule space for talks than we did last year, and we would love to add new voices and wider topics, but time is of the essence, so go to https://tprc.us/ , and spill the beans on your percolating ideas!

In my Perl code, I'm writing a package within which I define a __DATA__ section that embeds some Perl code.

Here is an excerpt of the code that gives an error:

package remote {
__DATA__
print "$ENV{HOME}\n";
}

The error is shown below:

Missing right curly or square bracket at ....
The lexer counted more opening curly or square brackets than closing ones.
As a general rule, you'll find it's missing near the place you were last editing.

I can't seem to find any mis-matched brackets.

In contrast, when I rewrite the same package without braces, the code works:

package remote;
__DATA__
print "$ENV{HOME}\n";

I'd be grateful if the experienced folks could highlight the gap in my understanding. FWIW, I'm using Perl 5.36.1, in case that matters.

Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater than or equal to 12.

  1. Clone - recursively copy Perl datatypes
    • Version: 0.48 on 2026-03-02, with 33 votes
    • Previous CPAN version: 0.48_07 was 6 days before
    • Author: ATOOMIC
  2. CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
    • Version: 20260301.001 on 2026-03-01, with 25 votes
    • Previous CPAN version: 20260228.001
    • Author: BRIANDFOY
  3. Date::Manip - Date manipulation routines
    • Version: 6.99 on 2026-03-02, with 20 votes
    • Previous CPAN version: 6.98 was 9 months before
    • Author: SBECK
  4. DateTime::TimeZone - Time zone object base class and factory
    • Version: 2.67 on 2026-03-05, with 22 votes
    • Previous CPAN version: 2.66 was 2 months, 25 days before
    • Author: DROLSKY
  5. Devel::Cover - Code coverage metrics for Perl
    • Version: 1.52 on 2026-03-07, with 104 votes
    • Previous CPAN version: 1.51 was 7 months, 11 days before
    • Author: PJCJ
  6. ExtUtils::MakeMaker - Create a module Makefile
    • Version: 7.78 on 2026-03-03, with 64 votes
    • Previous CPAN version: 7.77_03 was 1 day before
    • Author: BINGOS
  7. Mail::DMARC - Perl implementation of DMARC
    • Version: 1.20260306 on 2026-03-06, with 37 votes
    • Previous CPAN version: 1.20260301 was 5 days before
    • Author: MSIMERSON
  8. Module::Build::Tiny - A tiny replacement for Module::Build
    • Version: 0.053 on 2026-03-03, with 16 votes
    • Previous CPAN version: 0.052 was 9 months, 22 days before
    • Author: LEONT
  9. Number::Phone - base class for Number::Phone::* modules
    • Version: 4.0010 on 2026-03-06, with 24 votes
    • Previous CPAN version: 4.0009 was 2 months, 27 days before
    • Author: DCANTRELL
  10. PDL - Perl Data Language
    • Version: 2.103 on 2026-03-03, with 101 votes
    • Previous CPAN version: 2.102
    • Author: ETJ
  11. SPVM - The SPVM Language
    • Version: 0.990141 on 2026-03-06, with 36 votes
    • Previous CPAN version: 0.990140
    • Author: KIMOTO
  12. SVG - Perl extension for generating Scalable Vector Graphics (SVG) documents.
    • Version: 2.89 on 2026-03-05, with 18 votes
    • Previous CPAN version: 2.88 was 9 days before
    • Author: MANWAR
  13. Sys::Virt - libvirt Perl API
    • Version: v12.1.0 on 2026-03-03, with 17 votes
    • Previous CPAN version: v12.0.0 was 1 month, 18 days before
    • Author: DANBERR
  14. Unicode::UTF8 - Encoding and decoding of UTF-8 encoding form
    • Version: 0.68 on 2026-03-02, with 20 votes
    • Previous CPAN version: 0.67
    • Author: CHANSEN
  15. X11::korgwm - a tiling window manager for X11
    • Version: 6.0 on 2026-03-07, with 14 votes
    • Previous CPAN version: 5.0 was 1 year, 1 month, 15 days before
    • Author: ZHMYLOVE
  16. Zonemaster::Engine - A tool to check the quality of a DNS zone
    • Version: 8.001001 on 2026-03-04, with 35 votes
    • Previous CPAN version: 8.001000 was 2 months, 16 days before
    • Author: ZNMSTR

In zsh you can use CORRECT_IGNORE_FILE to ignore files for spelling correction (or autocorrect for commands). While handy, it is somewhat limited, as it is global. I wanted to ignore files only for git and not for other commands, but I haven’t found a way to target only git without making a wrapper around git (which I don’t want to do).

So I wrote an autoloaded function that does this for me. The idea is rather simple. In your .zshrc you set a zstyle that says which files should be ignored, based on files (or directories) that exist in the current directory. From that, the function either builds the CORRECT_IGNORE_FILE environment variable or unsets it. The function is then hooked into the chpwd action. I went with three default check types: directory, file, or mere existence (d, f, or e). File wins, then directory, then existence.
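A minimal sketch of that idea follows. The marker names and ignore patterns are invented for illustration, and a plain variable stands in for the zstyle lookup; the author’s actual function is not shown in the post.

```shell
# Pick a CORRECT_IGNORE_FILE pattern based on markers in the current
# directory. Rules are "type:marker:pattern" triples; file (f) wins over
# directory (d), which wins over bare existence (e).
set_correct_ignore() {
  local rules='f:.gitmodules:*.orig d:node_modules:npm-debug.log e:Makefile:*.o'
  local type rule rtype rest marker pattern found=''
  set -f   # the patterns contain "*"; don't let the shell glob them
  for type in f d e; do
    for rule in $rules; do
      rtype=${rule%%:*}; rest=${rule#*:}
      marker=${rest%%:*}; pattern=${rest#*:}
      [ "$rtype" = "$type" ] || continue
      case $type in
        f) [ -f "$marker" ] && found=$pattern ;;
        d) [ -d "$marker" ] && found=$pattern ;;
        e) [ -e "$marker" ] && found=$pattern ;;
      esac
      if [ -n "$found" ]; then break 2; fi
    done
  done
  set +f
  if [ -n "$found" ]; then
    CORRECT_IGNORE_FILE=$found
  else
    unset CORRECT_IGNORE_FILE
  fi
}

# In zsh you would register it to run on every directory change:
#   autoload -Uz add-zsh-hook
#   add-zsh-hook chpwd set_correct_ignore
```

Iterating the types in f, d, e order is what gives the precedence the post describes: the first matching rule of the winning type sets the pattern and stops the search.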

Episode 9 - Olaf Kolkman (part 1)

The Underbar
Olaf Kolkman has had a long career in open source. In this first part, we discussed his involvement with Perl, DNSSEC and NLnet Labs.

(dlxxxix) 16 great CPAN modules released last week

Niceperl
Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater than or equal to 12.

  1. Amon2 - lightweight web application framework
    • Version: 6.18 on 2026-02-28, with 27 votes
    • Previous CPAN version: 6.17 was 1 day before
    • Author: TOKUHIROM
  2. App::DBBrowser - Browse SQLite/MySQL/PostgreSQL databases and their tables interactively.
    • Version: 2.439 on 2026-02-23, with 18 votes
    • Previous CPAN version: 2.438 was 1 month, 29 days before
    • Author: KUERBIS
  3. Beam::Wire - Lightweight Dependency Injection Container
    • Version: 1.031 on 2026-02-25, with 19 votes
    • Previous CPAN version: 1.030 was 20 days before
    • Author: PREACTION
  4. CPAN::Uploader - upload things to the CPAN
    • Version: 0.103019 on 2026-02-23, with 25 votes
    • Previous CPAN version: 0.103018 was 3 years, 1 month, 9 days before
    • Author: RJBS
  5. CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
    • Version: 20260228.001 on 2026-02-28, with 25 votes
    • Previous CPAN version: 20260225.001 was 2 days before
    • Author: BRIANDFOY
  6. DBD::mysql - A MySQL driver for the Perl5 Database Interface (DBI)
    • Version: 4.055 on 2026-02-23, with 67 votes
    • Previous CPAN version: 5.013 was 6 months, 19 days before
    • Author: DVEEDEN
  7. Google::Ads::GoogleAds::Client - Google Ads API Client Library for Perl
    • Version: v31.0.0 on 2026-02-25, with 20 votes
    • Previous CPAN version: v30.0.0 was 27 days before
    • Author: DORASUN
  8. LWP::Protocol::https - Provide https support for LWP::UserAgent
    • Version: 6.15 on 2026-02-23, with 22 votes
    • Previous CPAN version: 6.14 was 1 year, 11 months, 12 days before
    • Author: OALDERS
  9. Mail::DMARC - Perl implementation of DMARC
    • Version: 1.20260226 on 2026-02-27, with 36 votes
    • Previous CPAN version: 1.20250805 was 6 months, 21 days before
    • Author: MSIMERSON
  10. MetaCPAN::Client - A comprehensive, DWIM-featured client to the MetaCPAN API
    • Version: 2.039000 on 2026-02-28, with 27 votes
    • Previous CPAN version: 2.038000 was 29 days before
    • Author: MICKEY
  11. SPVM - The SPVM Language
    • Version: 0.990138 on 2026-02-28, with 36 votes
    • Previous CPAN version: 0.990137
    • Author: KIMOTO
  12. SVG - Perl extension for generating Scalable Vector Graphics (SVG) documents.
    • Version: 2.88 on 2026-02-23, with 18 votes
    • Previous CPAN version: 2.87 was 3 years, 9 months, 3 days before
    • Author: MANWAR
  13. Test2::Harness - A new and improved test harness with better Test2 integration.
    • Version: 1.000163 on 2026-02-24, with 28 votes
    • Previous CPAN version: 1.000162 was 3 days before
    • Author: EXODIST
  14. Tickit - Terminal Interface Construction KIT
    • Version: 0.75 on 2026-02-27, with 29 votes
    • Previous CPAN version: 0.74 was 2 years, 5 months, 22 days before
    • Author: PEVANS
  15. TimeDate - Date and time formatting subroutines
    • Version: 2.34 on 2026-02-28, with 28 votes
    • Previous CPAN version: 2.34_01
    • Author: ATOOMIC
  16. Unicode::UTF8 - Encoding and decoding of UTF-8 encoding form
    • Version: 0.66 on 2026-02-25, with 20 votes
    • Previous CPAN version: 0.65 was 1 day before
    • Author: CHANSEN