Published by JRaspass on Wednesday 17 September 2025 15:31
Modernise overloading pragma a little:
- Move the version declaration into the package line.
- Use v5.40 to get strict, warnings, and the module_true feature.
- Make the private sub lexical; it's unlikely to be used in darkpan.
- Make use of subroutine signatures.
Published by khwilliamson on Wednesday 17 September 2025 15:31
Support Unicode 17.0. This adds full support for this latest version of Unicode. What was essentially missing was updating the rules for the break properties, like \b{wb}. This is always a pain, but the changes made for 15.1 and 16.0 made it much easier.
Add Unicode 17.0. This includes updates to a few Perl files that need to know the current Unicode version, and regenerates Perl files that depend on the Unicode data.
Published by khwilliamson on Wednesday 17 September 2025 15:31
Use Unicode 17.0 name for split property. Unicode 16.0 created a subcategory of hyphens containing just U+2010 "HYPHEN". They did not name it, so I called it U2010. Unicode 17.0 does name it as HH (and adds more code points to it). So this commit changes the name to HH, in preparation for 17.0.
Published by khwilliamson on Wednesday 17 September 2025 15:31
mktables: White-space only indent statement properly
Published by U. Windl on Wednesday 17 September 2025 14:17
In a CGI script that processes arrays of strings, the user can specify for each field whether to match a string there or not. If all user-specified criteria are fulfilled, the specific array should be output.
As the test function depends on the user input, I thought I'd avoid checking the user input each time, and build a test function at runtime that only tests for the values the user had specified.
Unfortunately it never worked, and in a debug session I found out that the closure returns the expression as a string instead of evaluating it.
Example:
DB<10> x $matcher->(\@fields)
0 '$_[0]->[0] =~ /Zi/ && $_[0]->[1] =~ /A/'
DB<11> x eval $matcher->(\@fields)
0 ''
The code fragments to construct the closure are:
# collect the specified fields in @matches like this
# (the search_fields strings use the same index as the target):
for (my $i = 0; $i <= $#search_fields; ++$i) {
push(@matches, "\$_[0]->[$i] =~ /$search_fields[$i]/")
if (defined $search_fields[$i] && length($search_fields[$i]) > 0);
}
# so the content might be (simple case):
# DB<13> x @search_fields
#0 'Zi'
#1 'A'
#2 undef
#3 undef
#4 undef
#5 undef
#6 undef
#7 undef
#8 undef
#9 undef
#10 undef
# DB<7> x @_matches
#0 '$_[0]->[0] =~ /Zi/'
#1 '$_[0]->[1] =~ /A/'
# (the array given as parameter has the indices to compare pre-computed)
# construction of the closure using eval
eval {
$matcher = sub ($) { join(' && ', @matches) }
} or $matcher = sub ($) { 0 };
# I also had tried "$matcher = eval { sub ($) { join(' && ', @matches) } }" before
# And for performance reasons I want to avoid using eval in the sub itself
# calling the closure:
if ($select = $matcher->(\@fields)) {
# record matches criteria
}
As it is now, the closure matches every record. How do I do this correctly and efficiently, while keeping it maintainable (the number and order of fields may change)?
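For illustration, here is one way to get a real compiled matcher instead of a string (a sketch, assuming @search_fields as above; the names @tests, $t and $fields are mine): compile one qr// test per specified field and AND them together in a closure:

my @tests;
for my $i (0 .. $#search_fields) {
    next unless defined $search_fields[$i] && length $search_fields[$i];
    my $re = qr/$search_fields[$i]/;        # compile the pattern once
    push @tests, sub { $_[0][$i] =~ $re };  # each loop iteration captures its own $i
}
my $matcher = @tests
    ? sub {
        my ($fields) = @_;
        for my $t (@tests) { return 0 unless $t->($fields) }
        return 1;
      }
    : sub { 0 };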
Published by U. Windl on Wednesday 17 September 2025 06:05
I wrote some Perl CGI script that was working rather well when using GET requests only. Now the script fails to see some of the parameters that are passed via the URI.
I used this code for debugging the parameters:
sub some_sub($)
{
my $query = shift;
#...
if (my @p = $query->param()) { # debug
@p = map {
my $n = $_;
(map { uri_escape($n) . '=' . uri_escape($_) }
$query->multi_param($n))
} @p;
print $query->comment('p0: ' . join('|', @p)) . "\n";
}
#...
}
Now the point is: if I replace $query->param() with $query->url_param(), then I suspect I cannot use $query->multi_param (because if the param method fails to list the parameter, then it's unlikely that multi_param will know it).
The problem I'm facing is (you won't understand the meaning, I'm sure) that a parameter string
?st=joe;ds=K;au=1;sf-0=1;sf-1=1;sf-8=1;sf-4=1;sf-5=1;sf-7=1;cs-0=1;cs-1=1;cs-8=1;cs-4=1;cs-7=1;um=2;fn=10
produces
<!-- p0: st=joe|ds=K|au=1|sf-0=1|sf-1=1|sf-8=1|sf-4=1|sf-5=1|sf-7=1|cs-0=1|cs-1=1|cs-8=1|cs-4=1|cs-7=1 -->
and I'm wondering where the um=2 went (it is expected that fn=10 is missing, because it had been explicitly deleted before).
The um parameter comes from <input type="hidden" name="um" value="2">, the last element just before </form>, while the other parameters shown are from regular form fields (the sf-* and cs-* parameters come from a lengthy list of checkboxes).
This is driving me crazy:
Given the query string (from Firefox)
?cs-8=1&fn=10&st=joe&ds=U&cs-7=1&cs-1=1&sf-0=1&um=2&sf-4=1&sf-7=1&sf-5=1&sf-1=1&au=1&cs-0=1&cs-4=1&sf-8=1
and this newly added debug code:
if (my @p = $query->url_param()) { # debug
@p = map {
my $n = $_;
($_ .':', map { uri_escape($n) . '=' . uri_escape($_) }
$query->param($n))
} @p;
print $query->comment('pU: ' . join('|', @p)) . "\n";
}
I see:
<!-- pU: st:|st=joe|fn:|ds:|ds=U|cs-8:|cs-8=1|sf-4:|sf-4=1|sf-7:|sf-7=1|cs-7:|cs-7=1|cs-1:|cs-1=1|sf-0:|sf-0=1|um:|sf-8:|sf-8=1|cs-4:|cs-4=1|cs-0:|cs-0=1|sf-1:|sf-1=1|sf-5:|sf-5=1|au:|au=1 -->
So the parameter um is present, but it does not have a value!?
Published by user3408541 on Tuesday 16 September 2025 22:38
My task is to parse the protein names by removing the brackets and parentheses in the row.
In short, I want to retain the words in front of any parentheses and brackets.
Note that I need to keep symbols in the main words like H(+)/Cl(-) exchange transporter 6.
Data File:
data = {
"Entry": ["A0A087X1C5", "A0A0B4J2F0", "O00468", "P51797", "O75164"],
"Reviewed": ["reviewed"] * 5,
"Entry Name": ["CP2D7_HUMAN", "PIOS1_HUMAN", "AGRIN_HUMAN", "CLCN6_HUMAN", "KDM4A_HUMAN"],
"Protein names": [
"Putative cytochrome P450 2D7 (EC 1.14.14.1)",
"Protein PIGBOS1 (PIGB opposite strand protein 1)",
"Agrin [Cleaved into: Agrin N-terminal 110 kDa subunit; Agrin C-terminal 110 kDa subunit; Agrin C-terminal 90 kDa fragment (C90); Agrin C-terminal 22 kDa fragment (C22)]",
"H(+)/Cl(-) exchange transporter 6 (Chloride channel protein 6) (ClC-6) (Chloride transport protein 6)",
"Lysine-specific demethylase 4A (EC 1.14.11.66) (EC 1.14.11.69) (JmjC domain-containing histone demethylation protein 3A) (Jumonji domain-containing protein 2A) ([histone H3]-trimethyl-L-lysine(36) demethylase 4A) ([histone H3]-trimethyl-L-lysine(9) demethylase 4A)"
],
"Gene Names": ["CYP2D7", "PIGBOS1", "AGRN AGRIN", "CLCN6 KIAA0046", "KDM4A JHDM3A JMJD2 JMJD2A KIAA0677"],
"Length": [515, 54, 2068, 869, 1064],
"STRING": [None, "9606.ENSP00000484893", "9606.ENSP00000368678", "9606.ENSP00000234488", "9606.ENSP00000361473"]
}
Expected Result:
data = {
"Entry": ["A0A087X1C5", "A0A0B4J2F0", "O00468", "P51797", "O75164"],
"Reviewed": ["reviewed"] * 5,
"Entry Name": ["CP2D7_HUMAN", "PIOS1_HUMAN", "AGRIN_HUMAN", "CLCN6_HUMAN", "KDM4A_HUMAN"],
"Protein names": [
"Putative cytochrome P450 2D7",
"Protein PIGBOS1",
"Agrin",
"H(+)/Cl(-) exchange transporter 6",
"Lysine-specific demethylase 4A"
],
"Gene Names": ["CYP2D7", "PIGBOS1", "AGRN AGRIN", "CLCN6 KIAA0046", "KDM4A JHDM3A JMJD2 JMJD2A KIAA0677"],
"Length": [515, 54, 2068, 869, 1064],
"STRING": [None, "9606.ENSP00000484893", "9606.ENSP00000368678", "9606.ENSP00000234488", "9606.ENSP00000361473"]
}
When I tried the first approach, it removed all parentheses and brackets without considering the intermediate symbols.
$string =~ s/\s*(\(|\[).*?(\)|\])\s*$//;
# Before
# H(+)/Cl(-) exchange transporter 6 (Chloride channel protein 6) (ClC-6) (Chloride transport protein 6)
# After
# H
I also tried the step-by-step process using the second code, but it removed the intermediate symbols too.
@values = split(/[\(\)]/,$string);
@values = split(/[\[\]]/,$string);
# Before
# Very-long-chain (3R)-3-hydroxyacyl-CoA dehydratase 1 (EC 4.2.1.134) (3-hydroxyacyl-CoA dehydratase 1) (HACD1) (Cementum-attachment protein) (CAP) (Protein-tyrosine phosphatase-like member A)
# After
# Very-long-chain (didn't work)
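For what it's worth, here is a sketch that handles every row shown, including the Very-long-chain one: repeatedly strip a whitespace-preceded (...) or [...] group from the end of the string, so groups embedded inside the name survive:

my $string = 'Very-long-chain (3R)-3-hydroxyacyl-CoA dehydratase 1 (EC 4.2.1.134) (3-hydroxyacyl-CoA dehydratase 1) (HACD1) (Cementum-attachment protein) (CAP) (Protein-tyrosine phosphatase-like member A)';
# Repeatedly remove one trailing, space-separated group: either (...)
# (nested parens handled by the recursive (?1)) or [...]. Inline symbols
# like H(+) survive because they are neither space-preceded nor trailing.
1 while $string =~ s/
    \s+
    (?: ( \( (?: [^()]++ | (?1) )* \) )   # a (...) group, nesting allowed
      |   \[ [^\[\]]*+ \]                 # or a [...] group
    )
    \s*\z//x;
# $string is now "Very-long-chain (3R)-3-hydroxyacyl-CoA dehydratase 1"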
Published by U. Windl on Tuesday 16 September 2025 18:41
I have a HTML search form that allows the user to input strings to search for.
For several reasons the strings the user enters are transformed to rather complex regular expressions, but the user-supplied strings are always put inside \Q...\E, so that any * would match literally.
Now I'm considering an "expert mode" where the user will be allowed to enter a regular expression that is used for searching instead. However, I see two risks (one of them involving qr). Is there a way to allow the user to enter a regular expression and validate it, but also prevent any variable expansion or function execution?
Mangling the user-supplied regular expression with a set of regular expressions to make it safe seems quite dangerous (because being incomplete) to me.
From Is it safe to read regular expressions from a file? it seems that any "code like" constructs found in a variable aren't evaluated when expanding such variable inside a match.
Could I prevent the user from using (e.g.) capture groups?
Or should I restrict the regular expressions severely, to use only *.?+| as metacharacters (plus maybe anchoring using ^ and $)? If allowing character classes, then parsing begins to become messy.
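One practical angle (a sketch, assuming the CGI $query object from the related posts; it guards against syntax errors and embedded code, but not against pathologically slow patterns): Perl refuses to honour embedded-code constructs like (?{...}) in a pattern built from an interpolated string unless "use re 'eval'" is in effect, so compiling inside eval is enough to validate:

my $input = $query->param('regex') // '';
# qr// on an interpolated string cannot run (?{...}) code blocks unless
# "use re 'eval'" is enabled, so eval only has to catch syntax errors
my $re = eval { qr/$input/ };
unless (defined $re) {
    $re = qr/\Q$input\E/;   # invalid pattern: fall back to a literal match
}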
Published by Jason A. Crome on Tuesday 16 September 2025 02:59
At long last - Dancer2 2.0.0!
I apologize it took longer than expected - open source doesn't always move as fast as we'd like it to - but there are a lot of great things in this release that make it worth the wait.
Head on over to Perl.com to check out the details. Here's a quick summary of what's new:
- a new on_hook_exception hook
We're really excited for this release, and we hope you are too!
Keep Dancing!
Jason/CromeDome
Version 2.0.0 of Dancer 2 has landed! Congratulations to Jason Crome and the rest of the Dancer team on this release.
Not sure if anyone was using it but me, but the TypePad blogging service, which was based on the blogging platform MovableType (which was written in Perl), is shutting down at the end of this month.
Here is the announcement from the Everything TypePad blog.
Apologies if this is not relevant to the sub or old news.
The Dancer Core Team project is proud to announce the release of Dancer2 2.0.0!
This release has been a long time coming, and while open source sometimes takes longer than we'd like, we believe the wait has been worth it. With fresh documentation, architectural improvements, and developer-friendly new features, version 2.0.0 represents a significant evolution of Dancer2 and the Perl web ecosystem.
If you'd like a more extensive overview, I gave a talk at the Perl and Raku Conference 2025 about Dancer2 2.0.0, covering new features and where we're headed next. You can watch the full presentation here: Dancer2 2.0.ohh myyy on YouTube →
Every major release is an opportunity to take stock of where a project stands and where it's going, and for Dancer2, 2.0.0 represents exactly such a milestone.
Here are some of the most important changes included in Dancer2 2.0.0:
Brand New Documentation
Thanks to a grant from the Perl and Raku Foundation, Dancer2's documentation has been completely rewritten. Clearer guides and reference materials will make it easier than ever to get started and to master advanced features. Read the docs →
Extendable Configuration System
Thanks to first-time contributor Mikko Koivunalho, the new configuration system allows for greater flexibility in how Dancer2 applications are configured and extended. Developers can build new configuration modules, and even integrate configuration systems from other applications or frameworks.
Building on this work, long-time core team member Yanick Champoux enhanced the new configuration system even further, enabling additional configuration readers to be bootstrapped by the default one. Learn more in the config guide →
A Leaner Core Distribution
Why have two templating systems in the core framework when you can have zero?
- Dancer2::Template::Simple has been removed from the core. It is now available as a separate distribution on MetaCPAN for migration projects from Dancer 1.
- Template::Tiny has been retired, with its improvements merged upstream (thanks, Karen Etheridge!), and Dancer2::Template::TemplateTiny is now just an adapter for the official version. See the documentation →
Smarter Data Handling
Dancer2 now supports configurable data/secrets censoring using Data::Censor, helping developers protect sensitive information in logs and debug pages. Learn more about Data::Censor →
Better Logging and Debugging
Hooks are now logged as they are executed, and a brand-new hook, on_hook_exception, provides a way to handle unexpected issues more gracefully. See the hooks documentation →
CLI Improvements
The command-line interface also received some attention.
Dancer2 is what it is because of its community. A heartfelt thank you goes out to everyone who made this release possible, including:
- Karen Etheridge, for her work on Template::Tiny, helping us to streamline the Dancer2 core.

Your contributions and support are what keep the project moving forward.
Dancer2 remains an active, community-driven project, and version 2.0.0 shows our continued dedication to advancing the framework and supporting our community.
We invite you to try out Dancer2 2.0.0, explore the new documentation, and join the conversation on GitHub or the project's mailing list.
We're excited about this release and can't wait to see what the Perl community builds with it.
Jason (CromeDome) and the Dancer2 Core Team
Published by kobaken on Monday 15 September 2025 07:18
Hi, I'm kobaken.
I've released an adapter for using Inertia.js with Mojolicious, a Perl backend framework. Inertia.js is a bridging library that connects backend frameworks with components written in React, Vue, and other frameworks.
https://metacpan.org/pod/Mojolicious::Plugin::Inertia
Inertia.js is a framework that connects backend frameworks with React or Vue components. It allows you to use modern frontend components while maintaining a traditional backend approach.
I figure this is an approach that fits well when you have substantial assets on the backend side, but replacing everything would be too costly, and you want to start by improving the client-side experience.
When using components written in React, Vue, etc. with backend frameworks like Mojolicious, I believe there are two approaches. One is to mix JavaScript code into the backend framework's proprietary template engine. The other is to call backend APIs and handle rendering with CSR or SSR. Both approaches result in more complex architecture compared to the days when you could simply select a template on the backend and pass values to it.
With Inertia.js, you can use React and other components while maintaining the good old way of writing code. Specifically, it works as follows: by writing $c->inertia(COMPONENT, PROPS) on the backend side, Inertia.js will connect with the component.
Another point I found appealing is that the protocol for this connection is simple. It's remarkably simple.
For example, if you write it like this...
$c->inertia('Index', {
user => { name => 'Mojolicious' }
})
It returns this JSON:
{
"component": "Index",
"props": { "user": { "name": "Mojolicious" } },
"url": "/",
"version": "c53bc3824540ea95b1e9b495c7a01a9d"
}
So straightforward!
In fact, looking at Inertia.js's protocol, it's content you can finish reading in about 10 minutes, and Mojo's plugin is less than 100 lines, so I think it would be easy to port to other Perl frameworks.
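To illustrate how little is involved, the core of such an adapter might look like this in Mojolicious::Lite (a sketch based on my reading of the protocol, not the plugin's actual code; $ASSET_VERSION and the 'app' template are hypothetical):

use Mojolicious::Lite -signatures;

my $ASSET_VERSION = 'c53bc3824540ea95b1e9b495c7a01a9d';

helper inertia => sub ($c, $component, $props = {}) {
    my $page = {
        component => $component,
        props     => $props,
        url       => $c->req->url->path->to_string,
        version   => $ASSET_VERSION,
    };
    # Inertia requests send an X-Inertia header and expect bare JSON;
    # a first visit gets the HTML shell with the page object embedded
    return $c->req->headers->header('X-Inertia')
        ? $c->render( json => $page )
        : $c->render( template => 'app', page => $page );
};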
Recently, in another project, I made React Server Components usable with Hono, but understanding that in 10 minutes is difficult for me. Since Inertia.js becomes CSR, I think React Server Components is better if you're looking for user experience and SEO, but I believe there are situations where Inertia.js is sufficient.
https://zenn.dev/kfly8/articles/hono-meets-vite-rsc (Japanese)
Please give it a try if you'd like!
Published by /u/pmz on Sunday 14 September 2025 13:48
Regardless of TIOBE's trustworthiness, it has stirred a lively discussion around the language.
Published by Simon Green on Sunday 14 September 2025 12:52
Each week Mohammad S. Anwar sends out The Weekly Challenge, a chance for all of us to come up with solutions to two weekly tasks. My solutions are written in Python first, and then converted to Perl. It's a great way for us all to practice some coding.
You are given an m x n matrix.
Write a script to find the highest row sum in the given matrix.
For input from the command line, I take a JSON-formatted string and turn it into a list of list of integers (arrays in Perl).
This is a one-liner in Python, using the sum function to calculate the sum of each row, and the max function to return the highest (maximum) sum.
def highest_row(matrix: list[list[int]]) -> int:
return max(sum(row) for row in matrix)
While Perl does not have built-in max and sum functions, they are provided by the List::Util module. Therefore the solution is similar.
use List::Util qw(max sum);
sub main ($matrix) {
my $max = max( map { sum @$_ } @$matrix );
say $max;
}
$ ./ch-1.py "[[4, 4, 4, 4], [10, 0, 0, 0], [2, 2, 2, 9]]"
16
$ ./ch-1.py "[[1, 5], [7, 3], [3, 5]]"
10
$ ./ch-1.py "[[1, 2, 3], [3, 2, 1]]"
6
$ ./ch-1.py "[[2, 8, 7], [7, 1, 3], [1, 9, 5]]"
17
$ ./ch-1.py "[[10, 20, 30], [5, 5, 5], [0, 100, 0], [25, 25, 25]]"
100
You are given two integer arrays, @arr1 and @arr2.
Write a script to find the maximum difference between any pair of values from both arrays.
For input from the command line, I take two strings and separate them on non-digit characters to turn them into two lists (arrays in Perl).
There are at least two possible ways to solve this, each with pros and cons.
I could extract the minimum and maximum values of each array, and compare the minimum of one list with the maximum of the other. The advantage of this is we only scan each list once, and do three additional calculations. The con is that it is significantly more code than the other solution.
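For what it's worth, that first approach is only a few lines in Perl (a sketch; the sub name is mine):

use List::Util qw(min max);

sub max_distance_minmax ($arr1, $arr2) {
    # the largest |a - b| always pairs an extreme of one array
    # with the opposite extreme of the other
    return max( max(@$arr1) - min(@$arr2),
                max(@$arr2) - min(@$arr1) );
}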
The solution that I did write is a one-liner in Python, using a double list comprehension and the abs and max functions.
def max_distance(arr1: list[int], arr2: list[int]) -> int:
return max(abs(a - b) for a in arr1 for b in arr2)
Perl does not have list comprehension (it does have map, but double mapping is asking for a world of pain). Therefore I use two foreach loops to get the same result.
sub main (@arrays) {
my @arr1 = split /\D+/, $arrays[0];
my @arr2 = split /\D+/, $arrays[1];
my $max = 0;
for my $a1 (@arr1) {
for my $a2 (@arr2) {
my $dist = abs( $a1 - $a2 );
$max = $dist if $dist > $max;
}
}
say $max;
}
$ ./ch-2.py "4 5 7" "9 1 3 4"
6
$ ./ch-2.py "2 3 5 4" "3 2 5 5 8 7"
6
$ ./ch-2.py "2 1 11 3" "2 5 10 2"
9
$ ./ch-2.py "1 2 3" "3 2 1"
2
$ ./ch-2.py "1 0 2 3" "5 0"
5
Published on Sunday 14 September 2025 00:00
Published by /u/niceperl on Saturday 13 September 2025 20:33
Published by prz on Saturday 13 September 2025 22:32
Published by /u/Europia79 on Friday 12 September 2025 23:49
Already posted this to the Perl Discord server, but Git-for-Windows comes with only a partial and broken Perl installation: not one that is "fully featured", like with cpanm support for actual Perl development. Yes, this should be fucking illegal, imo !!! However, I was finally able to install Strawberry Perl and "get" them to play nice together, lol :P To do this, I had to loop thru all the files in the Perl folder and delete them from the Git folder (via a Terminal with Administrative privileges). Then put the Perl PATH before the Git PATH. This works !!! FAQ:
Seems to be an "implementation detail" of Git-for-Windows. And YES: Perl works on Windows 7 :P
Published by Mohammad Sajid Anwar on Friday 12 September 2025 02:22
Design Pattern Factory: Moo vs experimental class feature.
Please check out the link for more information:
https://theweeklychallenge.org/blog/design-pattern-factory
We often rely on our tools and just deploy new DB versions and move on.
Let's look at these simple examples.
Example 1
You have Schema v1 where a table's column has the name `X`. In the next Schema v2, instead of it you created a column named `Y`.
v1 -> v2
X -> -
- -> Y
So the tool correctly drops the `X` and creates `Y`.
Example 2:
For downgrades it looks similar:
v2 -> v1
Y -> -
- -> X
Simple! Right??
Example 3
Let's do it in a more advanced way. Now instead of create/drop we will rename the field:
v1 -> v2
X -> Y{renamed X}
In this scenario SQL:T will detect the `renamed` option and will generate `ALTER ...` statements correctly instead of CREATE/DROP ones.
Example 4
Let's move to Schema v3 where we create `X` and drop `Y` (like we did in the example 1):
v2 -> v3
Y{renamed X} -> X
So here in the destination Schema v3, column `X` does not have any extra info in its definition (e.g. {renamed Y}), thus the tool will just create column X and drop column Y. Nice!
Example 5
But let's double check downgrade direction.
Now the picture will be seen by the tool as next:
v3 -> v2
X -> Y {renamed X}
The tool will detect `{renamed X}` info at the destination and will generate `ALTER ...` statements.
STOP! We did not do any renaming during the v2 -> v3 migration. Hm... this looks suspicious.
Example 6
Ok. Let's check now what happens during v2 -> v1 downgrade.
Here the tool will see it as:
v2 -> v1
Y {renamed X} -> X
Here at the destination Schema v1, column `X` does not have any extra info about renamings, thus the tool will DROP Y and CREATE X. Right? Technically yes. Is it correct? NO!!!
We do not expect DATA LOSS at this step, because during the upgrade v1 -> v2 we renamed the column X -> Y, thus for the downgrade we expect it to be renamed back Y -> X.
This is why the notion of direction is important. For downgrade scripts the order of columns is switched, and we should analyze the source definition to understand what is going on in this migration.
So for example 6 the tool will understand that Y was renamed from X, thus the migration script will generate `ALTER ...` statements.
This is why I had proposed this change: https://github.com/frioux/DBIx-Class-DeploymentHandler/pull/81
And implemented these improvements and bugfixes https://github.com/dbsrgits/sql-translator/pull/188
I want you guys and girls to review it, double check it and provide your feedback on this.
Once done, I hope we can then merge this fix https://github.com/dbsrgits/sql-translator/pull/184 which will generate the correct downgrade migrations (without unexpected data loss :) ).
Published by plentyofcoffee on Wednesday 10 September 2025 16:54
I'm defining some tests for a library I'm using. One of my tests is calling the code in such a way that it produces warnings, which is expected. However, I'd ideally like to suppress these warnings during testing while allowing the library to continue emitting them during normal usage. It seems I may be misunderstanding the lexical nature of the warnings pragma.
I would expect the code below to disable the 'redundant' warning for all code called inside the block - instead, it produces a warning that I cannot seem to disable without editing the f function.
#!perl
use v5.36; # automatically enable strict and warnings
package WarningTest {
sub f {sprintf("%s", @_)}
};
{
no warnings qw(redundant);
say WarningTest::f(1, 2);
}
./tmp.pl
Redundant argument in sprintf at ./tmp.pl line 5.
1
Is there any way to actually disable warnings for the duration of a call to a remote library function?
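One runtime workaround (a sketch; it filters the warning rather than disabling it, since the warnings pragma is lexical to where the sprintf was compiled):

my $result = do {
    local $SIG{__WARN__} = sub {
        my ($msg) = @_;
        warn $msg unless $msg =~ /^Redundant argument/;
    };
    WarningTest::f(1, 2);
};
say $result;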
Published by alh on Tuesday 09 September 2025 07:41
Paul writes:
In August I focused on progressing my work on sub signatures. Between the main OP_MULTIPARAM work and the surrounding supporting changes, we're now much better placed to look at no-snails or signatures named parameters.
Total: 17 hours
Published by alh on Tuesday 09 September 2025 07:39
Tony writes:

```
2025/08/04 Monday
 0.13 github notifications
 2.37 #23483 see if this can work for netbsd, testing, testing on openbsd, freebsd and comment
 0.15 #23483 testing based on IRC

 4.02

2025/08/05 Tuesday
 1.33 #21877 research (do I need to rewrite this?)

 2.76

2025/08/06 Wednesday
 0.47 #23503 research
 0.52 #23542 review and comment
 0.43 #23539 review and approve
 0.15 #23537 review and approve
 0.32 #23542 review update and comment

 3.99

2025/08/07 Thursday
 0.28 #14630 testing and comment
 0.08 #18786 review discussion and comment
 0.38 #16808 research, testing and comment
 0.42 #10376 review discussion, research and comment
 0.68 #23543 review, comments
 0.08 #23542 review updates and approve
 0.08 #23422 apply to blead
 0.18 #23459 review and approve

 3.55

2025/08/11 Monday
 1.00 #23544 review and approve
 0.38 #23546 review change, review history and comment
 0.60 #23553 review, comments
 0.47 #23555 review and comment
 0.23 #23557 review and approve

 3.96

2025/08/12 Tuesday
 0.45 #23543 review discussion, research and comment
 0.13 #23555 review updates and comment
 0.90 #23375 review comments and long comment
 0.08 #23555 review updates and approve
 0.08 #23563 review and approve
 0.25 #16808 comment
 0.35 #10385 comment
 1.27 #15004 review discussion and patch, work on an

 3.51

2025/08/13 Wednesday
 0.40 github notifications
 0.63 check coverity reported error and push a fix for CI
 1.33 #23202 hopefully final review and comment (minor issue)
 0.10 #16715 briefly research and comment
 0.08 #15004 check CI, make PR 23567
 0.10 coverity: check CI and make PR 23568
 0.25 #15004 minor fix, testing
 0.08 #23568 apply to blead

 4.64

2025/08/14 Thursday
 0.67 #23543 review updates, research and approve
 0.78 #23202 review updates and comment
 0.72 #23565 review and approve
 0.87 #23561 research and comment

 3.66

2025/08/18 Monday
 0.08 #23567 review discussion, apply to blead
 0.48 #23561 longish comment
 1.22 #23570 review updates, struggle to understand some code, comment
 0.27 #23202 review updates and approve
 0.72 #23503 review, comment
 0.20 #23573 comment

 3.54

2025/08/19 Tuesday
 0.62 #23533 review discussion as requested by khw, discussion with khw
 0.72 check new coverity scan report

 2.89

2025/08/20 Wednesday
 0.95 #23570 review updates, research and approve
 0.58 #23561 research and comments
 0.97 #23574 reviewing...

 3.85

2025/08/21 Thursday
 1.58 #23574 more review and approve
 0.60 #13140 review discussion, testing and comment
 0.47 #8468 review discussion, research, testing and comment

 3.85

2025/08/25 Monday
 0.42 discuss handle_possible_posix with khw (while setting up to test Dennis Clark's list reported FreeBSD failure, and testing)

 1.29

2025/08/26 Tuesday
 1.03 #23647 review, testing, generated code checks, comments
 0.15 #23647 review update and approve
 0.62 #23645 review, review CI results, testing and comment
 0.43 #23644 review the involved tickets, some testing and

 2.23

2025/08/27 Wednesday
 0.18 #16865 follow-up
 0.88 #23640 review and approve
 0.40 #23638 review, suggest an alternative
 0.08 #23634 review and approve
 0.27 #23632 review, research and comment
 0.23 #23627 review and comment
 0.08 #23621 review and approve
 0.32 #23616 review, research and comments

 3.84

2025/08/28 Thursday
 0.60 #23654 review, research and approve
 0.60 #23645 review, research, testing and approve, comment
 1.15 #23641 review, testing, research and comment
 0.22 #23613 review and approve
 0.23 #23607 review and approve
 0.48 #23612 research and comment

 4.55

Which I calculate is 56.13 hours.

Approximately 52 tickets were reviewed or worked on, and 3 patches were applied.
```
Published by alh on Tuesday 09 September 2025 07:32
Dave writes:
I spent last month mainly continuing to work on rewriting and modernising perlxs.pod, Perl's reference manual for XS. The first draft is now about 90% complete. (Last month it was 80%; no doubt next month it will be 95%, then 97.5%, etc.) The bits that have been reworked so far have ended up having essentially none of the original text left, apart from section header titles (which are now in a different order). So it's turning into a complete rewrite from scratch.
It's still a work-in-progress, so nothing's been pushed yet.
During the course of writing about the XS INTERFACE keyword, I discovered a bug and fixed it; I also took the opportunity of fixing another INTERFACE bug which had been reported recently, where the C code generated was giving errors on recent picky C compilers.
Summary:
Total:
Published by Mohammad Sajid Anwar on Monday 08 September 2025 22:41
Caching using CHI.
Please check out the link for more information:
https://theweeklychallenge.org/blog/caching-using-chi
Published by Bob Lied on Monday 08 September 2025 16:28
As I'm writing this, I'm watching a crew install a fence and once again thanking my high school guidance counselor, Mr. Bencini, for yelling at me to get my ass in gear and quit procrastinating on those college applications if you don't want to spend your life digging post holes and living in a van down by the river.
This week's challenges must have come from Cybertruck drivers in Texas, because they are overcompensating for something with an obsession about being the biggest.
Our musical accompaniment: Take it to the Limit by the Eagles, 1975 (coincidentally about the time that Mr. Bencini was yelling at me).
You are given an m x n matrix. Write a script to find the highest row sum in the given matrix.
Example: Input: @matrix = ([4, 4, 4, 4],
[10, 0, 0, 0],
[2, 2, 2, 9])
Output: 16
Row 1: 4 + 4 + 4 + 4 => 16
Row 2: 10 + 0 + 0 + 0 => 10
Row 3: 2 + 2 + 2 + 9 => 15
If every row can be reduced to its sum, then it's a one-liner. We'll lean on list utilities.
use List::Util qw/sum0 max/;
sub highestRow(@matrix)
{
return max map { sum0 $_->@* } @matrix
}
Notes:
- map {...} @matrix -- Each element in @matrix is a reference to an array. Do something to every row.
- sum0 $_->@* -- De-reference a row ($_) to get its list of numbers. Use sum0 instead of sum to handle the possibility of an empty matrix.

# You are given two integer arrays, @arr1 and @arr2. Write a script to
# find the maximum difference between any pair of values from both arrays.
# Example 1 Input: @arr1 = (4, 5, 7)
# @arr2 = (9, 1, 3, 4)
# Output: 6
# With element $arr1[0] = 4 | 4 - 9 | = 5
# | 4 - 1 | = 3
# | 4 - 3 | = 1
# | 4 - 4 | = 0
# max distance = 5
# With element $arr1[1] = 5 | 5 - 9 | = 4
# | 5 - 1 | = 4
# | 5 - 3 | = 2
# | 5 - 4 | = 1
# max distance = 4
# With element $arr1[2] = 7 | 7 - 9 | = 2
# | 7 - 1 | = 6
# | 7 - 3 | = 4
# | 7 - 4 | = 4
# max distance = 6
# max (5, 4, 6) = 6
This example is trying to scare us by implying that we have to do some atrocious examination of every possible pair, an O(n²) algorithm. But let's cut through the fog: if we reduce each array to a range from its minimum to its maximum, it can be visualized like sliding @arr2 past @arr1, with five different possibilities for how they overlap.
min1 max1
+-------------------------+
+--------+ +--------+ +--------+ +--------+ +--------+
min2 max2 min2 max2 min2 max2 min2 max2 min2 max2
The maximum distance will be from the extreme of one line to the extreme of the other. At the left side of the figure, that's the distance from min2 to max1; at the right side, it's from min1 to max2.
sub maxDist($arr1, $arr2)
{
use List::MoreUtils qw/minmax/;
use List::Util qw/max/;
my ($min1, $max1) = minmax($arr1->@*);
my ($min2, $max2) = minmax($arr2->@*);
return max( $max1-$min2, $max2-$min1 );
}
Notes:
- List::MoreUtils::minmax -- this convenient function finds the minimum and maximum in an array with only one pass, exploiting Perl's ability to return more than one thing at a time.

Well, that was almost too easy. Let's apply Parkinson's Law ("Work expands to fill the time available"). We should do some error checking for the case that one of the arrays is empty. Because I can't think of a sensible result for that case, I'll throw an exception.
die "ERROR empty array" if ( not ( @$arr1 and @$arr2) );
- @$arr1 and @$arr2 -- the value of an array in a scalar context is its size; compact and useful, and easy to trip up beginners.
- not and and for readability, rather than ! and &&. It's a simple conditional expression, so there's no problem with the low precedence of not and and.
syntax, but there have been variations on a try/catch feature for decades. Perl finally acquired a native try/catch syntax as an experimental feature in release 5.34, and it became a stable language feature in 5.40, so let's embrace the future.
For command-line usage, I'm going to pass the script a pair of comma-separated lists.
perl ch-2.pl 1,2,3,4 3,6,9
The Perl code to handle the arguments and call maxDist will look like:
my @ARR1 = split(",", shift // "");
my @ARR2 = split(",", shift // "");
try { say maxDist(\@ARR1, \@ARR2); }
catch ( $e ) { say STDERR "Problem: $e" }
Notes:
- shift -- the implicit argument for shift is @ARGV, so this takes the next available command-line argument.
- shift // "" -- If there isn't an argument, avoid a warning message by defaulting to an empty string.
- say STDERR "..."
-- Not trying very hard to inform or recover, but at least use conventional streams.The final piece is to add a unit test for catching an exception. Using the Test2::V0
testing module, check for throwing an appropriate exception like this:
like( dies { maxDist([], []) }, qr/empty/i, "Empty arrays");
- like(...) -- test that the result of some code matches a regular expression
- dies {...} -- evaluate some code that is expected to throw an exception (note braces instead of parentheses).
Originally published at Perl Weekly 737
Hi there!
There is a new episode of Underbar, the Perlish podcast, part 3 of the Vibe-coding with Perl series came out, and there is an article whether one should learn Perl in 2025.
Regarding that. My son became a programmer a while ago, mostly writing in Python and some front-end stuff when necessary. He also knows how to use vim and he is definitely not lost on the Linux command-line. I don't think there is a lot of value for him to learn Perl in general, but being able to write one-liners to help with various small tasks could be really useful. So I started to put together a bunch of one-liners in Perl and converted it into a book. It is still only in its infancy, but to go with the tradition I decided to release early.
Thus you can already read it for free or if you'd like to also support my efforts then you can buy an epub/pdf version of it via Leanpub. You can even pick the price.
Enjoy the book and enjoy your week!
--
Your editor: Gabor Szabo.
BooK wrote: I've just published the latest episode of The Underbar. This time we're having a long conversation with Salve Nilsen about the Cyber Resilience Act and its consequences for Perl and CPAN.
Thomas wrote: This week Farhad and I finally found some time to improve a part of our build pipeline that was nagging me for years. We can now release our DarkPAN modules via CI/CD into a GitLab generic packages repository and install them from there into our app containers, also via CI/CD pipelines.
Re-creating the vulnerability CVE-2025-40927 in an isolated docker container.
VelociPerl is a closed source fork of Perl that claims performance gains of 45% over the stock interpreter.
The Weekly Challenge by Mohammad Sajid Anwar will help you step out of your comfort-zone. You can even win prize money of $50 by participating in the weekly challenge. We pick one champion at the end of the month from among all of the contributors during the month, thanks to the sponsor Lance Wicks.
Welcome to a new week with a couple of fun tasks "Highest Row" and "Max Distance". If you are new to the weekly challenge then why not join us and have fun every week. For more information, please read the FAQ.
Enjoy a quick recap of last week's contributions by Team PWC dealing with the "Smaller Than Current" and "Odd Matrix" tasks in Perl and Raku. You will find plenty of solutions to keep you busy.
This is a solid, practical and highly efficient blog post that showcases a competitive programming mindset. The approach is characterized by a focus on performance, concise code and leveraging the powerful built-in functions of Perl.
This is a high-quality, technically sound blog post that perfectly exemplifies the spirit of Raku programming. It successfully demonstrates how to tackle a classic algorithmic problem (Eulerian Circuits) by leveraging Raku's unique and powerful features, such as its sophisticated grammar (regex) engine and functional programming constructs.
This post is a well-written, technically sound and engaging exploration of two weekly code challenges in Perl. Overall, it's a solid contribution that balances clarity, correctness and style.
Both tasks move beyond naive solutions to offer significantly more scalable alternatives. The use of sorting, indexing, and run-length encoding reflects expert-level proficiency in PDL. Despite the technical depth, the code remains compact and well-organized.
Solutions are elegant, efficient (thanks to PDL), and provide precise results. They shine when used in a context where PDL is acceptable.
Both tasks avoid brute-force solutions in favor of counting, sorting, and parity logic. Code is concise, modern, and idiomatic Perl. Commentary is pedagogical, explains not only the "how" but also the "why".
This is an exceptionally well-written and insightful post. It successfully transcends a simple "how I solved these coding puzzles" write-up and instead delivers a compelling narrative about the enduring relevance of Perl, the value of community-driven challenges and the universal benefits of sharpening one's problem-solving skills with constrained tools.
Solutions are clear, idiomatic Perl, well explained and great for educational/demo purposes. They emphasize readability and correctness over raw efficiency, which is often the right trade-off in The Weekly Challenge.
The solutions are excellent. They are correct, efficient, readable and well-structured. The post has a clear, pragmatic coding style that focuses on simplicity and directly solving the problem at hand. The code is thoroughly documented and follows good practices. This is production-quality code for this type of algorithmic problem.
The post is a masterclass in technical writing and scientific computing. It successfully transforms a seemingly simple programming challenge into a deep, insightful exploration of numerical methods and performance optimization.
This is a well-written, engaging and technically sound solution to a coding challenge. It stands out by focusing on clarity, educational value and algorithmic elegance rather than just brute-forcing an answer.
Great CPAN modules released last week.
September 9, 2025
September 10, 2025
September 25, 2025
September 27, 2025
December 6, 2025
You joined the Perl Weekly to get weekly e-mails about the Perl programming language and related topics.
Want to see more? See the archives of all the issues.
Not yet subscribed to the newsletter? Join us free of charge!
(C) Copyright Gabor Szabo
The articles are copyright the respective authors.
Published by chrisarg on Monday 08 September 2025 07:18
After a very long hiatus due to the triplet of work-vacation-work, we return to Part 3 of my AI assisted coding of a Perl interface to a foreign library.
I will not repeat the post on my GitHub pages, or the documentation of my MetaCPAN package Bit::Set which features a "vibecoding" section.
However, I would like to share the take home points from my exercise:
Are the chatbots worth the investment? I'd say that as an amateur programmer the chatbot was helpful to get me out of writer's block, but did not really save much time for me.
I cannot imagine that a professional who knows what they are doing would be assisted much by the AI chatbots, which is what the METR study said:
overall, experienced developers experienced a 19% drop in productivity with AI assistance.
Published on Sunday 07 September 2025 15:00
This week Farhad and I finally found some time to improve a part of our build pipeline that was nagging me for years. We can now release our DarkPAN modules via CI/CD into a GitLab generic packages repository and install them from there into our app containers, also via CI/CD pipelines.
But before we start with the details, a little background:
Perl modules are published to CPAN. But not all Perl code goes to CPAN, and this "dark Perl matter" is called "DarkPAN". You especially don't want to publish the internal code that's running your company, or the set of various helper code that you collect over the years that's not really tied to one app, and also (for whatever reasons) not suitable to be properly release to CPAN. If you still want to use all the best practices and tools established in the last 30+ years, but in private, you can set up an internal CPAN-like repository, for example using tools like Pinto. Then you can use standard tools to install your internal dependencies from your internal CPAN-like repo (which I also call DarkPAN).
So this is what we did in the last ~10 years:
cpanfile
resolver
to install the dependency from our DarkPANcpm install -g --resolver metadb --resolver 02packages,https://pinto.internal.example.com/stacks/darkpan
This worked quite well.
It worked so well that we also used this to release and install what I now call "shared libs" inside monorepos: We have a few monorepos for different projects, where each monorepo contains multiple apps (different API backends, frontends and anything in between). Inside a monorepo we have code that we want to share between all apps, for example the database abstraction (DBIx::Class). Deploying these shared libs via the above method was working, but not very smoothly, especially when there's a lot of development happening: You had to push a new version of the lib, wait for it to be released, and then bump the version in all the cpanfiles using this library.
So a few weeks ago I changed our handling of these shared libs. Instead of properly installing them via cpm, I can copy the source code directly into the app container and set PERL5LIB accordingly. This is possible because all of the code is in the same monorepo and thus available in the CI/CD pipeline. (material for another blog post..)
This hack is not an (easy) option for code that has to be shared between projects. But I wanted to get rid of maintaining a server to host Pinto, especially as we already have GitLab running, which supports a large range of language specific repositories. Unfortunately, Perl/CPAN is not implemented. But they have a "generic" repository, so I tried to use it to solve our problem.
The first step is to publish the freshly built Perl distribution into the GitLab repo. This is easy to do via the GitLab API and curl. The API endpoint is not very nice IMO: api/v4/projects/{project-id}/packages/generic/{name}/{version}/{file}, and I find it a bit weird that you set up a name and version and then add files to it (instead of just uploading a tarball), but whatever.
In our Makefile we added:
package_registry := "https://gitlab.example.com/api/v4/projects/foo%2Fbar/packages/generic/$(application_name)"
get_version = $(shell ls $(application_name)-*.tar.gz | sed "s/$(application_name)-//; s/\.tar\.gz//")
get_filename = $(shell ls $(application_name)-*.tar.gz)
private_token ?= ${CI_JOB_TOKEN}

.PHONY: build
build:
	dzil authordeps --missing | xargs -r cpm install --global
	cpm install --global
	dzil build

.PHONY: release
release:
	$(eval VERSION := $(get_version))
	$(eval FILENAME := $(get_filename))
	curl -L -H "JOB-TOKEN: $(private_token)" \
		--upload-file "$(FILENAME)" \
		"$(package_registry)/$(VERSION)/$(FILENAME)"
We set the package_registry base URL (note that I use the URI-escaped string foo%2Fbar to address the project bar in group foo, because I find using the project ID even uglier than escaping the / as %2F). Then we use some shell / sed to "parse" the filename and get the version number (which is set automatically via Dist::Zilla) in build.
In release, we construct the final URL, authorize using the CI_JOB_TOKEN and call curl to upload the file.
Why a Makefile instead of defining the steps in .gitlab-ci.yml?
- I can run the Makefile locally, so I can run whatever steps are triggered by the pipeline without having to push a change to GitLab (no more "force pipeline" commits!)
- I prefer a Makefile to "programming" in YAML
- the CI job definition then boils down to: script: - make build release
Actually installing the Perl distribution from GitLab seemed easy, but GitLab threw a few stumbling blocks in our way.
My plan was to just take the URL of the distribution in the GitLab Generic Registry and use that in cpanfile via the nice url addon, which allows you to install a CPAN distribution directly from the given URL:
requires 'Internal::Package' => '1.42', url => 'https://gitlab.example.com/foo/bar/packages/generic/Internal-Package-1.42.tar.gz';
But for weird reasons, GitLab does not provide a simple URL like that, especially not for unauthorized users. Instead you have to call the API, provide a token and only then you can download the tarball. And there is no way to add a custom header to pass the token in cpanfile.
After some more reading of the docs, we found that instead of using a header, we can also stuff a deploy token into basic auth using the https://user:password@url format, and thus can specify the install URL like this:
requires 'Internal::Package' => '1.42', url => 'https://deployer:gldt-234lkndfg@gitlab.example.com/foo/bar/packages/generic/Internal-Package-1.42.tar.gz';
And this works!!
Well, the URL (using the actual API call) in fact looks like this:
requires 'Internal::Package' => '1.42', url =>
'https://deployer:gldt-234lkndfg@gitlab.example.com/api/v4/projects/foo%2Fbar/packages/generic/Internal-Package/1.42/Internal-Package-1.42.tar.gz/';
This is not very nice: that long API URL, deploy token included, would have to be repeated verbatim for every module listed in the cpanfile.
So we continued to improve this in a maybe crazy but perlish way:
One of the nice things of cpanfile (and Perl in general) is that instead of inventing some stupid DSL to specify your requirements, we just use code:
requires 'Foo::Bar';
is actually calling a function somewhere that does stuff.
So we can run code in cpanfile:
my @DB = qw(Pg mysql DB2 CSV);
my $yolo_db = 'DB::' . $DB[rand @DB];
requires $yolo_db;
The above code is of course crazy, but we can use this power for good and write a nice little wrapper to make depending on our DarkPAN easier.
I wrote a small tool, Validad::InstallFromGitlab, which is configured with the GitLab base URL, the project name and the token. It exports a function from_gitlab, which takes the name and the version of the distribution and returns the long line that requires needs.
And because cpanfile is just Perl, we can easily use this module there:
use Validad::InstallFromGitlab (gitlab => 'https://gitlab.example.com', project => 'validad%2fcontainer', auth => $ENV{DARKPAN_ACCESS});
requires from_gitlab( 'Validad::Mailer' => '1.20250904.144530');
requires from_gitlab( 'Accounts::Client' => '1.20250904.144543');
I decided to use a rather old but very powerful method to make Validad::InstallFromGitlab easy to use: a custom import() function:
package Validad::InstallFromGitlab;
use v5.40;
use Carp qw(croak);
sub import {
my ($class, %args) = @_;
if (!$args{gitlab} || !$args{project}) {
croak("gitlab and/or project missing");
}
my $registry_url = sprintf('%s/api/v4/projects/%s/packages/generic', $args{gitlab}, $args{project});
if (my $auth = $args{auth}) {
$registry_url =~s{://}{'://'.$auth.'@'}e;
}
my $caller=caller();
no strict 'refs';
*{"$caller\::from_gitlab"} = sub {
my ($module, $version) = @_;
my $package = $module;
$package =~s/::/-/g;
my $tarball = $package .'-' . $version . '.tar.gz' ;
my $url = join('/',$registry_url, $package, $version, $tarball);
return ($module, url => $url);
};
}
1;
import() is called when you use the module in the calling code or cpanfile :-)
use Validad::InstallFromGitlab (
gitlab => 'https://gitlab.example.com',
project => 'validad%2fcontainer',
auth => $ENV{DARKPAN_ACCESS}
);
The parameters passed to use are passed on to import, where I do some light checking and build the long and cumbersome GitLab URL.
Then I use caller() to get the name of the calling namespace and use a typeglob (been some time since I used that..) to install a function named from_gitlab into the caller.
This function takes two params, the module name and version, and finally constructs the list that requires needs, i.e. the module name and the whole GitLab URL.
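For illustration, with the configuration above a call expands roughly to (token elided):

# from_gitlab('Validad::Mailer' => '1.20250904.144530') returns
# ( 'Validad::Mailer',
#   url => 'https://deployer:TOKEN@gitlab.example.com/api/v4/projects/validad%2fcontainer/packages/generic/Validad-Mailer/1.20250904.144530/Validad-Mailer-1.20250904.144530.tar.gz' )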
So I can now specify my requirements very easily in the apps' cpanfiles and still use the GitLab generic package registry to distribute my DarkPAN modules!
But how do I install Validad::InstallFromGitlab?
I don't. But all of our apps use a shared base container (which also helps to keep container size down). And in the Containerfile of the base container, I copy Validad::InstallFromGitlab to a well-known location /opt/perl/darkpan and load it from there via PERL5LIB:
RUN mkdir -p /opt/perl/darkpan/Validad/
COPY Validad-InstallFromGitlab.pm /opt/perl/darkpan/Validad/InstallFromGitlab.pm
ONBUILD ARG DARKPAN_ACCESS
ONBUILD RUN PERL5LIB=/opt/perl/darkpan/ \
/opt/perl/bin/cpm install --cpanfile cpanfile \
--show-build-log-on-failure -g
But again that's material for another blog post...
Published by prz on Saturday 06 September 2025 15:45
Published by lbvf50mobile on Friday 05 September 2025 06:38
In 2025, I unexpectedly find myself enjoying Perl again, after years of Ruby and Go.
It sounds strange, but Perl hasn't aged the way many people think. It's not trendy, elegant, or fashionable, and yet, for certain kinds of work, it feels perfect.
Should you learn Perl in 2025?
Honestly: no. At least, not directly.
Start with UNIX system programming: processes, streams, signals, and file descriptors.
Once you understand these things, Perl becomes automatic.
You don't "study" Perl; you realize you already understand it.
Here's the key insight:
If you look at Perl from a syntax perspective, it looks messy.
But if you look at it through the lens of UNIX processes and streams, it suddenly becomes crystal clear and intuitive.
Perl isn't designed like Python or Go, where you build large structures full of imports, frameworks, and abstractions.
A Perl script is simply a process: it reads streams, writes streams, and reacts to signals.
When you see programs this way, Perl's "cryptic" operators and shortcuts stop looking weird; they become beautifully compressed UNIX primitives.
Modern languages (Python, Ruby, Go) often push you toward designing systems.
Perl isn't like that.
Perl is a sharp tool for solving tasks quickly: parsing text, transforming streams, gluing programs together.
You don't worry about architecture, imports, or frameworks.
You just write the code, run the process, and move on.
Perl feels less like a language and more like an extension of your UNIX shell.
Perl didn't die; it simply stepped aside. The web changed.
In the early days, the web was simple: HTML pages, images, and tables.
Perl thrived because it could glue things together effortlessly.
But today's web apps are massive, layered systems with complex UIs, APIs, and distributed backends.
Perl was never designed for this, so it faded from the spotlight.
Perl still matters because it's tied to UNIX itself.
For sysadmins, DevOps engineers, and anyone who works close to the system, Perl remains reliable, concise, and insanely useful.
Perl doesn't chase trends.
It doesn't ship breaking changes every six months.
It's like a ballpoint pen and a squared notebook: simple, stable, always ready when you need it.
Perl developers don't see programs as abstract algorithms or piles of imports.
They see them as processes: living entities with streams, signals, and descriptors, talking to other processes in a UNIX world.
When you shift to this perspective, Perl syntax suddenly becomes obvious.
It's not cryptic anymore; it's just UNIX in shorthand.
Perl isn't a language you learn.
It's a language you grow into.
And once you do, it feels like home.
Published on Tuesday 02 September 2025 00:00
After a very long hiatus due to the triplet of work-vacation-work, we return to Part 3 of my AI-assisted coding of a Perl interface to a foreign library. In the last couple of months, many things have happened on the AI front, including the release of additional models, and some much needed injection of reality into the hype, so my final part will have a different tone than the previous ones and try to focus less on details. For those who don't want to read the GitHub pages, feel free to browse the MetaCPAN package Bit::Set which features the "vibecoding" section.
In these explorations, agentic LLMs were found particularly problematic, often stalling while generating a solution, focusing on the wrong thing when tests were failing, and often giving up. I therefore ended up not using them, and relied on the "Ask" mode of GitHub Copilot.
To build this module, I first created the distribution structure with (what else?) Dist::Zilla, and then opened the folder using VS Code. Subsequently, I provided as context the âbit.hâ header file from the Bit library and the associated README markdown file. The prompt used was the following:
Forget everything I have said and your responses so far. Look at the description
of the project in README.md and the Abstract Data Type interface in bit.h.
Put yourself in the place of a senior Perl engineer with extensive understanding
of the C language. Your goal is to create a procedural Perl API to all the
functions in C using the Foreign Function Interface. Assume that we have already
implemented an Alien module (Alien::Bit) to install the foreign (C) dependency
bit, so make sure you use it! Look at the checked runtime exceptions in the
documentation of the C interface. Your goal is to incorporate them in the Perl
interface too, as long as the user has set the DEBUG environmental variable.
If the DEBUG variable has not been set, these runtime checks should be stripped
during the compile phase of the PERL program. To do so, please ensure that the
relevant check involves ONLY DEBUG, otherwise the code may not be stripped.
Things to adhere to during the implementation:
The functions for the Bit_T, should end up in the module Bit::Set, and those for
Bit_DB to Bit::Set::DB .
1. Ensure that you implement the Perl interface to all the functions in the C
interface, i.e. don't implement some functions and then tell me the others are
implemented similarly! Reflect that you have implemented all the functions by
comparing the functions that are exported by the Perl module against the
functions declared in the bit.h interface (excluding of course the functions
defined as macros).
2. The names of the methods in the Perl interface should match those of the C
interface exactly, without exceptions. However, you should not implement the
map function(s).
3. When implementing the wrapper, combine a table driven approach with the
FFI's attach to maximize conciseness and reduce repetition of the code.
For example, you may want to use a hash with keys the function names that
the module will export. CAUTION: As a senior engineer you are probably aware
of the DRY principle (Don't Repeat Yourself). When you generate code please
balance DRY with the performance penalty of function evaluations (e.g. for checks).
4. When implementing a function, do provide the POD documentation for it.
However, generate the POD after you have implemented the functions.
5. After you have implemented the modules, generate a simple test that will
generate a Bit::Set of capacity of 1024 bits, set the first, second and 5th one
and see if the popcount is equal to 3.
Claude did get most things right:
- It generated Bit::Set, Bit::Set::DB and the single test file.
- It wrapped the C functions with a table-driven use of FFI::Platypus attach.
- FFI::Platypus::Record was correctly selected into the implementation for the C structure that passes options for the CPU/GPU enhanced container functions.
was correctly selected into the implementation for the C structure that passes options for the CPU/GPU enhanced container functions.However, the code itself would not work, requiring a few minor tweaks that are summarized below:
The relevant section is shown below and exhibits numerous problems.
for my $name ( sort keys %functions ) {
my $spec = $functions{$name};
my @attach_args = ( $name, $spec->{args}, $spec->{ret} );
$ffi->attach(@attach_args);
if ( DEBUG && exists $spec->{check} ) {
my $checker = $spec->{check};
push @attach_args, wrapper => sub {
my $orig = shift;
$checker->(@_);
return $orig->(@_);
};
}
}
When the DEBUG variable is not set, it is unclear whether the check for DEBUG
will strip the code that adds the runtime exception wrapper at compile time.
The pattern discussed in the Perl documentation states that a simple test
of the form if (DEBUG) { ... }
will strip everything within the block, but
will a test of the form if ( DEBUG && exists $spec->{check} ) { ... }
do the
same?
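One way to settle that question, rather than guessing, is to deparse the compiled program with the core B::Deparse module and inspect whether the guarded block survived, e.g.:

perl -MO=Deparse -e 'use constant DEBUG => 0; if (DEBUG && f()) { g() }'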
Secondly, the attachment of the wrapper function to the FFI call is also a
concern: it takes place early in the process, before the DEBUG check is made.
Thirdly, the snippet push @attach_args, wrapper => sub { ... } is itself wrong, as it pushes two arguments (the string 'wrapper' and the code reference) into the argument list for attach.
If one looks into the documentation for FFI::Platypus::attach,
$ffi->attach($name => \@argument_types => $return_type);
$ffi->attach([$c_name => $perl_name] => \@argument_types => $return_type);
$ffi->attach([$address => $perl_name] => \@argument_types => $return_type);
$ffi->attach($name => \@argument_types => $return_type, \&wrapper);
$ffi->attach([$c_name => $perl_name] => \@argument_types => $return_type, \&wrapper);
$ffi->attach([$address => $perl_name] => \@argument_types => $return_type, \&wrapper);
it becomes clear that the maintainer is using the fat comma instead of the regular comma simply to separate consecutive positional arguments to attach; there is no wrapper => named parameter. All these problems are reasonably easy to fix, by breaking the test involving DEBUG into two nested ifs, moving the attach invocation to the end of the loop, and pushing the code reference without the wrapper => part into the arguments of the attach function.
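A minimal sketch of the corrected loop with all three fixes applied, assuming the same %functions table, a compile-time DEBUG constant, and an FFI::Platypus object in $ffi:

for my $name ( sort keys %functions ) {
    my $spec        = $functions{$name};
    my @attach_args = ( $name, $spec->{args}, $spec->{ret} );
    if (DEBUG) {    # bare DEBUG test, so the whole block can be stripped
        if ( exists $spec->{check} ) {
            my $checker = $spec->{check};
            # push the code reference itself: attach() takes the wrapper
            # as a positional argument, not as a wrapper => ... pair
            push @attach_args, sub {
                my $orig = shift;
                $checker->(@_);
                return $orig->(@_);
            };
        }
    }
    # attach only once the argument list is complete
    $ffi->attach(@attach_args);
}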
The container module (Bit::Set::DB) uses a C structure to pass options to the CPU/hardware accelerator device. This C structure is passed by value and thus should be declared as a FFI::Platypus::Record, created either as a separate module file or nested in the Bit::Set::DB module. The code that was actually generated by Claude looked like this:
{
package Bit::Set::DB::SETOP_COUNT_OPTS;
use FFI::Platypus::Record;
record_layout_1(
'num_cpu_threads' => 'int',
'device_id' => 'int',
'upd_1st_operand' => 'bool',
'upd_2nd_operand' => 'bool',
'release_1st_operand' => 'bool',
'release_2nd_operand' => 'bool',
'release_counts' => 'bool',
);
}
In the documentation for FFI::Platypus::Record, one can clearly see that the function record_layout_1 receives its arguments as record_layout_1($type => $name, ...), i.e. the fat comma is used to separate consecutive arguments to the function, and not as part of the definition of a hash. However, Claude must “think” that it is dealing with a hash, as it reverses the order of the arguments to make the “keys” unique. The fix is rather simple: one merely reverses the order of the arguments in each pair.
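With the pairs reversed, the corrected declaration would look like this:

{
    package Bit::Set::DB::SETOP_COUNT_OPTS;
    use FFI::Platypus::Record;
    # record_layout_1( $type => $name, ... ): the type comes first
    record_layout_1(
        'int'  => 'num_cpu_threads',
        'int'  => 'device_id',
        'bool' => 'upd_1st_operand',
        'bool' => 'upd_2nd_operand',
        'bool' => 'release_1st_operand',
        'bool' => 'release_2nd_operand',
        'bool' => 'release_counts',
    );
}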
Interestingly enough, the chatbot failed to properly register the type of the record with FFI. In the original output, it included the line:
$ffi->type( 'Bit::Set::DB::SETOP_COUNT_OPTS' => 'SETOP_COUNT_OPTS_t' )
rather than the correct
$ffi->type( 'record(Bit::Set::DB::SETOP_COUNT_OPTS)' => 'SETOP_COUNT_OPTS_t' )
A subtle LLM mistake concerns the handling of returned pointers from FFI calls. A function in C that is declared as int* foo(...); may use the returned pointer to provide a single value, or an array of values. Consider the proposal for BitDB_count in the table-driven interface:
BitDB_count => {
args => ['Bit_DB_T'],
ret => 'int*',
check => sub {
my ($set) = @_;
die "BitDB_count: set cannot be NULL" if !defined $set;
}
}
When FFI::Platypus encounters this return type, it will interpret the result as a reference to a Perl scalar, as stated explicitly in the documentation. The correct way to handle an array return is to declare the function as returning an opaque pointer. In particular, one would rewrite the last snippet as:
BitDB_count => {
args => ['Bit_DB_T'],
ret => 'opaque',
check => sub {
my ($set) = @_;
die "BitDB_count: set cannot be NULL" if !defined $set;
}
}
By doing so, one ends up receiving the memory address of the buffer as a Perl
scalar value, rather than a reference to a Perl scalar, which must be dereferenced
to yield the first (and only the first!) element of the array that is accessible
through the pointer. Please refer to the documentation of FFI::Platypus for
more information on working with opaque pointers.
The documentation of Bit::Set::DB contains examples and usage patterns for
working with arrays returned from the Perl interface to C.
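For illustration, here is one plausible pattern, a minimal sketch assuming the opaque pointer returned by BitDB_count refers to an array of BitDB_nelem($db) native ints:

use FFI::Platypus::Buffer qw( buffer_to_scalar );

my $ptr   = BitDB_count($db);
my $nelem = BitDB_nelem($db);    # number of ints behind the pointer
# copy the C buffer into a Perl string, then unpack the native ints
my $raw    = buffer_to_scalar( $ptr, $nelem * $ffi->sizeof('int') );
my @counts = unpack 'i*', $raw;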
Having fixed these errors, I proceeded to generate a Perl version of the C test suite, by providing as context the (fixed) modules Bit::Set and Bit::Set::DB, as well as the C source code for “test_bit.c”. The actual prompt was this one-liner:
Convert this test file written in C to Perl, using the Bit::Set and Bit::Set::DB modules.
The major problem with this conversion was the chatbot's failure to generate correct code when testing the functions that load or extract information from bitsets or containers into raw buffers. For example, the following code is supposed to test the extraction of bits from a bitset:
my $bitset = Bit_new(SIZE_OF_TEST_BIT);
Bit_bset( $bitset, 2 );
Bit_bset( $bitset, 0 );
my $buffer_size = Bit_buffer_size(SIZE_OF_TEST_BIT);
my $buffer = "\0" x $buffer_size;
my $bytes = Bit_extract( $bitset, $buffer );
my $first_byte = unpack('C', substr($buffer, 0, 1));
is( $first_byte, 0b00000101, 'Bit_extract produces correct buffer' );
Bit_free( \$bitset );
However, the code is utterly wrong (and segfaults!), as one has to provide
the memory address of the buffer, not the Perl scalar value. The fix is to
generate the buffer as a Perl string and then use FFI::Platypus::Buffer
to extract the memory address of the storage buffer used by the Perl scalar:
use FFI::Platypus::Buffer qw( scalar_to_buffer );  # provides scalar_to_buffer

my $scalar = "\0" x $buffer_size;
my ( $buffer, $size ) = scalar_to_buffer $scalar;
my $bytes      = Bit_extract( $bitset, $buffer );
my $first_byte = unpack( 'C', substr( $scalar, 0, 1 ) );
I only had to edit about 6 lines out of ~ 400 to port the C test suite to Perl.
I had no luck with more complex “assignments”, such as generating the benchmark of the Bit::Set::DB GPU and multi-threaded CPU accelerated interface.
Claude realized that this task, which necessitates an understanding of memory ownership and management across the interfaces of two languages and between DRAM and GPU RAM, is way out of its league, and declined by putting out the note in the snippet below:
sub database_match_container_cpu {
my ($db1, $db2, $num_threads) = @_;
...
my $results = BitDB_inter_count_cpu($db1, $db2, $opts);
my $nelem = BitDB_nelem($db1) * BitDB_nelem($db2);
my $max = 0;
# Note: In a real implementation, we'd need to properly handle the results array
# This is a simplified version since we can't directly access C array elements in Perl
# without additional FFI buffer handling
return $max;
}
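For completeness, a hedged sketch of the results handling that Claude elided, reusing the buffer technique shown earlier and assuming $results is an opaque pointer to $nelem native ints:

# inside database_match_container_cpu, after the calls above
my $raw    = buffer_to_scalar( $results, $nelem * $ffi->sizeof('int') );
my @counts = unpack 'i*', $raw;
for my $c (@counts) {
    $max = $c if $c > $max;
}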
So what are MY take-home points after this exercise? I’d say the messages are mixed.
I found the agentic bots not very helpful, as they entered long, repetitive, and useless reflections without being able to fix the problems they identified when the build of Bit::Set failed.
The “Ask” mode chatbots could generate lots of code, but with subtle mistakes.
Success with porting test suites from one language to the other was highly variable, ranging from near perfect to outright refusal to execute a difficult task.
On the other hand, the chatbot was excellent as an auto-complete, often helping me finish the structure of the POD and putting together the scaffold to fill things in.
Are the chatbots worth the investment? I’d say that as an amateur programmer the chatbot was helpful in getting me out of writer’s block, but it did not really save much time for me.
I cannot imagine that a professional who knows what they are doing would be assisted much by the AI chatbots, which is what the METR study found:
overall, experienced developers saw a 19% drop in productivity with AI assistance.
Published by prz on Saturday 30 August 2025 19:37
This is the weekly favourites list of CPAN distributions. Votes count: 56
Week's winners (+2): Time::Piece & Pod::Weaver::Section::SourceGitHub
Build date: 2025/08/30 17:36:06 GMT
I consider myself successful.
I’m 45, with a sports car, a house, a family, and a small business now 30 years old.
I made good decisions.
My car is 15 years old, my monitors are 20 years old, my chair is 25 years old, my desk is 25 years old.
My computers historically last 10 years; I just tossed a mouse that survived 8 years.
Perl.
My first venture into server-side web development was Lotus Notes…that lasted 5 days.
Perl has lasted me nearly 30 years, and we sure as hell ain’t done yet!
I didn’t meet the perl community until the Toronto conference in 2023.
That’s where I saw the faces.
That’s when I saw the humanity.
That’s why I felt the guilt.
I’d paid for my car, for my house, for my computers, my desk, and my chair.
Perl came for free.
I didn’t pay for it; I didn’t work for it.
But at the conference, before me stood the people who did.
I’ve worked hard for my success.
It’s clear to me now that I wasn’t the only one working hard for my success.
How much of your success is because of all of their hard work?
And how much have you contributed in return?
Me? I contributed absolutely nothing.
That’s my guilt.
But this is the very best kind of guilt.
Because it’s not too late!
In fact, now is the perfect time.
I’ve hired members of the perl community.
That’s a start.
I’m donating directly to the foundation.
And I intend to continue doing so.
If my success depends on perl, then perl depends on my success.
Perl was always a perfect fit for me.
As a syntax, it was concise, yet flexible.
My code’s form could mirror its function.
How perfectly splendid.
Perl is the basis of Holophrastic’s web development platform: Pipelines.
As new popular languages have come along, they’re touted as the best new amazing modern.
And then they vanish, supplanted by the very next amazing.
Had I invested in each, I’d have more archives of code than formats of music albums.
Instead, I rest easy in the knowledge that perl will always keep up.
xls, xlsx, pdf, mysql, mariadb, imagemagick, json, curl
There will always be a cpan module waiting for me when I need it.
My clients have no idea the power that I wield with my fingertips.
AI translations, image processing craziness, survey systems built for 100'000 concurrent test-takers…
And while all of that definitely took some expertise on my end,
Perl created exactly zero hurdles,
It never got in the way at all.
Longevity.
Perl’s got it.
The Perl Foundation’s got it.
The Perl Community’s got it.
I’ve got it.
The lines between? All blurry.
Custom Business Web Development
I work as what I’d call an inside-contractor.
I’m a speed-dial (is that still a thing?) phone call away.
Occasionally, a client will call me a dozen times in a day.
I’m closer than their colleague in the next office.
It usually starts with “a website”
the typical sales- or marketing-oriented something-pretty
Then business takes over.
A product-configuration wizard
A sales-commission calculator
Can you connect to our accounting software and provide the reports that it can’t?
Can you replace our warehouse-production backend?
What about a portal?
Yes; yes I can.
One business obstacle at a time.
Now, 30 years later, I often encounter code & comments, decades old.
The feeling I get from seeing a line that’s now older than the me who wrote it…
I used to feel alone.
I now feel that thanks to the perl community, I was never alone.
Holophrastic is proud to be the first sponsor of the 2026 Perl and Raku Conference.
Published by prz on Sunday 24 August 2025 00:09
Published by Clean Compiler on Thursday 21 August 2025 13:37
Every few years, developers like to declare programming languages “dead.” Cobol is dead. Perl is dead. PHP is dead. Java is dead.
Published by Harishsingh on Monday 18 August 2025 02:12
Last week, I watched a senior developer spend fifteen minutes trying to decode a single line of Perl during our team’s code archaeology…