Published by Prakash Khandelwal on Friday 16 May 2025 11:20
XML file snippet:
<?xml version="1.0" encoding="UTF-8"?>
<root>
<copyright-statement>Copyright ©The authors 2024.</copyright-statement>
</root>
Perl code snippet:
use XML::LibXML;
use File::Slurp;

# Parse the XML file named on the command line
open my $fh, '<', $ARGV[0] or die $!;
binmode $fh;
my $dom = XML::LibXML->load_xml(IO => $fh);

# Find the copyright statement and write its text content to a file
my $root = $dom->documentElement();
my @cp = $root->findnodes('//copyright-statement');
write_file('file_name.txt', $cp[0]->textContent);
Desired output: Copyright ©The authors 2024. (with the copyright sign still written as an entity reference, exactly as in the input file)
Actual output: Copyright ©The authors 2024. (the entity has been decoded to the literal character)
I am parsing an XML file which may contain multiple entities. I want to change some XML attributes, values, node names, etc. and save the file again. But when I do so, the entities get decoded automatically. I want to keep the entities intact (the same as in the input file); what change should I make to the Perl code?
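One possible approach (a sketch only, assuming the goal is to have non-ASCII characters such as the copyright sign written back out as numeric character references rather than literal characters): serialise the modified DOM with a pure-ASCII document encoding, so that libxml2 escapes anything it cannot represent.
use strict;
use warnings;
use XML::LibXML;

my $dom = XML::LibXML->load_xml(location => $ARGV[0]);

# ... modify attributes, values and node names here ...

$dom->setEncoding('US-ASCII');   # characters outside ASCII are serialised as &#...; references
open my $out, '>', 'output.xml' or die $!;
binmode $out;
print {$out} $dom->toString();   # document-level toString returns bytes in the document encoding
close $out;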
Published by haarg on Friday 16 May 2025 10:45
fix mismatched quotes in re.pm
Published by con on Friday 16 May 2025 10:36
I am using perlcritic to check code, trying to avoid errors.
I'm also trying to minimize new code loops like foreach, minimizing indentations, etc. I've found that an occasional map can help to make code more readable.
#!/usr/bin/env perl
use 5.040.2;
use warnings FATAL => 'all';
use autodie ':default';
use DDP {output => 'STDOUT', array_max => 10, show_memsize => 1};
my @arr = (1..9);
@arr = map {$_ /= 4} @arr;
p @arr;
This code works perfectly, as shown with DDP.
However, I got the following warning from perlcritic in bold:
Don't modify $_ in list functions at line 9, column 8. See page 114 of PBP. (Severity: 5)
I've read through Perl Best Practices, and the examples the author gives to show why map is a bad choice for modifying arrays like that are much more complex than what I do. Should I be using map, and disregard the warning from perlcritic?
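For what it's worth, the policy fires because the block modifies $_ in place with /=; a map block that simply returns the new value passes perlcritic cleanly (a minimal illustration, not part of the original question):
@arr = map { $_ / 4 } @arr;   # returns new values; $_ itself is not modified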
Published by Generatecode on Friday 16 May 2025 10:15
When it comes to writing a for loop in Perl, many new programmers encounter confusion, especially if transitioning from other programming languages. Understanding the correct syntax and structure is essential for effective coding in Perl. This article will guide you through the process of writing a for loop in Perl, addressing common mistakes and providing you with a solid understanding of its functionality.
A for loop in Perl is typically used to iterate over a range of values or to execute a block of code a specific number of times. It is a powerful tool to automate repetitive tasks efficiently. The basic structure of a for loop in Perl follows this format:
for (my $i = 0; $i < 10; $i++) {
print "Current value of i is: $i\n";
}
In this example, we initialize a variable $i to 0 and loop while $i is less than 10, incrementing $i by 1 on each iteration. This will print the value of $i from 0 to 9. Notably, the block enclosed in braces {} contains the code we want to execute for each iteration.
From your provided code snippet, it’s clear there are several common mistakes that are often encountered:
for my $foo; do is incorrect because the loop variable hasn't been initialized correctly. Each for loop should start with an initialized variable.
$i = $[i+1]; contains a syntax error. The correct way to increment is $i++ or ++$i. The use of $[ is also incorrect in this context.
do is not required in a standard Perl for loop. You can directly use braces {} to define your code block.
With those mistakes in mind, let's take a look at how you can properly write a for loop in Perl:
my $i;
for ($i = 0; $i < 10; $i++) {
print "Iteration: $i\n";
}
In this corrected code, we declare $i outside of the for loop as well. We then use the proper for loop syntax, and the program prints out the current iteration number from 0 up to 9.
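If you only need to count over a range, Perl also lets you iterate over the range directly, which avoids managing the counter by hand (a small variation added for illustration):
for my $i (0 .. 9) {
    print "Iteration: $i\n";
}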
For loops can also be utilized effectively to iterate over an array. Consider the following example:
my @array = (1, 2, 3, 4, 5);
for my $element (@array) {
print "The element is: $element\n";
}
Here, we define an array @array and use a for loop to access each element of the array. The variable $element takes the value of each element in @array, printing it out.
This approach is not only cleaner but also demonstrates the powerful capabilities of Perl’s for loops with collections.
The syntax for a while loop in Perl is:
while (condition) {
# code to execute while condition is true
}
Yes, you can iterate over a hash using a for loop. Here's an example:
my %hash = ("a" => 1, "b" => 2);
for my $key (keys %hash) {
print "Key: $key, Value: $hash{$key}\n";
}
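Another option (not shown above) is the each function, which walks the hash one key/value pair at a time:
while (my ($key, $value) = each %hash) {
    print "Key: $key, Value: $value\n";
}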
For loops in Perl are concise and easy to read, allowing you to iterate through elements or ranges efficiently. Using for loops simplifies code readability and maintenance.
In conclusion, crafting a for loop in Perl is a straightforward task when following the correct syntax and structure. By avoiding common pitfalls and understanding how Perl handles loops, you can effectively utilize for loops to boost your coding capabilities. Feel free to reach out if you have any additional questions or need further examples regarding Perl programming!
Published by jkeenan on Thursday 15 May 2025 12:52
Silence "used only once" warning in porting test As suggested by Dave M. in https://github.com/Perl/perl5/pull/23278#issuecomment-2879530280.
Published by khwilliamson on Thursday 15 May 2025 12:51
uni/method.t: White-space only Some test descriptions were split over input lines, leading to ragged output (suppressed under harness testing).
Published by khwilliamson on Thursday 15 May 2025 12:51
uni/method.t: Update, and don't test specific garbage The modern way to change UTF-8 to its component bytes is to use utf8::encode Some of the tests are making sure that those component bytes aren't mistaken for being UTF-8. What those component bytes are is not actually relevant, but the tests were looking at the specific expected values of them. The problem is that these differ on EBCDIC vs ASCII platforms. Several commits had been added to try to get the correct values on both types, but EBCDIC still was getting failures. And, there is no need to test for the specific values of these irrelevant bytes. What is important is that they were not misinterpreted as if they were UTF-8. This commit goes back to the original tests before those other commits were added, and changes the matching pattern to not look for the specific irrelevant byte values. Doing so makes the tests pass on both types of platforms.
Published by thibaultduponchelle on Thursday 15 May 2025 05:18
Pod/Simple/HTMLLegacy.pm is no longer CUSTOMIZED
Published by Generatecode on Wednesday 14 May 2025 23:45
Using Perl for code development often brings up questions of best practices, especially when it comes to functions like map. In your case, while utilizing perlcritic to ensure code quality, you've encountered a warning relating to modifying $_ in list functions. This article dives into the implications of using map, addresses best practices, and provides alternative solutions to enhance code readability while reducing warnings.
Perlcritic is a tool that allows developers to enforce coding standards and best practices in Perl. The warning you encountered — 'Don't modify $_ in list functions' — specifically refers to the idea that modifying the default variable $_ can create confusion and lead to hard-to-debug code. This is particularly important in larger and more complex codebases. The reason behind this practice is that changes to $_ can unintentionally affect subsequent code or nested blocks.
In your provided code snippet, you modified $_ directly in the map function:
@arr = map { $_ /= 4 } @arr;
While this is syntactically correct, it creates a scenario where $_ might not clearly represent its original intent due to modification. Best practices suggest assigning $_ to a new variable inside the block for clarity. This aids both in understanding your code and ensuring maintainability.
To adhere to best practices while keeping your code clean and readable, consider the following alternatives:
Instead of modifying $_, you can use a named variable in your map function. This improves readability without triggering warnings:
@arr = map { my $value = $_; $value / 4 } @arr;
Another effective method, similar to a map but possibly clearer for those who prefer explicit loops, is a simple for loop. Here's how you could rewrite your example:
for my $index (0 .. $#arr) {
$arr[$index] /= 4;
}
This for loop makes it clear that each element is being modified individually and explicitly, which may enhance readability while avoiding the warnings from Perlcritic.
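For completeness, a postfix for loop can also modify the elements in place, because $_ is aliased to each element of the array (an extra variant, not from the article):
$_ /= 4 for @arr;   # each $_ aliases an element of @arr, so the array is updated in place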
Your goal of avoiding excessive loops and minimizing indentations in your Perl code is commendable, as it leads to cleaner, more maintainable code. While it's important to be mindful of these practices, clarity should not be sacrificed. By utilizing alternatives like named variables or for loops, you balance readability and aesthetic code structure.
A: While you technically can ignore some warnings, following them generally leads to better code practices, especially in collaborative environments.
A: Consider reviewing the documentation or examples associated with the warning. Often, best practices evolve, and it is beneficial to adopt them for long-term maintainability.
A: Focus on critical severity levels and those that affect code clarity and maintenance. High severity warnings usually suggest significant potential issues.
A: Yes, you can still use map, but it's crucial to do so without modifying $_. Use descriptive variables instead for clarity, or opt for equivalents like for loops.
In conclusion, while map can certainly make your Perl code cleaner and more concise, adhering to best practices as indicated by perlcritic makes it even better. The key takeaway is to prioritize readability over convenience. Using named variables or alternative methods ensures your code remains maintainable while satisfying best practice guidelines. Adopting these practices will lead to fewer warnings from tools like perlcritic and improve your overall coding experience in Perl. Embrace the art of coding with a balance of functionality and clarity that translates into high-quality software development.
Published by Lutz on Wednesday 14 May 2025 17:40
Writing a test to use this Directory.pm module
require_ok ('Test::Directory');
... and I get the below message
not ok 5 - require Test::Directory;
1..5
Failed test 'require Test::Directory;'
at t/Dir_Access_01.t line 38.
Tried to require 'Test::Directory'.
Error: File::Path version 2.06 required--this is only version v1.01.11 at /usr/lib/x86_64-linux-gnu/perl-base/File/Temp.pm line 149.
BEGIN failed--compilation aborted at /usr/lib/x86_64-linux-gnu/perl-base/File/Temp.pm line 149.
Compilation failed in require at /usr/local/share/perl/5.38.2/Test/Directory.pm line 9.
BEGIN failed--compilation aborted at /usr/local/share/perl/5.38.2/Test/Directory.pm line 9.
Compilation failed in require at (eval 17) line 2.
Looks like you failed 1 test of 5.
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/5 subtests
My Ubuntu version is
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 24.04.2 LTS
Release: 24.04
Codename: noble
My perl version is
$ perl -v
This is perl 5, version 38, subversion 2 (v5.38.2) built for x86_64-linux-gnu-thread-multi (with 45 registered patches, see perl -V for more detail)
Asking for help with diagnosing the problem - I tried
$ perl -le'
use File::Path;
print for $INC{"File/Path.pm"}, $File::Path::VERSION, @INC;
'
/usr/lib/x86_64-linux-gnu/perl-base/File/Path.pm
2.18
/etc/perl
/usr/local/lib/x86_64-linux-gnu/perl/5.38.2
/usr/local/share/perl/5.38.2
/usr/lib/x86_64-linux-gnu/perl5/5.38
/usr/share/perl5
/usr/lib/x86_64-linux-gnu/perl-base
/usr/lib/x86_64-linux-gnu/perl/5.38
/usr/share/perl/5.38
/usr/local/lib/site_perl
That didn't result in showing the problem, but it triggered a broader search
$ find / -name Path.pm 2>/dev/null
/usr/share/perl5/Debconf/Path.pm
/usr/share/perl5/Dpkg/Path.pm
/usr/share/perl5/XML/Twig/XPath.pm
/usr/share/perl5/XML/XPathEngine/LocationPath.pm
/usr/share/perl/5.38.2/File/Path.pm
/usr/lib/x86_64-linux-gnu/perl-base/File/Path.pm
/usr/local/share/perl/5.34.0/Dpkg/Path.pm
/usr/local/share/perl/5.34.0/XML/Twig/XPath.pm
/home/lutz/.cpan/build/Dpkg-1.22.18-0/lib/Dpkg/Path.pm
/home/lutz/.cpan/build/Dpkg-1.22.18-0/blib/lib/Dpkg/Path.pm
/home/lutz/.cpan/build/XML-Twig-3.53-0/Twig/XPath.pm
/home/lutz/.cpan/build/XML-Twig-3.53-0/Twig/XPath.pm_bak
/home/lutz/.cpan/build/XML-Twig-3.53-0/blib/lib/XML/Twig/XPath.pm
/home/lutz/ws/perl/lib/File/Path.pm
/snap/core18/2855/usr/lib/x86_64-linux-gnu/perl-base/File/Path.pm
/snap/core18/2846/usr/lib/x86_64-linux-gnu/perl-base/File/Path.pm
/snap/gnome-42-2204/176/usr/share/perl5/Dpkg/Path.pm
/snap/gnome-42-2204/202/usr/share/perl5/Debconf/Path.pm
/snap/gnome-42-2204/202/usr/share/perl5/Dpkg/Path.pm
/snap/core20/2571/usr/lib/x86_64-linux-gnu/perl-base/File/Path.pm
/snap/core20/2501/usr/lib/x86_64-linux-gnu/perl-base/File/Path.pm
/snap/core22/1963/usr/lib/x86_64-linux-gnu/perl-base/File/Path.pm
/snap/core22/1908/usr/lib/x86_64-linux-gnu/perl-base/File/Path.pm
I searched for the version in all those files, and /home/lutz/ws/perl/lib/File/Path.pm was the one that matched.
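One way to narrow this down (a suggestion, assuming the stale copy is being pulled in via PERL5LIB or a use lib path rather than the system directories) is to check which File/Path.pm gets loaded when File::Temp is pulled in, with and without PERL5LIB:
# Which File::Path does File::Temp end up loading, and at what version?
perl -MFile::Temp -le 'print $INC{"File/Path.pm"}, " ", $File::Path::VERSION'

# Does the problem go away with PERL5LIB cleared?
PERL5LIB= perl -MFile::Temp -le 'print $INC{"File/Path.pm"}'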
Published by /u/davorg on Wednesday 14 May 2025 16:29
Published by Dave Cross on Wednesday 14 May 2025 16:22
Like most developers, I have a mental folder labelled “useful little tools I’ll probably never build.” Small utilities, quality-of-life scripts, automations — they’d save time, but not enough to justify the overhead of building them. So they stay stuck in limbo.
That changed when I started using AI as a regular part of my development workflow.
Now, when I hit one of those recurring minor annoyances — something just frictiony enough to slow me down — I open a ChatGPT tab. Twenty minutes later, I usually have a working solution. Not always perfect, but almost always 90% of the way there. And once that initial burst of momentum is going, finishing it off is easy.
It’s not quite mind-reading. But it is like having a superpowered pair programmer on tap.
Obviously, I do a lot of Perl development. When working on a Perl project, it's common to have one or more lib/ directories in the repo that contain the project's modules. To run test scripts or local tools, I often need to set the PERL5LIB environment variable so that Perl can find those modules.
But I've got a lot of Perl projects — often nested in folders like ~/git, and sometimes with extra lib/ directories for testing or shared code. And I switch between them frequently. Typing:
export PERL5LIB=lib
…over and over gets boring fast. And worse, if you forget to do it, your test script breaks with a misleading “Can’t locate Foo/Bar.pm” error.
What I wanted was this:
Every time I cd into a directory, if there are any valid lib/ subdirectories beneath it, set PERL5LIB automatically.
Only include lib/ dirs that actually contain .pm files.
Skip junk like .vscode, blib, and old release folders like MyModule-1.23/.
Don't scan the entire world if I cd ~/git, which contains hundreds of repos.
Show me what it's doing, and let me test it in dry-run mode.
With ChatGPT, I built a drop-in Bash function in about half an hour that does exactly that. It's now saved as perl5lib_auto.sh, and it:
Wraps cd() to trigger a scan after every directory change
Finds all qualifying lib/ directories beneath the current directory
Filters them using simple rules: they must contain .pm files and must not be under .vscode/, .blib/, or versioned build folders
Excludes specific top-level directories (like ~/git) by default
Lets you configure everything via environment variables
Offers verbose, dry-run, and force modes
Can append to or overwrite your existing PERL5LIB
You drop it in your ~/.bashrc (or wherever you like), and your shell just becomes a little bit smarter.
source ~/bin/perl5lib_auto.sh
cd ~/code/MyModule
# => PERL5LIB set to: /home/user/code/MyModule/lib
PERL5LIB_VERBOSE=1 cd ~/code/AnotherApp
# => [PERL5LIB] Found 2 eligible lib dir(s):
# => /home/user/code/AnotherApp/lib
# => /home/user/code/AnotherApp/t/lib
# => PERL5LIB set to: /home/user/code/AnotherApp/lib:/home/user/code/AnotherApp/t/lib
You can also set environment variables to customise behaviour:
export PERL5LIB_EXCLUDE_DIRS="$HOME/git:$HOME/legacy"
export PERL5LIB_EXCLUDE_PATTERNS=".vscode:blib"
export PERL5LIB_LIB_CAP=5
export PERL5LIB_APPEND=1
Or simulate what it would do:
PERL5LIB_DRYRUN=1 cd ~/code/BigProject
The full script is available on GitHub: https://github.com/davorg/perl5lib_auto
I’d love to hear how you use it — or how you’d improve it. Feel free to:
⭐ Star the repo
🐛 Open issues for suggestions or bugs
🔀 Send pull requests with fixes, improvements, or completely new ideas
It’s a small tool, but it’s already saved me a surprising amount of friction. If you’re a Perl hacker who jumps between projects regularly, give it a try — and maybe give AI co-coding a try too while you’re at it.
What useful little utilities have you written with help from an AI pair-programmer?
The post Turning AI into a Developer Superpower: The PERL5LIB Auto-Setter first appeared on Perl Hacks.
Published by /u/ReplacementSlight413 on Wednesday 14 May 2025 16:07
If you are looking for a hybrid event around Independence day ... this is the one.
Note that you can submit a publication in one of the tracks if you wish.
Science Perl Track: Full length paper (10-36 pages, 50 minute speaker slot)
Science Perl Track: Short paper (2-9 pages, 20 minute speaker slot)
Science Perl Track: Extended Abstract (1 page, 5 minute lightning talk slot)
Normal Perl Track (45 minute speaker slot, no paper required)
Full announcement: https://blogs.perl.org/users/oodler_577/2025/05/call-for-papers---perl-community-conference-summer-2025.html
Submission website
https://www.papercall.io/cfps/6270/submissions/new
(In case you are interested I will be presenting the interface to a multi-threaded and GPU enabled library for manipulating bitset containers)
Published by xpt on Wednesday 14 May 2025 15:17
This question is specifically for Perl 5 instead of Perl 6.
The printf function in Perl allows for positional parameters in its format string, providing a way to reorder or reuse arguments. Positional parameters are specified using the %n$ syntax within the format string, where n is the index of the argument to be used, starting from 1.
For example:
$ perl -e 'printf "%2\$s %1\$d, %2\$s %1\$d\n", 10, "hello";'
hello 10, hello 10
However, I've stared at my following code many many times, but still am unable to figure out what's going wrong:
$ perl -e 'my $i, $j = 10, 20; printf "i = %1\$d, j = %2\$d, i again = %1\$d, j again = %2\$d\n", $i, $j;'
i = 0, j = 10, i again = 0, j again = 10
$ perl -v
This is perl 5, version 36, subversion 0 (v5.36.0) built for x86_64-linux-gnu-thread-multi
$ apt-cache policy perl
perl:
Installed: 5.36.0-7
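For comparison (an addition, not part of the original question): the positional parameters above are working correctly; the surprise comes from the declaration, since my $i, $j = 10, 20; only declares $i (leaving it undefined) and assigns 10 to the package variable $j. Declaring and assigning as a list gives the expected result:
$ perl -e 'my ($i, $j) = (10, 20); printf "i = %1\$d, j = %2\$d, i again = %1\$d, j again = %2\$d\n", $i, $j;'
i = 10, j = 20, i again = 10, j again = 20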
Published by Brett Estrade on Tuesday 13 May 2025 20:25
This is a hybrid (in-person and virtual) conference being held in Austin, TX on July 3rd-4th.
Did you miss your chance to speak, or do you wish to speak at the only available Perl Science Track (and get published in the Science Perl Journal)? Or maybe you just can't get enough Perl this summer??? Submit here ... or, for more information on the PCC, including registration, special event registration, and donation links, click here. For questions you may email us at science@perlcommunity.org or find us in the Perl Applications & Algorithms discord server.
The following lengths will be accepted for publication and presentation:
Science Perl Track: Full length paper (10-36 pages, 50 minute speaker slot)
Science Perl Track: Short paper (2-9 pages, 20 minute speaker slot)
Science Perl Track: Extended Abstract (1 page, 5 minute lightning talk slot)
Normal Perl Track (45 minute speaker slot, no paper required)
You may ask, where is the Winter SPJ or the videos? We are working on them, promise! (It's a lot of work, as some of you know.) See also the announcements on PerlMonks and r/perlcommunity.
Published by Generatecode on Tuesday 13 May 2025 20:01
Running tests in Perl is an essential part of ensuring that your modules are functioning correctly. When you attempt to run a test that requires the Test::Directory module and encounter an error such as:
not ok 5 - require Test::Directory;
1..5
Failed test 'require Test::Directory;'
at t/Dir_Access_01.t line 38.
Tried to require 'Test::Directory'.
Error: File::Path version 2.06 required--this is only version v1.01.11 at /usr/lib/x86_64-linux-gnu/perl-base/File/Temp.pm line 149.
BEGIN failed--compilation aborted at /usr/lib/x86_64-linux-gnu/perl-base/File/Temp.pm line 149.
Compilation failed in require at /usr/local/share/perl/5.38.2/Test/Directory.pm line 9.
BEGIN failed--compilation aborted at /usr/local/share/perl/5.38.2/Test/Directory.pm line 9.
Compilation failed in require at (eval 17) line 2.
Looks like you failed 1 test of 5.
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/5 subtests
These errors can be frustrating, but they commonly point to version mismatches or missing dependencies. In this article, we’ll explore why this issue arises and how to resolve it effectively.
The error message is indicating that the Test::Directory module requires a specific version of the File::Path module (version 2.06 in this case), but your system currently has an older version (v1.01.11). This version mismatch is causing the test to fail when the script attempts to load Test::Directory. The error message also illustrates that the failure occurs due to the inability to require the necessary components, leading to a cascading failure where subsequent tests cannot run.
From your output, here are the key components of your environment: Ubuntu 24.04.2 LTS (noble) and Perl v5.38.2. Both your Ubuntu and Perl versions are up to date, but the package versions of some Perl modules are not. Let's get into how to fix this.
To fix the issue with Test::Directory, follow the steps below.
First, check the installed version of File::Path using CPAN or your package manager. You can do this by running:
perl -MFile::Path -e 'print $File::Path::VERSION;'
If the version is indeed older, you need to update it. Here, we have two options: using the system package manager or CPAN.
Since you are on Ubuntu, running the following commands in your terminal will update your packages and their associated Perl modules:
sudo apt update
sudo apt upgrade libfile-path-perl
You may prefer using CPAN to update the specific module directly. If you have multiple versions of Perl installed, ensure you're using the correct CPAN for your Perl version by running:
cpan File::Path
This will automatically download and install the latest version of the File::Path module.
After upgrading the required module, it’s time to rerun your tests. Navigate to your test directory and execute:
perl Makefile.PL
make test
This should now pass without errors.
Once you have re-run the tests, you will see output that might look like this:
1..5
ok 1 - require Test::Directory;
ok 2 - other tested functionality;
...
ok 5 - final test passes.
This output confirms that the required module is functioning properly and your test suite has passed.
What is Test::Directory?
Test::Directory is a Perl module designed to help write tests for modules that interact with the file system. It provides methods to handle directory-related operations in a test environment.
Check the error messages for any other missing module dependencies and follow a similar upgrading process using either APT or CPAN to resolve them.
Regularly check and upgrade your modules using CPAN or your distribution's package manager to minimize compatibility issues.
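As a sketch of how that periodic check might look (this assumes the App::cpanoutdated and App::cpanminus tools are installed; it is not part of the original article):
# List modules that are older than the current CPAN release
cpan-outdated -p

# Pipe the list straight into cpanm to upgrade them
cpan-outdated -p | cpanm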
Published by Generatecode on Tuesday 13 May 2025 18:30
If you're trying to develop web applications using Perl, you might have come across the Mojolicious web framework. It's a powerful tool that allows you to create robust, real-time web applications with minimal setup. However, if you've copied the 'Hello World' code and run into some issues, you're not alone. In this article, we'll explore the potential reasons for the problems you're experiencing and provide a step-by-step guide to correctly running a basic Mojolicious application on Windows.
The error you encountered is quite common among beginners. Mojolicious applications require an execution context that matches the selected mode. When you try to run your test.pl script directly from the command line, it doesn't know if you intend to run it as a CGI application, HTTP daemon, or in another mode. This results in the usage help message instead of running your application.
To successfully run your 'Hello World' app, follow these steps:
Before proceeding, ensure that Mojolicious is properly installed. Open your command prompt and type:
cpan Mojolicious
If you see a message indicating that Mojolicious is already installed, you’re good to go.
Next, ensure you're in the directory where your test.pl script is located. In this case, it should be:
cd C:\Mojolicious
You should run the Mojolicious application with the proper command to start the HTTP daemon. Instead of directly executing test.pl, use:
morbo test.pl
This command starts the application in development mode and automatically picks up changes you make to the code.
After running the command, you should see an output indicating that your app is running. It usually defaults to port 3000. Open your browser and navigate to:
http://localhost:3000/
If everything is set up correctly, you should see the output 'Hello World' in your browser.
Once your application is working locally, you might want to deploy it. You can run your application with a PSGI or CGI environment using:
plackup test.pl
or
morbo test.pl -m production
for production mode.
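Alternatively, Mojolicious applications ship with a built-in daemon command (shown in the usage output the script prints), which is closer to a production setup than the morbo development server; this variation is not covered in the article:
perl test.pl daemon -m production -l http://*:8080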
Mojolicious is a real-time web framework for Perl that offers a full stack of features for building web applications. It includes a built-in web server, routing, and more.
This usually happens when the application is executed without specifying how it should run (as CGI, PSGI, etc.). Using morbo or plackup sets the context correctly.
Yes, Mojolicious is cross-platform runnable, which means it can work on various operating systems such as Windows, Linux, and macOS.
Setting up a Mojolicious application to display 'Hello World' is a straightforward process if you follow the right steps. Always remember to start with the correct commands to avoid confusion between different operable modes. From installing Mojolicious to navigating and running your application, these instructions should help you get started on the right foot. This process will pave the way for developing more advanced features and functionalities in your Perl-based web applications.
Published by /u/esiy0676 on Tuesday 13 May 2025 13:28
This might have been asked previously in different flavours, but I wonder why, when Perl went on to lose popularity (as I think that's all it is, e.g. in comparison with Python), it didn't go on to become at least the default scripting language where shell scripts still reign.
Anyone who (has to) write a shell script feels instantly both 1) at home; and 2) liberated when the same can be written in Perl, in many ways Perl feels like a shell syntax on steroids. Perl is also ubiquitous.
It's almost like when I need constructs of Bash, I might as well rely on Perl being available on the target host. So twisting my original question a bit more: why do we even still have shell scripts when there's Perl?
Published by MarkB on Tuesday 13 May 2025 12:56
I consider myself maybe barely an intermediate Perl developer. I am trying to see if Mojolicious is a good platform to move forward with replacing some old CGI intranet sites. I can't get the basic "Hello World" app to run. I am running on Windows (10) and it appears that Mojolicious is installed. perldoc Mojolicious returns documentation.
I created a "test.pl" file in C:\Mojolicious. The entire code was copied as:
use Mojolicious::Lite;
get '/' => sub {
my $self = shift;
$self->render(text =>'Hello World');
};
app->start;
Trying to launch it from the command prompt yields:
C:\Mojolicious>test.pl
Usage: APPLICATION COMMAND [OPTIONS]
mojo version
mojo generate lite-app
./myapp.pl daemon -m production -l http://*:8080
./myapp.pl get /foo
./myapp.pl routes -v
Tip: CGI and PSGI environments can be automatically detected very often and
work without commands.
Options (for all commands):
-h, --help Get more information on a specific command
--home <path> Path to home directory of your application, defaults to
the value of MOJO_HOME or auto-detection
-m, --mode <name> Operating mode for your application, defaults to the
value of MOJO_MODE/PLACK_ENV or "development"
Commands:
cgi Start application with CGI
cpanify Upload distribution to CPAN
daemon Start application with HTTP and WebSocket server
eval Run code against application
generate Generate files and directories from templates
get Perform HTTP request
inflate Inflate embedded files to real files
prefork Start application with pre-forking HTTP and WebSocket server
psgi Start application with PSGI
routes Show available routes
version Show versions of available modules
See 'APPLICATION help COMMAND' for more information on a specific command.
C:\Mojolicious>
This has got to be something a beginner / amateur would be able to fix, but I have no idea. If I comment out "app->start;" the script completes without the above error, but of course there is no listener. Thoughts?
Published by /u/whoShotMyCow on Monday 12 May 2025 17:12
EDIT: solved.
I hope the title is proper, because I can't find another way to describe my issue. Basically, I've started learning Perl recently, and decided to solve a year of Advent of Code (daily coding questions game) using it. To start, I wrote the code for day 1. Here's a dispatcher script I created:
#!/usr/bin/perl
use strict;
use warnings;
use lib 'lib';
use feature 'say';
use Getopt::Long;
use JSON::PP;
use File::Slurper qw(read_text write_text);

my ($day, $help);
GetOptions(
    "d|day=i" => \$day,
    "h|help"  => \$help,
) or die "Error in command-line arguments. Use --help for usage.\n";

if ($help || !$day) {
    say "Usage: perl aoc.pl -d DAY\nExample: perl aoc.pl -d 1";
    exit;
}

my $json_file = 'solutions.json';
my $solutions = {};
if (-e $json_file) {
    $solutions = decode_json(read_text($json_file));
}

my $module = "AOC::Day" . sprintf("%02d", $day);
eval "require $module" or do {
    say "Day $day not solved yet!";
    exit;
};

# Load input file
my $input_file = "inputs/day" . sprintf("%02d", $day) . ".txt";
unless (-e $input_file) {
    die "Input file '$input_file' missing!";
}
my $input = read_text($input_file);

# Debug: Show input length and first/last characters
say "Input length: " . length($input);
say "First char: '" . substr($input, 0, 1) . "'";
say "Last char: '" . substr($input, -1) . "'";

my $day_result = {};
if ($module->can('solve_p1')) {
    $day_result->{part1} = $module->solve_p1($input);
    say "Day $day - Part 1: " . ($day_result->{part1} // 'N/A');
}
if ($module->can('solve_p2')) {
    $day_result->{part2} = $module->solve_p2($input);
    say "Day $day - Part 2: " . ($day_result->{part2} // 'N/A');
}

$solutions->{"day" . sprintf("%02d", $day)} = $day_result;
write_text($json_file, encode_json($solutions));
here's the code for lib/AOC/Day01.pm:
package AOC::Day01;
use strict;
use warnings;

sub solve_p1 {
    my ($input) = @_;
    $input =~ s/\s+//g;
    return $input =~ tr/(// - $input =~ tr/)//;
}

sub solve_p2 {
    return undef;
}

1;
however, part 1 always returns 0, even when running for verified inputs that shouldn't produce 0. the output is like this:
```
-> perl aoc.pl -d 1
Input length: 7000
First char: '('
Last char: '('
Day 1 - Part 1: 0
Day 1 - Part 2: N/A
```
I've manually verified that the input length and the first and last characters match the actual input file.
here's my directory structure:
.
├── aoc.pl
├── inputs
│   └── day01.txt
├── lib
│   └── AOC
│       └── Day01.pm
└── solutions.json
any idea why I'm getting a 0 for part 1, instead of the correct answer?
Published by Mayur Koshti on Monday 12 May 2025 11:39
Published by Leon Timmermans on Sunday 11 May 2025 17:54
A week ago I attended the 2025 PTS. For me it was a different PTS than the previous ones.
Firstly because it was my first PTS without Abe Timmerman. He was a regular at both the PTS (as maintainer of Test::Smoke) and the Amsterdam Perl Mongers. In fact the last time I saw him was on our flight back to Amsterdam after the PTS in Lisbon last year. He was greatly missed.
Secondly, because of a question that Book asked at the very beginning of the PTS: how often we had been to the PTS before. I was one of the few who had attended more than 10 of them. Combined with the fact that several other regular attendees couldn't make it that meant that this PTS I spent more time than ever on helping others with various issues.
I also spent quite a bit of time in discussions. Most obviously on the future of the CPAN Security group that I joined last year. I also spent time giving feedback to the new CPAN Testers group, and talked with a bunch of people to get feedback on my idea to write a new CPAN client.
But obviously I also did a lot of programming work. I synced up a year's worth of updates on ExtUtils::ParseXS to CPAN (more changes are coming up from David Mitchell after 5.42). I released a new ExtUtils::Builder::Compiler (now working with Microsoft's compiler thanks to Mithaldu helping me out). I made ExtUtils::Manifest compatible with Test2::Harness/yath after someone (Breno?) pointed out it wasn't. I released a new App::ModuleBuildTiny with better configuration options after I sat down with Paul Evans to observe his release process. I fixed a minor issue in Dist::Zilla::Plugin::ModuleBuildTiny on older perls that Julien had pointed out to me. I worked on making Module::Build::Tiny's XS support more compatible with Devel::Cover after Paul Johnson pointed out that combination didn't work. I worked on making Test::Harness more asynchronous/parallel. I released a new Software::License and worked on improving PAUSE's password handling. And I updated experimental.pm for 5.42.
I spent a bunch of time investigating how to get TLS support into core. This is much more complicated than one might hope; partially because Net::SSLeay is a 29-year-old large XS module (OpenSSL forked from SSLeay in 1998), and partially because OpenSSL itself deeply depends on Perl in its build system. This is looking like it will be my next big project.
And I scared everyone by tripping on a ramp that got treacherously slippy in the rain. I didn't break anything. I think.
All in all I had a really useful PTS, even if I mostly did completely different things from what I had been initially planning.
All of this wouldn't be possible without our wonderful organizers (Daniel, Philippe, Laurent, Tina and Breno), as well as our sponsors:
Booking.com, WebPros, CosmoShop, Datensegler, OpenCage, SUSE, Simplelists Ltd, Ctrl O Ltd, Findus Internet-OPAC, plusW GmbH
Grant Street Group, Fastmail, shift2, Oleeo, Ferenc Erki
The Perl and Raku Foundation, Japan Perl Association, Harald Joerg, Alexandros Karelas (PerlModules.net), Matthew Persico, Michele Beltrame (Sigmafin), Rob Hall, Joel Roth, Richard Leach, Jonathan Kean, Richard Loveland, Bojan Ramsa
Published by /u/niceperl on Sunday 11 May 2025 13:52
Published by prz on Sunday 11 May 2025 15:51
This is the weekly favourites list of CPAN distributions. Votes count: 43
Week's winner: Const::XS (+3)
Build date: 2025/05/11 13:42:58 GMT
Clicked for first time:
Increasing its reputation:
Published by Dave Cross on Sunday 11 May 2025 12:38
You might know that I publish books about Perl at Perl School. What you might not know is that I also publish more general technical books at Clapham Technical Press. If you scroll down to the bottom of that page, you'll see a list of the books that I've published. You'll also see evidence of the problem I've been solving this morning.
Books tend to have covers that are in a portrait aspect ratio. But the template I’m using to display them requires images in a landscape aspect ratio. This is a common enough problem. And, of course, we’ve developed a common way of getting around it. You’ll see it on that page. We create a larger version of the image (large enough to fill the width of where the image is displayed), apply some level of Gaussian blur to the image and insert a new copy of the image over that. So we get our original image with a tastefully blurred background which echoes the colour of the image. ChatGPT tells me this is called a “Blurred Fill”.
So that’s all good. But as I’m publishing more books, I need to create these images on a pretty regular basis. And, of course, if I do something more than three or four times, I will want to automate.
A while ago, I wrote a simple program called “blur” that used Imager to apply the correct transformations to an image. But this morning, I decided I should really make that program a bit more useful. And release it to CPAN. So that’s what I’ve been doing.
Adjusting images to fit various aspect ratios without losing essential content or introducing unsightly borders is a frequent challenge. Manually creating a blurred background for each image is time-consuming and inefficient, especially when dealing with multiple images or integrating into automated workflows.
App::BlurFill is a Perl module and CLI tool designed to streamline the process of creating images with blurred backgrounds. It takes an input image and generates a new image where the original is centred over a blurred version of itself, adjusted to the specified dimensions.
Install via CPAN:
cpanm App::BlurFill
Then to use the CLI tool:
blurfill --width=800 --height=600 input.jpg
This command will generate input_blur.jpg with the specified dimensions.
App::BlurFill also includes a web interface built with Dancer2. You can start the web server and send POST requests with an image file to receive the processed image in response.
Example using curl:
curl -OJ -X POST http://localhost:5000/blur -F "image=@input.jpg"
The response will be the new image file, ready for use.
App::BlurFill is written in Perl 5.40, using the new class feature (perlclass). It makes use of the Imager module for image processing tasks. Currently, it supports JPG, PNG and GIF.
Future enhancements may include:
App::BlurFill aims to simplify the task of creating visually consistent images across various platforms and devices. Feedback and contributions are welcome to help improve its functionality and usability.
Please let me know if you find it useful or if there are extra features you would find useful.
Oh, and why not buy some Clapham Technical Press books!
Update: I forgot to include a link to the GitHub repository. It’s at https://github.com/davorg-cpan/app-blurfill
The post Reformating images with App::BlurFill first appeared on Perl Hacks.
Published on Sunday 11 May 2025 00:00
Published on Saturday 10 May 2025 19:15
The examples used here are from the weekly challenge problem statement and demonstrate the working solution.
You are given an array of integers. Write a script to return the maximum between the number of positive and negative integers. Zero is neither positive nor negative.
Our solution will be pretty short, contained in just a single file that has the following structure.
The preamble is just whatever we need to include. Here we aren’t using anything special, just specifying the latest Perl version.
The main section is just some basic tests.
MAIN:{
    say maximum_count -3, -2, -1, 1, 2, 3;
    say maximum_count -2, -1, 0, 0, 1;
    say maximum_count 1, 2, 3, 4;
}
All the work is done in the following subroutine.
We do the filtering with a grep.
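A sketch of such a subroutine, counting with grep in scalar context (a reconstruction for illustration, not necessarily the author's exact fragment):
sub maximum_count {
    my @integers = @_;
    my $positive = grep { $_ > 0 } @integers;   # count of positive values
    my $negative = grep { $_ < 0 } @integers;   # count of negative values
    return $positive > $negative ? $positive : $negative;
}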
$ perl perl/ch-1.pl
3
2
4
You are given an array of positive integers. Write a script to return the absolute difference between digit sum and element sum of the given array.
Our solution will be pretty short, contained in just a single file that has the following structure.
The main section is just some basic tests.
MAIN:{
    say sum_difference 1, 23, 4, 5;
    say sum_difference 1, 2, 3, 4, 5;
    say sum_difference 1, 2, 34;
}
All the work is done in the following subroutine.
We compute the digit sum by splitting each element as a string and then summing the list of digits.
The element sum is a straightforward summing of the elements.
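A sketch consistent with that description and the sample output below (again a reconstruction for illustration, not necessarily the author's exact fragment):
use List::Util qw(sum0);

sub sum_difference {
    my @integers    = @_;
    my $digit_sum   = sum0 map { split //, $_ } @integers;   # sum of every digit
    my $element_sum = sum0 @integers;                         # sum of the elements
    return abs($digit_sum - $element_sum);
}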
$ perl perl/ch-2.pl
18
0
27
Published by Dave Cross on Saturday 10 May 2025 16:27
I write blog posts in a number of different places:
And most of those posts get syndicated to other places:
It’s also possible that I’ll write original posts on one of these syndication sites without posting to one of my sites first.
Recently, when revamping my professional website, I decided that I wanted to display a list of recent posts from all of those sources. But because of the syndication, it was all a bit noisy: multiple copies of the same post, repeated titles, and a poor reading experience.
What I wanted was a single, clean feed — a unified view of everything I’ve written, without repetition.
So I wrote a tool.
I wanted to:
App::FeedDeduplicator is a new CPAN module and CLI tool for aggregating and deduplicating web feeds.
It reads a list of feed URLs from a JSON config file, downloads and parses them, filters out duplicates (based on canonical URLs or titles), sorts the results by date, and emits a clean, modern feed.
{ "output_format": "json", "max_entries": 10, "feeds": [{ "feed": "https://perlhacks.com/feed/", "web": "https://perlhacks.com/", "name": "Perl Hacks" }, { "feed": "https://davecross.substack.com/feed", "web": "https://davecross.substack.com/", "name": "Substack" }, { "feed": "https://blog.dave.org.uk/feed/", "web": "https://blog.dave.org.uk/", "name": "Davblog" }, { "feed": "https://dev.to/feed/davorg", "web": "https://dev.to/davorg", "name": "Dev.to" }, { "feed": "https://davorg.medium.com/feed", "web": "https://davorg.medium.com/", "name": "Medium" }] }
Install via CPAN:
cpanm App::FeedDeduplicator
Then run it with:
feed-deduplicator config.json
If no config file is specified, it will try the FEED_DEDUP_CONFIG environment variable or fall back to ~/.feed-deduplicator/config.json.
There’s also a Docker image with the latest version installed.
The tool is written in Perl 5.38+ and uses the new class feature (perlclass) for a cleaner OO structure:
App::FeedDeduplicator::Aggregator handles feed downloading and parsing
App::FeedDeduplicator::Deduplicator detects and removes duplicates
App::FeedDeduplicator::Publisher generates the final output
It's all very much a work in progress at the moment. It works for me, but there are bound to be some improvements needed so that it works for more people. A few things I already know I want to improve:
If you want a clean, single-source feed that represents your writing without duplication, App::FeedDeduplicator might be just what you need.
I’m using it now to power the aggregated feed on my site. Let me know what you think!
The post Cleaner web feed aggregation with App::FeedDeduplicator first appeared on Perl Hacks.
Published by Mohammad Sajid Anwar on Saturday 10 May 2025 01:35
Perl Toolchain Summit 2025, my first time, thanks to the organisers.
Here is my event report: https://theweeklychallenge.org/blog/pts-2025
Published by alh on Friday 09 May 2025 07:21
Tony writes:
```
[Hours] [Activity]

2025/03/03 Monday
 0.83  #23015 research and comment
 0.22  #23012 research and comment
 0.45  #22827 review updates and re-approve
 0.68  amagic_call/coverity follow-up, work on a fix, commit message wording, check is clangsa picks this up
 0.15  amagic_call/coverity, commit message, push for CI
 0.28  #22642 review updates and approve
 3.39

2025/03/04 Tuesday
 0.80  #23012 comment, consideration, comment some more
 1.38  #23043 review, research
 0.63  #23043 more review, comment
 0.30  #23056 review and comment
 0.08  #23058 review and approve
 0.17  #23061 review and approve
 0.10  #23062 review and approve
 3.71

2025/03/05 Wednesday
 0.27  #23053 review discussion and comment
 0.12  #23056 review update and approve
 0.12  #23057 review and comment
 0.10  #23059 review and comment
 0.17  #23063 review and comment
 1.20  #22423 clean up, push for CI, message to p5p
 0.28  coverity amagic_call PL_op:check CI results and open PR 23071
 0.57  #23054 testing, comment
 0.35  #23070 review and approve
 0.38  #23072 review and approve
 0.20  #23069 review and approve
 4.03

2025/03/06 Thursday
 0.18  #23063 review updates and approve
 0.15  #23059 review updates and approve
 1.10  #23075 review, research and comments
 0.73  #23076 testing, debugging test failure, comment
 0.75  #23076 more debugging, research and comment
 0.23  #23077 review and approve
 0.10  #23078 review and approve
 4.46

2025/03/10 Monday
 0.47  github notifications
 0.23  #23079 review updates and approve
 0.73  #23075 review discussion and comment
 0.27  #23080 review and comment
 0.47  #23095 research and comment
 0.25  #23082 review and approve
 0.22  #23083 review and comment
 0.18  #23094 review change and links, comment
 0.35  #23083 review changes, comment
 4.10

2025/03/11 Tuesday
 0.15  review overnight #p5p discussion
 0.08  #23097 review change and discussion, approve
 0.18  #23071 apply to blead (manually, github UI wigged out complaining I was trying to do a squash merge)
 0.10  #23073 apply to blead manually, github is confused here too
 0.30  review coverity results
 2.33

2025/03/12 Wednesday
 0.37  #23075 follow-up
 1.30  #23076 look into hooks branch and comment
 0.53  #23012 review latest and approve
 0.17  #23087 review and approve
 0.52  #23088 review, notice a separate typo and make PR 23099, approve
 0.35  #23092 review, think, approve
 3.56

2025/03/13 Thursday
 0.37  #23091 review and approve
 1.98  #23096 review...
 0.65  #23096 review and approve
 3.47

2025/03/17 Monday
 1.25  #23075 read discussion, research and comment
 0.30  #23108 review and approve, comment
 0.83  #23120 start review, research
 0.52  #23120 comment
 3.47

2025/03/18 Tuesday
 0.30  review leonerd’s av_store API improvements discussion
 1.60  #23075 research, review overnight discussion (side trip into a coverity scan report)
 1.32  #23075 more review, work on adding sv_vstring_get to
 3.22

2025/03/19 Wednesday
 0.22  #23108 review discussion, verify overload reordering, comment
 1.02  #23112 review and approve
 0.62  #23121 review, comment and approve
 0.80  look into why dist-modules tests aren’t testing threaded perls, testing
 3.34

2025/03/20 Thursday
 0.33  github notifications
 0.08  #23120 comment
 1.37  #23144 review, review history
 0.10  #23144 approve
 0.38  #23074 see if downstream fixed it (hard to be sure at this point)
 0.87  #21877 rebase and push, add comments and mark ready for
 3.13

2025/03/24 Monday
 0.77  #23152 research and comment
 2.18  #23151 review code, work on a reproducer, comment, test a fix (needs tests), try to work out where/how to test
 3.93

2025/03/25 Tuesday
 1.68  #23151 more work on a testable case
 0.97  #23081 research, code profiling and approve with comment
 3.50

2025/03/26 Wednesday
 0.33  ppc#70 comment
 0.80  av_store thread, review discussion, consider some replies
 0.35  #23150 comment
 0.60  #23153 review and comment
 0.32  #23153 follow-up, comment
 0.22  #23157 review and approve
 3.20

2025/03/27 Thursday
 2.35  #23075 check ppport.h CI results, rewrite since the API changed, testing and push for CI again
 3.48

2025/03/31 Monday
 0.43  #23163 review and approve
 0.95  #23162 review and comment
 0.23  #23161 review and approve
 0.27  #23153 review and approve
 1.07  #23075 cleanup, trying to understand the code
 1.62  #23151 work up a test code, testing, perldelta, push for
 4.57
Which I calculate is 60.89 hours.
Approximately 60 tickets were reviewed or worked on, and 2 patches were applied. ```
Published by alh on Friday 09 May 2025 07:18
Tony writes:
```
[Hours] [Activity]

2025/02/03 Monday
 0.28  ppc 30/31 list catch up
 0.13  github notifications
 0.08  #22955 briefly comment
 0.62  #22956 review, testing and comment
 0.23  #22957 review and approve
 0.30  #22958 review and approve
 0.42  #22970 review and approve
 2.98

2025/02/04 Tuesday
 0.45  github notifications
 0.23  #22955 review and approve
 1.27  #22963 review and comments
 3.18

2025/02/05 Wednesday
 2.02  #22959 debuggging, work up a fix and push for CI
 3.27

2025/02/06 Thursday
 0.95  #22967 review updates and approve
 0.85  #22959 review CI results and fix an issue
 0.78  #22423 work on a fix
 0.18  #22959 review CI results, perldelta, make PR 22976
 3.76

2025/02/10 Monday
 0.12  #22976 re-check, apply to blead
 0.17  #22963 review updates and approve
 0.15  #22910 review updates and approve
 0.82  #22940 review updates and comment
 1.72  #22927 review, benchmarking, approve and comment
 4.78

2025/02/11 Tuesday
 1.07  #22423 more tied hash (internet down), get it working, some cleanup, need to work on related ticket but need detail, on hold for now
 1.03  #21877 work on issue with fix, reproduce and isolate
 0.23  #21877 work out what’s going on (in /(?{ s!!x! })/ moves the PV of $_ as the match is going through the PV of $_),
 2.33

2025/02/12 Wednesday
 0.23  #22940 review updates and approve
 0.72  #22985 review and approve
 0.35  #22986 review and comment
 0.40  #22752 work on rebase
 4.08

2025/02/17 Monday
 0.10  #22766 work out what happened here
 0.68  #22884 testing, comment
 1.05  #22971 review and comments
 0.35  #22960 review, research and comment
 1.12  #22989 review, research, comment
 0.40  #23007 review and comment
 4.90

2025/02/18 Tuesday
 1.02  #22989 review update, debugging i386 CI, comment
 0.87  #22423 debug CI failure
 0.40  #22989 review discussion, research and comment
 2.77

2025/02/19 Wednesday
 0.55  #22989 research and comment, look over changes and approve
 1.72  #22423 debug test issues
 1.07  #22423 fixes, push for more CI
 0.80  #22880 comment some more
 4.77

2025/02/20 Thursday
 0.77  #23016 review test failure, review cpan code and comment
 1.55  #23010 review discussion, research
 1.05  #23010 research
 3.77

2025/02/24 Monday
 2.20  #23019 research and comments
 0.77  #23022 work on reproduce and reproduce, comment
 5.24

2025/02/25 Tuesday
 0.28  #23022 fix porting error and re-push
 0.25  #23016 open tokuhirom/Perl-Lexer#14
 0.17  #23020 comment
 0.62  #23015 look into Prima, comment
 0.48  #p5p win32 performance discussion
 0.33  #23012 review
 1.08  #23012 review and comments
 3.44

2025/02/26 Wednesday
 0.38  #23022 check CI results, cleanup and push
 0.13  #23025 briefly comment
 2.12  review coverity scan results
 0.68  review clang sa results, fix one issue and push for CI
 4.11

2025/02/27 Thursday
 0.17  clang sa fix: check CI results and make PR 23034
 0.25  #23025 briefly comment (and look at dmq’s MSVC failure for wellrng)
 0.43  #22971 review updates and approve
 0.08  #23034 discussion catch up
 0.25  #p5p discussion re RNGs
 0.30  #23026 comment
 0.08  #22971 look over perldelta and comment
 0.08  #23029 review and approve
 0.50  #22907 invoke the PSC
 2.66

2025/02/28 Friday
 0.37
Which I calculate is 56.41 hours.
Approximately 39 tickets were reviewed or worked on, and 2 patches were applied. ```
Published by alh on Friday 09 May 2025 07:12
Dave writes:
This is my monthly report on work done during Mar and Apr 2025, covered by my TPF perl core maintenance grant.
I spent most of my time continuing to refactor Extutils::ParseXS, as a precursor to adding reference-counted stack (PERL_RC_STACK) abilities to XS.
In particular, I've recently pushed a large PR, intended to be merged once 5.42.0 is done, which converts ParseXS to create an AST for each XSUB it parses. This has three main benefits.
First, it separates out the parsing and code generation.
Second, it splits up the parsing of XSUBs into manageable segments. For example, the longest sub that is concerned with parsing XSUBs is now 182 lines long and the longest concerned with code generation is 342 lines. Prior to this PR, the longest (concerned with both parsing and code generation) was 1412 lines.
Third, the parsing state is now stored in the AST's nodes, close to where it's relevant, rather than all state being stored in one big confusing Extutils::ParseXS hash.
In summary: in 5.40.0 and before, the XS parsing code was a buggy, mostly untested, unmaintainable mess, that nobody understood properly, and which was risky to modify. It is now modern and (hopefully) can accept changes easily.
Summary:
Total: * 126:52 (HH::MM)
Published by Randal L. Schwartz on Thursday 08 May 2025 01:02
A Futility Closet post references a Perl "poem" over two decades old. I remember chuckling at it when it first appeared. Although it was published "anonymously", I'm pretty sure I know who wrote it. :)
Published by Paul Johnson on Wednesday 07 May 2025 19:09
This weekend I was once again privileged to attend the Perl Toolchain Summit (PTS). This year it was held in the lovely city of Leipzig.
The PTS continues to be my favourite technical event of the year. In part this is because I get to meet old friends and make new ones, but it's also because the summit really serves its purpose and I am able to make so much progress on the projects I have which belong in Perl's toolchain ecosystem.
PTS isn't a conference - it's a four-day working meeting. It brings together people working on toolchain projects to solve common problems and push the work forward. I did get a lot of work done, but that's not the main focus, for me anyway. I see it as a time to solve problems and plan the way forward, and for me PTS facilitates that in the most wonderful fashion.
Numerous times after hitting a problem or having some question I was able to walk a few feet to the person in the world best qualified to solve the problem or answer the question. And a number of folk also came to me with questions or reported problems so I hope I was able to help in a similar way.
So, in terms of results, I worked through a number of items on my list (including support for the new ^^= operator).

But beyond that I was able to have chats with various people and groups regarding all sorts of areas touching on coverage, testing, hosting, the perl core, build systems and various other adjacent topics. And this was the real value of the event. I came with a list of things to work on, knowing I wouldn't be able to do all of them. And whilst I worked on several topics I had planned, and especially the most important ones, I ended up not touching others and moving some in other directions based on discussions with people who understand things better than I.
In particular I want to call out Ferenc Erki for suggesting that the solution to my problem of running out of disk space for cpancover reports was not anything I was considering, but rather migrating to a filesystem which implements transparent compression, and then helping me implement that. I'm still playing with the small details, but this should solve that particular problem for a good few years.
And to give just one example of the value in bringing folk together, I was looking through some recent Devel::Cover tickets and noticed one related to using Devel::Cover with Module::Build::Tiny. I had previously put that to one side because it didn't mean much to me. But Leon Timmermans was sitting almost next to me and so I showed him the bug report. He immediately identified the problem and showed me a ticket and PR he had created a while back to implement the solution, but he hadn't had a use case and so he wasn't sure whether or not to merge it. Well, now Devel::Cover provided the use case and so we have two previously ignored tickets now referencing each other and a solution ready to go.
Part of the reason that this year I worked on areas I hadn't initially expected was because we were missing a few folk who regularly attend PTS. For some this was due to other commitments. For some the reason was more sad.
But this allowed a number of folk to attend for the first time. And this not only enhanced the summit itself, but hopefully will be of benefit to the Perl toolchain in general over the years.
PTS would not be possible without a considerable amount of support from many people. Obviously this includes the folk who give up their time to attend. But I'd also like to recognise Salve, who kicked the whole thing off, the organisers of every QAH/PTS since then for keeping the torch burning and, if I may mix my metaphors even further, the organisers of this year's event for knocking the ball out of the park. Big thanks to
Booking.com, WebPros, CosmoShop, Datensegler, OpenCage, SUSE, Simplelists Ltd, Ctrl O Ltd, Findus Internet-OPAC, plusW GmbH
Grant Street Group, Fastmail, shift2, Oleeo, Ferenc Erki
The Perl and Raku Foundation, Japan Perl Association, Harald Joerg, Alexandros Karelas PerlModules.net, Matthew Persico, Michele Beltrame Sigmafin, Rob Hall, Joel Roth, Richard Leach, Jonathan Kean, Richard Loveland, Bojan Ramsa
Published by Tobenna Oduah on Tuesday 06 May 2025 04:00
Published on Sunday 04 May 2025 12:25
The examples used here are from the weekly challenge problem statement and demonstrate the working solution.
You are given a list of words containing alphabetic characters only. Write a script to return the count of words either starting with a vowel or ending with a vowel.
Our solution will be pretty short, contained in just a single file that has the following structure.
The preamble is just whatever we need to include. Here we aren’t using anything special, just specifying the latest Perl version.
The main section is just some basic tests.
MAIN:{
say word_count qw/unicode xml raku perl/;
say word_count qw/the weekly challenge/;
say word_count qw/perl python postgres/;
}
◇
Fragment referenced in 1.
All the work is done in the count section which contains a single small subroutine.
For clarity we’ll break that vowel check into its own code section. It’s not too hard. We use the beginning and ending anchors (^, $) to see if there is a character class match at the beginning or end of the word.
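The subroutine body itself isn’t reproduced in this excerpt, so here is a minimal sketch of a word_count consistent with the description and the tests above (the use of grep and the exact character class are assumptions, not necessarily the author’s code):

use v5.38;

# Count the words that either start with a vowel or end with a vowel.
sub word_count {
    my @words = @_;
    return scalar grep { /^[aeiou]/ || /[aeiou]$/ } @words;
}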
$ perl perl/ch-1.pl
2
2
0
You are given two arrays of integers. Write a script to return the minimum integer common to both arrays. If none found return -1.
As in the first part, our solution will be pretty short, contained in just a single file that has the following structure.
(The preamble is going to be the same as before; we don’t need anything extra for this problem either.)
The main section just drives a few tests.
The subroutine that gets the bulk of the solution started is in this section.
The real work is done in this section. We determine the unique elements by creating two separate hashes and then, using the keys to each hash, count the number of common elements. We then sort the common elements, if there are any, and set $minimum to be the smallest one.
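That subroutine isn’t shown in this excerpt; a minimal sketch following the description above might look like this (the hash-based intersection is the described approach, the exact variable names are assumptions):

sub minimum_common {
    my ($x, $y) = @_;
    # Determine the unique elements of each array via hashes.
    my %in_x = map { $_ => 1 } @{$x};
    my %in_y = map { $_ => 1 } @{$y};
    # Collect the common elements and sort them numerically.
    my @common = sort { $a <=> $b } grep { $in_y{$_} } keys %in_x;
    # The minimum is the smallest common element, or -1 if there is none.
    my $minimum = @common ? $common[0] : -1;
    return $minimum;
}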
MAIN:{
say minimum_common [1, 2, 3, 4], [3, 4, 5, 6];
say minimum_common [1, 2, 3], [2, 4];
say minimum_common [1, 2, 3, 4], [5, 6, 7, 8];
}
◇
Fragment referenced in 6.
$ perl perl/ch-2.pl
3
2
-1
Published by prz on Saturday 03 May 2025 15:09
Published on Wednesday 30 April 2025 12:00
The first post in this series introduced us to Map::Tube. There, we built the fundamental structure of the Map::Tube::Hannover module and created the basic map file for the Hannover tram network. This time, we’ll look at a map file’s structure and extend the network. At the end, we’ll visualise a graph of the railway network we’ve created so far.

Now that we’ve created a basic map, our goal is to understand the structure of Map::Tube maps a bit more. This way we won’t trip up when extending our map to a fuller network.
As a reminder, the map file we have at present looks like this:
{
"name" : "Hannover",
"lines" : {
"line" : [
{
"id" : "L1",
"name" : "Linie 1"
}
]
},
"stations" : {
"station" : [
{
"id" : "H1",
"name" : "Hauptbahnhof",
"line" : "L1",
"link" : "H2"
},
{
"id" : "H2",
"name" : "Langenhagen",
"line" : "L1",
"link" : "H1"
}
]
}
}
Let’s consider each of the elements in this map.
Each map can have a name. This is a string providing a human-readable name of the map, specified by the name attribute. Even though a map doesn’t necessarily have to have a name, it’s very handy to have, hence I’ve included it here.

Each map has one lines attribute which contains a line array containing the individual lines in the network. There should be at least one line in the map. Each line has to have an ID (specified by the id attribute) and a name (specified by the name attribute).

A valid map must also have at least two stations on one line. We define stations in the stations attribute. It contains a station array of individual stations. A given station must have an ID (given by the id attribute) and a name (given by the name attribute). We must assign a station to at least one of the lines specified in the lines attribute. It should also link to at least one other station by specifying the relevant ID in its link attribute.

There are more things that a map file can contain. For instance, a line can have a color attribute to specify its colour. Also, stations can link to other stations indirectly by using the other_link attribute. This way we can represent connections via something like a tunnel, passageway or escalator.

For all the gory details, check out the formal requirements for maps section of the Map::Tube::Cookbook documentation.
Where to from here? Well, Linie 1 needs more stations to better reflect the situation in real life. Also, the network needs more lines as well as connections between those lines, again reflecting reality better. Fortunately, these are things we can test, so we’ll expand the test suite as we go along.

We’d best get on with it then!
We currently only have two stations in our network. That’s not enough! To flesh things out a bit, let’s add some more stations along Linie 1. To be specific, let’s add the other terminal station on that line, Sarstedt, as well as stations between Hauptbahnhof and the respective terminal stations. I’ve decided to choose Kabelkamp on the north side and Laatzen on the south side.
One thing that we’re going to have to be careful with is giving each station an ID. How are we going to do that in some kind of systematic way? The first thing I thought of was to go left to right across the network. By this, I mean that the station furthest in the west along the given line is what I shall consider to be the first station along that line. This decision is arbitrary, but it should be good enough for our purposes.
Although it’s not clear from the Üstra network plan,1 it turns out that Langenhagen is further west than Sarstedt. So, I chose Langenhagen to be the first station along that line. Because this is also the first line in the network, Langenhagen has the honour of having the first station ID in the map file.
Thus, for Linie 1, we have these stations, their respective new labels, and their links:
| Station      | ID | Links |
|--------------|----|-------|
| Langenhagen  | H1 | H2    |
| Kabelkamp    | H2 | H1,H3 |
| Hauptbahnhof | H3 | H2,H4 |
| Laatzen      | H4 | H3,H5 |
| Sarstedt     | H5 | H4    |
Our goal for now is to have these stations connected along Linie 1 with their respective IDs and links. We’ll achieve this goal in small steps so that we’re not changing too much in one go.

Note also that I’m doing all this work by hand. This allows us to see how all the pieces fit together, which is one main goal of this HOWTO. In reality, however, a railway network is much more complex and has many more stations. Thus, to create the full network, one would need an automated way to extract line and station data from e.g. OpenStreetMap. We would then collect this information and export it in the form that Map::Tube needs. But that’s not important right now, so we’ll continue adding stations and lines manually.
Each station in our current network describes the connection it has to other stations via its link attribute. Stations at the ends of the line only link to one other station because that’s what it means for a station to be at the end of a line. The remaining stations link to two stations each, connecting one end of the line to the other like the proverbial string of pearls. In general, one can have many links at a given station, especially if many lines cross at a given station. Right now we don’t need this extra complexity and will keep things simple.
Let’s kick-start the implementation of the full list of stations for Linie 1 by adding the station Sarstedt. To get the ball rolling, we’ll start (as usual) with a test.

What we want to check is that there is a route from Langenhagen to Sarstedt and that stations along that route match our expectations. How do we go about doing this? Again, the Map::Tube framework comes to our rescue. It provides the ok_map_routes() assertion in the Test::Map::Tube module which we can use to check our route. Also, the docs help again, by providing a simple route-checking example.

Before we add this test, let’s remove some code duplication in our test suite. Note that currently, we’re creating a Map::Tube::Hannover object twice in the tests. Really, we only need to do that once. Let’s instantiate a single object and pass that to our test functions.
Assign a variable called $hannover to the instantiated Map::Tube::Hannover object before the ok_map*() functions, like so:
my $hannover = Map::Tube::Hannover->new;
Then replace Map::Tube::Hannover->new in the calls to the ok_map*() functions with the new variable:
ok_map($hannover);
ok_map_functions($hannover);
Our test file (t/map-tube-hannover.t) now looks like this:
use strict;
use warnings;
use Test::More;
use Map::Tube::Hannover;
use Test::Map::Tube;
my $hannover = Map::Tube::Hannover->new;
ok_map($hannover);
ok_map_functions($hannover);
done_testing();
Running this test with prove, we find that the tests still pass.
$ prove -lr t/map-tube-hannover.t
t/map-tube-hannover.t .. ok
All tests successful.
Files=1, Tests=2, 0 wallclock secs ( 0.04 usr 0.00 sys + 0.48 cusr 0.05 csys = 0.57 CPU)
Result: PASS
Great! That’s worthy of a commit:
$ git commit -m "Extract repeated object instantiation into variable in tests" t/map-tube-hannover.t
[main f9005d1] Extract repeated object instantiation into variable in tests
1 file changed, 4 insertions(+), 2 deletions(-)
Now we’re ready to check the route from Langenhagen to Sarstedt via Hauptbahnhof. We start by defining an array of strings containing route descriptions:
my @routes = (
"Route 1|Langenhagen|Sarstedt|Langenhagen,Hauptbahnhof,Sarstedt",
);
Although this array only contains one element, we’ll be extending it later, so using a plural name is ok in this situation. Also, the ok_map_routes() test function requires an array reference as one of its arguments, hence creating an array now is sensible forward-thinking.
We check the route by calling ok_map_routes() like so:
ok_map_routes($hannover, \@routes);
The complete test file is now:
use strict;
use warnings;
use Test::More;
use Map::Tube::Hannover;
use Test::Map::Tube;
my $hannover = Map::Tube::Hannover->new;
ok_map($hannover);
ok_map_functions($hannover);
my @routes = (
"Route 1|Langenhagen|Sarstedt|Langenhagen,Hauptbahnhof,Sarstedt",
);
ok_map_routes($hannover, \@routes);
done_testing();
We don’t expect the tests to pass. Even so, what feedback do they give us?
$ prove -lr t/map-tube-hannover.t
t/map-tube-hannover.t .. 1/? Map::Tube::get_node_by_name(): ERROR: Invalid Station Name [Sarstedt]. (status: 101) file /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/lib/site_perl/5.38.3/Map/Tube.pm on line 897
# Tests were run but no plan was declared and done_testing() was not seen.
# Looks like your test exited with 255 just after 2.
t/map-tube-hannover.t .. Dubious, test returned 255 (wstat 65280, 0xff00)
All 2 subtests passed
Test Summary Report
-------------------
t/map-tube-hannover.t (Wstat: 65280 (exited 255) Tests: 2 Failed: 0)
Non-zero exit status: 255
Parse errors: No plan found in TAP output
Files=1, Tests=2, 1 wallclock secs ( 0.03 usr 0.01 sys + 0.47 cusr 0.05 csys = 0.56 CPU)
Result: FAIL
Ok, so Map::Tube::Hannover doesn’t know about the station Sarstedt. Let’s update the map file and add a station entry for Sarstedt, linking it to Hauptbahnhof at this step. We’ll also relabel Langenhagen and Hauptbahnhof and reorder the entries within the map file so that Langenhagen is at the top and Sarstedt at the bottom, thus matching the north-south direction of this line. Remember that the links I’m using right now aren’t those that we want in the end: this is merely a step along that path.
Our list of stations now looks like this:
"stations" : {
"station" : [
{
"id" : "H1",
"name" : "Langenhagen",
"line" : "L1",
"link" : "H3"
},
{
"id" : "H3",
"name" : "Hauptbahnhof",
"line" : "L1",
"link" : "H1,H5"
},
{
"id" : "H5",
"name" : "Sarstedt",
"line" : "L1",
"link" : "H3"
}
]
}
Testing this change:
$ prove -lr t/map-tube-hannover.t
t/map-tube-hannover.t .. ok
All tests successful.
Files=1, Tests=3, 1 wallclock secs ( 0.06 usr 0.00 sys + 0.49 cusr 0.05 csys = 0.60 CPU)
Result: PASS
Great! We’ve got a working route from Langenhagen to Sarstedt! Let’s add in the stations that we left out in the last step.
Update the routes test to this:
my @routes = (
"Route 1|Langenhagen|Sarstedt|Langenhagen,Kabelkamp,Hauptbahnhof,Laatzen,Sarstedt",
);
ok_map_routes($hannover, \@routes);
And check what the test output tells us:
$ prove -lr t/map-tube-hannover.t
t/map-tube-hannover.t .. 1/? Map::Tube::get_node_by_name(): ERROR: Invalid Station Name [Kabelkamp]. (status: 101) file /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/lib/site_perl/5.38.3/Test/Map/Tube.pm on line 1434
# Tests were run but no plan was declared and done_testing() was not seen.
# Looks like your test exited with 255 just after 2.
t/map-tube-hannover.t .. Dubious, test returned 255 (wstat 65280, 0xff00)
All 2 subtests passed
Test Summary Report
-------------------
t/map-tube-hannover.t (Wstat: 65280 (exited 255) Tests: 2 Failed: 0)
Non-zero exit status: 255
Parse errors: No plan found in TAP output
Files=1, Tests=2, 0 wallclock secs ( 0.03 usr 0.00 sys + 0.50 cusr 0.04 csys = 0.57 CPU)
Result: FAIL
Ok, Kabelkamp is missing. Adding its entry to the map file after Langenhagen, and updating the links for the Langenhagen and Hauptbahnhof stations, we now have this stations list:
"stations" : {
"station" : [
{
"id" : "H1",
"name" : "Langenhagen",
"line" : "L1",
"link" : "H2"
},
{
"id" : "H2",
"name" : "Kabelkamp",
"line" : "L1",
"link" : "H1,H3"
},
{
"id" : "H3",
"name" : "Hauptbahnhof",
"line" : "L1",
"link" : "H2,H5"
},
{
"id" : "H5",
"name" : "Sarstedt",
"line" : "L1",
"link" : "H3"
}
]
}
and re-running the tests, we get:
$ prove -lr t/map-tube-hannover.t
t/map-tube-hannover.t .. 1/? Map::Tube::get_node_by_name(): ERROR: Invalid Station Name [Laatzen]. (status: 101) file /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/lib/site_perl/5.38.3/Test/Map/Tube.pm on line 1434
# Tests were run but no plan was declared and done_testing() was not seen.
# Looks like your test exited with 255 just after 2.
t/map-tube-hannover.t .. Dubious, test returned 255 (wstat 65280, 0xff00)
All 2 subtests passed
Test Summary Report
-------------------
t/map-tube-hannover.t (Wstat: 65280 (exited 255) Tests: 2 Failed: 0)
Non-zero exit status: 255
Parse errors: No plan found in TAP output
Files=1, Tests=2, 0 wallclock secs ( 0.03 usr 0.00 sys + 0.48 cusr 0.05 csys = 0.56 CPU)
Result: FAIL
The tests are still failing, but we’re getting the failure that we expect to see. In other words, we expect to see the error about Kabelkamp disappear but expect to see an error about Laatzen. This is because we haven’t added Laatzen yet. Adding the station entry for Laatzen and fixing up the links in the stations list, we get:
"stations" : {
"station" : [
{
"id" : "H1",
"name" : "Langenhagen",
"line" : "L1",
"link" : "H2"
},
{
"id" : "H2",
"name" : "Kabelkamp",
"line" : "L1",
"link" : "H1,H3"
},
{
"id" : "H3",
"name" : "Hauptbahnhof",
"line" : "L1",
"link" : "H2,H4"
},
{
"id" : "H4",
"name" : "Laatzen",
"line" : "L1",
"link" : "H3,H5"
},
{
"id" : "H5",
"name" : "Sarstedt",
"line" : "L1",
"link" : "H4"
}
]
}
You’ll find that the tests now pass:
$ prove -lr t/map-tube-hannover.t
t/map-tube-hannover.t .. ok
All tests successful.
Files=1, Tests=3, 0 wallclock secs ( 0.03 usr 0.00 sys + 0.51 cusr 0.03 csys = 0.57 CPU)
Result: PASS
Yay!
That’s worth another commit:
$ git commit -m "Extend list of stations on Linie 1" share/hannover-map.json t/map-tube-hannover.t
[main a742db3] Extend list of stations on Linie 1
2 files changed, 27 insertions(+), 3 deletions(-)
To visualise what our map looks like, we can use the Map::Tube::Plugin::Graph plugin. Let’s install the plugin and see what it does:
$ cpanm Map::Tube::Plugin::Graph
Note that you might need to install Graphviz before installing the plugin, e.g.:
$ sudo apt install graphviz
We’re going to create a small program to convert our Map::Tube map into a PNG image of a Graphviz graph. To keep things nice and tidy, let’s create a bin/ directory to keep our program in:
$ mkdir bin
Now, with your favourite editor, open a file called bin/map2image.pl and enter into it the following code:
use strict;
use warnings;
use lib qw(lib);
use Map::Tube::Hannover;
my $hannover = Map::Tube::Hannover->new;
my $map_name = $hannover->name;
open(my $map_image, ">", "$map_name.png")
or die "ERROR: Can't open $map_name.png: $!";
binmode($map_image);
print $map_image $hannover->as_png;
close($map_image);
# vim: expandtab shiftwidth=4
In this program, we specify the location of the lib directory explicitly. This saves us from having to use -I lib when invoking perl on the command line. We then import our Map::Tube::Hannover module and instantiate a new Map::Tube::Hannover object. We also save the map’s name for later use as part of the output image filename.

Then we open the output file and barf if something went wrong. Since the output image is a PNG, we need to set the output mode to binary. After that, we print the output of the Map::Tube::Plugin::Graph plugin’s as_png() method2 to file and close the file.
Running our new program like so:
$ perl bin/map2image.pl
produces an image file called Hannover.png in the base project directory. Opening this image in an image viewer, you should see output similar to this:
Nice!
It’s fairly obvious from the input map file that our network is a straight line. Even so, it’s nice to see this in an image rather than having to deduce it from only the map file’s structure.
It’ll be handy having such a tool around when developing the map further, so let’s add it to the repository and commit that change:
$ git commit -m "Add program to convert map into a PNG image"
[main bb709e8] Add program to convert map into a PNG image
1 file changed, 17 insertions(+)
create mode 100644 bin/map2image.pl
We didn’t do as much this time, but that’s OK. We put in a lot of work in
the previous post getting everything up and running, so taking it easy for a
bit will let us catch our breath. Still, we weren’t mucking around. We
used test-driven development to extend the tram network map for Hannover and
wrote a small program to visualise it. We’re making steady progress toward
our goal: a working Map::Tube
map that we can use to find our way from
station to station.
The next post in the series will describe how to add more lines to the network as well as how to use colour to tell them apart. Until then, keep cool till after school!
Originally posted on https://peateasea.de.
Image credits: Hannover coat of arms: Wikimedia Commons, U-Bahn symbol: Wikimedia Commons, Langenhagen coat of arms: Wikimedia Commons, Sarstedt coat of arms: Wikimedia Commons
Thumbnail credits: Swiss Cottage Underground Station (Jubilee Line) by Hugh Llewelyn
You’ll need to follow that link and then download the PDF for “Netzplan U”. A direct link would get outdated very quickly, so I thought it best only to mention how to get the right info. The “U” in “Netzplan U” stands for U-Bahn: i.e. the “underground” tram network. I use quotes around “underground” here because only the very centre of the network is underground; the rest is overground. This is why I use the English term “tram” rather than “subway”, because “tram” fits reality so much better. Also, I think there’s a certain class of train geek which takes exception at calling such a railway network an U-Bahn.
As far as I can tell, the as_png() method gets monkey patched onto the Map::Tube role, hence why we can call it from our Map::Tube::Hannover object.
Published by Marco Pessotto on Tuesday 29 April 2025 00:00
When we are dealing with legacy applications, it’s very possible that the code we are looking at does not deal with Unicode characters, instead assuming all text is ASCII. This will cause a myriad of glitches and visual errors.
In 2025, after more than 30 years since Unicode was born, how is that possible that old applications still survive while ignoring or working around the whole issue?
Well, if your audience is mainly English speaking, it’s possible that you just experience occasional glitches with some characters like typographical quotes, non-breaking spaces, etc., which are not really mission-critical. If, on the contrary, you need to deal every day with diacritics or even different languages (say, Italian and Slovenian), your application simply won’t survive without a good understanding of encoding.
In this article we are going to focus on Perl, but other languages face the same problems.
As we know, machines work with numbers and bytes. A string of text is made of bytes, and each of them is 8 bits (each bit is a 0 or a 1). So one byte allows 256 possible combinations of bits.
Plain ASCII is made up of 128 characters (7 bits), so it fits nicely in one byte, leaving room for more. One character is exactly one byte, and one byte carries a character.
However, ASCII is not enough for most of languages, even if they use the Latin alphabet, because they use diacritics like é, à, č, and ž.
To address this problem, the ISO 8859 encoding standards appeared (there are others, like the Windows code pages, using the same idea but different code points). These standards use the 8th bit not used by ASCII, still using a single byte for each character but doubling the combinations from ASCII, allowing 256 possible characters. That’s better, but still not great. It suffices for handling text in a couple of languages if they share the same characters, but not more. For this reason, there are various ISO 8859 encoding standards (8859-1, 8859-2, etc.), one for each group of related languages (e.g. 8859-1 is for Western Europe, 8859-2 for Central Europe and so on), and there are even revisions of the same encoding, like 8859-15 and 8859-16.

The problem is that if you have a random string, you have to guess which is the correct encoding. The same byte value could represent an “È” or a “Č”. You need to look at the context (which language is this?) or search for an encoding declaration. Most importantly, you are simply not able to type È and Č in the same plain text document. If your company works in Italy using the 8859-15 encoding, it means you can’t even accept the correct name of a customer from Slovenia, a neighbouring country, because the encoding simply doesn’t have a place for characters with a caron (like “č”) and you have to work around this real problem.
So finally came the Unicode age. This standard allows for more than a million characters, which should be enough. You can finally type English, Italian, Russian, Arabic, and emojis all in the same plain text. This is truly great, but it creates a complication for the programmer: the assumption that one byte is one character is not true anymore. The common encoding for Unicode is UTF-8, which is also backward compatible with ASCII. This means that if you have ASCII text, it is also valid UTF-8. Any other character which is not ASCII will instead take from two to four bytes and the programming language needs to be aware of this.
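As a quick illustration of that complication (this snippet is not from the original article):

use utf8;                       # the source code of this snippet is UTF-8 encoded
use Encode qw(encode);

my $str   = "ÈČ";               # two characters
my $bytes = encode("UTF-8", $str);

print length($str), "\n";       # 2 -- the length in characters
print length($bytes), "\n";     # 4 -- the length in bytes (two bytes per character here)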
Text manipulation is a very common task. If you need to process a string, say “ÈČ”, like in this document, you should be able to tell that it is a string with two characters representing two letters. You want to be able to use regular expressions on it, and so on.
Now, if we read it as a string of bytes, we get 4 of them and the newline, which is not what we want.
Let’s see an example:
#!/usr/bin/env perl
use strict;
use warnings;
use Data::Dumper::Concise;
# sample.txt contains ÈČ and a new line
{
open my $fh, '<', 'sample.txt';
while (my $l = <$fh>) {
print $l;
if ($l =~ m/\w\w/) {
print "Found two characters\n"
}
print Dumper($l);
}
close $fh;
}
{
open my $fh, '<:encoding(UTF-8)', 'sample.txt';
while (my $l = <$fh>) {
print $l;
if ($l =~ m/\w\w/) {
print "Found two characters\n"
}
print Dumper($l);
}
close $fh;
}
This is the output:
ÈČ
"\303\210\304\214\n"
Wide character in print at test.pl line 24, <$fh> line 1.
ÈČ
Found two characters
"\x{c8}\x{10c}\n"
In the first block the file is read verbatim, without any decoding. The regular expression doesn’t work; we have basically 4 bytes which don’t seem to mean much.

In the second block we decoded the input, converting it into the Perl internal representation. Now we can use regular expressions and have a consistent approach to text manipulation.

In the second block, we also got a warning:

Wide character in print at test.pl line 24, <$fh> line 1

That’s because we printed something to the screen, but given that the string is now made up of characters (decoded for internal use), Perl warns us that we need to encode it back to bytes (for the outside world to consume). A wide character is basically a character which needs to be encoded.
This can either be done by calling the encode() function from the Encode module:
use strict;
use warnings;
use Encode;
print encode("UTF-8", "\x{c8}\x{10c}\n");
Or, better, by declaring the global encoding for the standard output:
use strict;
use warnings;
binmode STDOUT, ":encoding(UTF-8)";
print "\x{c8}\x{10c}\n";
So, the golden rule is: decode the data coming in from the outside world, work with decoded (character) strings inside the program, and encode the data again on the way out.

Any other approach is going to lead to double encoded characters (seeing things like à and Ä in English text is a clear symptom of this), corrupted text, and confusion.
If you are dealing with standard input/output on the shell, you should have this in your script:
binmode STDIN, ":encoding(UTF-8)";
binmode STDOUT, ":encoding(UTF-8)";
binmode STDERR, ":encoding(UTF-8)";
So you’re decoding on input and encoding on output automatically.
For files, you can add the layer in the second argument of open like in the sample script above, or use a handy module like Path::Tiny, which provides methods like slurp_utf8 and spew_utf8 to read and write files using the correct encoding.
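For instance, a brief sketch of the Path::Tiny approach (the file names here are just placeholders):

use Path::Tiny;

# slurp_utf8 decodes UTF-8 on read; spew_utf8 encodes UTF-8 on write.
my $text = path("sample.txt")->slurp_utf8;
path("copy.txt")->spew_utf8($text);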
Interactions with web frameworks should always happen with the internal Perl representation. When you receive the input from a form, it should be considered already decoded. It’s also the framework’s responsibility to handle the encoding on output. Here at End Point we have many Interchange applications. Interchange can support this, via the MV_UTF8 variable.
The same rules apply to databases. It’s the responsibility of the driver to take your strings and encode/decode them when talking to the database. E.g. DBD::Pg has the pg_enable_utf8 option, while DBD::mysql has mysql_enable_utf8. These options should usually be turned on or off explicitly. Not specifying the option is usually a source of confusion because of the heuristic approach it requires for understanding the code.
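As a rough sketch (the DSNs and credentials are placeholders, and driver defaults vary between versions), these flags can be passed as connection attributes:

use DBI;

# PostgreSQL: ask DBD::Pg to hand strings back and forth as decoded characters.
my $pg_dbh = DBI->connect("dbi:Pg:dbname=myapp", "user", "password",
                          { pg_enable_utf8 => 1, RaiseError => 1 });

# MySQL: the equivalent flag for DBD::mysql.
my $my_dbh = DBI->connect("dbi:mysql:database=myapp", "user", "password",
                          { mysql_enable_utf8 => 1, RaiseError => 1 });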
It may not be the most correct approach, but I’ve been using Dumper for more than a decade and it works. You simply use Data::Dumper or Data::Dumper::Concise and call Dumper on the string you want to examine.

If you see hexadecimal codepoints like \x{c8}\x{10c}, it means the string is decoded and you’re working with the characters. If you see the raw bytes or characters with diacritics (the latter would happen if the terminal is interpreting the bytes and showing you the characters), you’re dealing with an encoded string. If you see weird characters in an English context, it probably means the text has been encoded more than once.
If you’re still using legacy encoding systems like ISO 8859 or the similar Windows code pages, or worse, you simply don’t know and you’re relying on the browsers’ heuristics (they’re quite good at guessing), you should change the whole application to handle the input and the output correctly, converting any existing data to UTF-8 along the way (a tool like iconv should do the trick).

This looks like a challenging task, and it can be, but it’s totally worth it because fancy and well-supported characters nowadays are the norm. Typographical quotes like “this” and ‘this’ are very common and inserted by word processors automatically. So are emojis. People and customers simply expect them to work.
If your client is on a budget or can’t deal with a large upgrade like this one, which has the potential to be disruptive and expose bugs which are lurking around, you can try to downgrade the Unicode characters to ASCII with tools like Text::Unidecode (which has been ported to other languages as well). So typographical quotes will become the plain ASCII ones, diacritics will be stripped, and various other characters will get their ASCII representation. Not great, but better than dealing with unexpected behavior!
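A small sketch of that downgrade approach (the sample string is invented):

use utf8;
use Text::Unidecode;

# Typographical quotes and diacritics are flattened to plain ASCII lookalikes.
print unidecode("“Fancy quotes” and café\n");   # prints: "Fancy quotes" and cafe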
Published on Sunday 27 April 2025 19:21
The examples used here are from the weekly challenge problem statement and demonstrate the working solution.
You are given a string of lowercase letters. Write a script to find the position of all groups in the given string. Three or more consecutive letters form a group. Return “” if none found.
Here’s our one subroutine, this problem requires very little code.
sub groupings{
    my($s) = @_;
    my @groups;
    my @group;
    my($current, $previous);
    my @letters = split //, $s;
    $previous = shift @letters;
    @group = ($previous);
    do {
        $current = $_;
        if($previous eq $current){
            push @group, $current;
        }
        if($previous ne $current){
            if(@group >= 3){
                push @groups, [@group];
            }
            @group = ($current);
        }
        $previous = $current;
    } for @letters;
    if(@group >= 3){
        push @groups, [@group];
    }
    my @r = map {q/"/ . join(q//, @{$_}) . q/"/} @groups;
    return join(q/, /, @r) || q/""/;
}
◇
Fragment referenced in 2.
Putting it all together...
The rest of the code just runs some basic tests.
MAIN:{
say groupings q/abccccd/;
say groupings q/aaabcddddeefff/;
say groupings q/abcdd/;
}
◇
Fragment referenced in 2.
$ perl perl/ch-1.pl
"cccc"
"aaa", "dddd", "fff"
""
You are given two arrays of integers, each containing the same elements as the other. Write a script to return true if one array can be made to equal the other by reversing exactly one contiguous subarray.
Here’s the process we’re going to follow.
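The first step, collecting the indices at which the two arrays differ into $indices_different, isn’t reproduced in this excerpt. Purely as a hypothetical sketch (the variable names are chosen to match the code below; everything else is an assumption):

my ($s, $t) = @_;                  # the two array references passed in
my ($u, $v) = ([@{$s}], [@{$t}]);  # working copies we can modify later
my $indices_different = [];
for my $i (0 .. @{$s} - 1) {
    # record every position where the two arrays disagree
    push @{$indices_different}, $i if $s->[$i] != $t->[$i];
}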
Now let’s check and see how many differences were found.
return 1 if @{$indices_different} == 0;
$indices_different = [sort {$a <=> $b} @{$indices_different}];
my $last_i = $indices_different->[@{$indices_different} - 1];
my $length = 1 + $last_i - $indices_different->[0];
my @u_ = reverse @{$u}[$indices_different->[0] .. $last_i];
my @v_ = reverse @{$v}[$indices_different->[0] .. $last_i];
splice @{$u}, $indices_different->[0], $length, @u_;
splice @{$v}, $indices_different->[0], $length, @v_;
return 1 if join(q/,/, @{$u}) eq join(q/,/, @{$t});
return 1 if join(q/,/, @{$v}) eq join(q/,/, @{$s});
return 0;
◇
The rest of the code combines the previous steps and drives some tests.
MAIN:{
say reverse_equals [3, 2, 1, 4], [1, 2, 3, 4];
say reverse_equals [1, 3, 4], [4, 1, 3];
say reverse_equals [2], [2];
}
◇
Fragment referenced in 7.
$ perl perl/ch-2.pl
1
0
1
Published by prz on Saturday 26 April 2025 14:29
Published by Ume Aiman Rajput on Thursday 24 April 2025 13:18
While modern programming languages have gained popularity, Perl remains a powerful, reliable, and versatile tool for text processing…
Mohammad Sajid Anwar’s post in last year’s Perl Advent Calendar about his Map::Tube module intrigued me. I decided I wanted to build such a map for the tram network in the city where I live: Hannover, Germany. Along the way, I thought it’d be nice to have a detailed HOWTO explaining the steps needed to create a Map::Tube map for one’s city of interest. Since I enjoy explaining things in detail, this got … long. So I broke it up into parts.

Welcome to the first post in a five-part series about how to create Map::Tube maps.
Originally I wrote this as a single post, which made it, you might say, rather protracted. I’ve thus split it up into five separate posts, each building upon the previous. This way each is more digestible and hopefully the reader doesn’t–in the words of P.D.Q. Bach–fall into a confused slumber. Let’s see how I manage…
In this five-part series, we’re going to start with the most basic Map::Tube map we can create (this post), then dig into the structure of Map::Tube map files and extend the map to more stations along the first line, displaying a graph of the line.

This first post is the longest because I spend time discussing how to set up a module from scratch. Experienced readers can skip this section if they so wish and go directly to the section about building the Map::Tube map file guided by tests.
As I mentioned in my post about finding all tram stops in Hannover, Mohammad Sajid Anwar’s Perl Advent Calendar article about his Perl-based routing network module for railway systems interested me and I wanted to create my own. This series of posts will use Hannover as the main focus to show you how to build Map::Tube maps, giving you the information you need to create your own.

There’s a lot to get through, so we’d better get started!
Each map for a given railway network is a Perl module in its own right. Hence, the first thing we need to do is create a stub module for our project. Maps for specific cities follow the same naming pattern: Map::Tube::<city-name>. Their project directories follow a similar naming pattern: Map-Tube-<city-name>. Thus, for our current example, the goal is to create a module called Map::Tube::Hannover within a directory named Map-Tube-Hannover. Let’s do that now.
For the rest of the discussion, I’m going to assume that you have a recent perlbrew-ed Perl1 and that you’ve set that all up properly.
As mentioned in the perlnewmod documentation, the recommended way to create a new stub module (including its files and directory layout) is to use the module-starter program. This isn’t distributed with Perl, so we have to install it before we can use it. It’s part of the Module::Starter distribution; install it now with cpanm:
$ cpanm Module::Starter
To create our stub Map::Tube::Hannover module we run module-starter, giving it some required module meta-data:
$ module-starter --module=Map::Tube::Hannover --author="Paul Cochrane" --email=ptc@cpan.org \
--ignores=git --ignores=manifest
Created starter directories and files
The --ignores=git and --ignores=manifest options create .gitignore and MANIFEST.SKIP files for us. Thus, anything we don’t need in the repository or the final CPAN distribution is skipped and ignored from the get-go. This is handy as it saves mucking about with admin stuff when we could be getting going with our shiny new module.
The module-starter command created a directory called Map-Tube-Hannover in the current directory and filled it with some standard files every Perl distribution/module should have. Let’s enter the directory and see what we’ve got.
$ cd Map-Tube-Hannover
$ tree
.
├── Changes
├── lib
│ └── Map
│ └── Tube
│ └── Hannover.pm
├── Makefile.PL
├── MANIFEST.SKIP
├── README
├── t
│ ├── 00-load.t
│ ├── manifest.t
│ ├── pod-coverage.t
│ └── pod.t
└── xt
└── boilerplate.t
5 directories, 10 files
We see that module-starter created a Perl module file (lib/Map/Tube/Hannover.pm) for our planned Map::Tube::Hannover module. The command also created the associated (sub-)directory structure, a test directory with some useful initial tests, as well as various module-related build and information files.
This is a great starting point, so let’s save this state by creating a Git repository in this directory and adding the files to the repo in an initial commit.2
$ git init
Initialized empty Git repository in /path/to/Map-Tube-Hannover/.git/
$ git add .
$ git commit -m "Initial import of Map::Tube::Hannover stub module files"
[main (root-commit) 7bd778e] Initial import of Map::Tube::Hannover stub module files
11 files changed, 380 insertions(+)
create mode 100644 .gitignore
create mode 100644 Changes
create mode 100644 MANIFEST.SKIP
create mode 100644 Makefile.PL
create mode 100644 README
create mode 100644 lib/Map/Tube/Hannover.pm
create mode 100644 t/00-load.t
create mode 100644 t/manifest.t
create mode 100644 t/pod-coverage.t
create mode 100644 t/pod.t
create mode 100644 xt/boilerplate.t
If you want to follow along with how I built things, the Git repo for this project is on GitHub.
Personally, I love tests. They help reduce risk and (if the project has high test coverage) give me confidence that the code is doing what I expect it to do. They also help me be more fearless when refactoring a codebase. A good test suite can make for a wonderful development experience.
So, before we start implementing things, let’s build the project and run the test suite so that we know that everything is working as we expect. Yes, I expect the authors of Module::Starter will have created everything correctly, but it’s a good feeling to know that one is starting from a solid foundation before changing anything.
To build the project, we create its Makefile by running Makefile.PL with perl. Then we simply call make test:
$ perl Makefile.PL
Generating a Unix-style Makefile
Writing Makefile for Map::Tube::Hannover
Writing MYMETA.yml and MYMETA.json
$ make test
cp lib/Map/Tube/Hannover.pm blib/lib/Map/Tube/Hannover.pm
PERL_DL_NONLAZY=1 "/home/cochrane/perl5/perlbrew/perls/perl-5.38.3/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
t/00-load.t ....... 1/? # Testing Map::Tube::Hannover 0.01, Perl 5.038003, /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/bin/perl
t/00-load.t ....... ok
t/manifest.t ...... skipped: Author tests not required for installation
t/pod-coverage.t .. skipped: Author tests not required for installation
t/pod.t ........... skipped: Author tests not required for installation
All tests successful.
Files=4, Tests=1, 0 wallclock secs ( 0.04 usr 0.01 sys + 0.34 cusr 0.03 csys = 0.42 CPU)
Result: PASS
Cool! The tests passed! Erm, ‘test’, I should say, as only one ran. That test showed that the module can be loaded (this is what t/00-load.t does). However, some of our tests didn’t run because they’re only to be run by module authors. To run these tests, we need to set the RELEASE_TESTING environment variable:
$ RELEASE_TESTING=1 make test
PERL_DL_NONLAZY=1 "/home/cochrane/perl5/perlbrew/perls/perl-5.38.3/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
t/00-load.t ....... 1/? # Testing Map::Tube::Hannover 0.01, Perl 5.038003, /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/bin/perl
t/00-load.t ....... ok
t/manifest.t ...... skipped: Test::CheckManifest 0.9 required
t/pod-coverage.t .. skipped: Test::Pod::Coverage 1.08 required for testing POD coverage
t/pod.t ........... skipped: Test::Pod 1.22 required for testing POD
All tests successful.
Files=4, Tests=1, 0 wallclock secs ( 0.02 usr 0.01 sys + 0.32 cusr 0.04 csys = 0.39 CPU)
Result: PASS
Hrm, the author tests were still skipped. We need to install some modules from CPAN to get everything running:
$ cpanm Test::CheckManifest Test::Pod::Coverage Test::Pod
This time the author tests run, but the t/manifest.t test fails:
$ RELEASE_TESTING=1 make test
PERL_DL_NONLAZY=1 "/home/cochrane/perl5/perlbrew/perls/perl-5.38.3/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
t/00-load.t ....... 1/? # Testing Map::Tube::Hannover 0.01, Perl 5.038003, /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/bin/perl
t/00-load.t ....... ok
t/manifest.t ...... Bailout called. Further testing stopped: Cannot find a MANIFEST. Please check!
t/manifest.t ...... Dubious, test returned 255 (wstat 65280, 0xff00)
Failed 1/1 subtests
FAILED--Further testing stopped: Cannot find a MANIFEST. Please check!
make: *** [Makefile:851: test_dynamic] Error 255
Weird! I didn’t expect that.
It turns out that we’ve not created an initial MANIFEST file. That’s easy to fix, though. We only need to run make with the manifest target:
$ make manifest
"/home/cochrane/perl5/perlbrew/perls/perl-5.38.3/bin/perl" "-MExtUtils::Manifest=mkmanifest" -e mkmanifest
Added to MANIFEST: Changes
Added to MANIFEST: lib/Map/Tube/Hannover.pm
Added to MANIFEST: Makefile.PL
Added to MANIFEST: MANIFEST
Added to MANIFEST: README
Added to MANIFEST: t/00-load.t
Added to MANIFEST: t/manifest.t
Added to MANIFEST: t/pod-coverage.t
Added to MANIFEST: t/pod.t
Added to MANIFEST: xt/boilerplate.t
So far, so good. Let’s see what the tests say now:
$ RELEASE_TESTING=1 make test
PERL_DL_NONLAZY=1 "/home/cochrane/perl5/perlbrew/perls/perl-5.38.3/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
t/00-load.t ....... 1/? # Testing Map::Tube::Hannover 0.01, Perl 5.038003, /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/bin/perl
t/00-load.t ....... ok
t/manifest.t ...... ok
t/pod-coverage.t .. ok
t/pod.t ........... ok
All tests successful.
Files=4, Tests=4, 0 wallclock secs ( 0.04 usr 0.00 sys + 0.41 cusr 0.05 csys = 0.50 CPU)
Result: PASS
That’s better!
You’ll note that although we’ve created some files not tracked by Git (e.g. the Makefile and MANIFEST files), the working directory is still clean:
$ git status
On branch main
nothing to commit, working tree clean
This is because the --ignores=git option passed to module-starter generates a .gitignore file which ignores the MANIFEST among other such files. Nice!
Since we installed some modules as part of getting everything running, we need to update our dependencies. These dependencies aren’t required to get the module up and running. Nor are they strictly required to test everything, because they’re tests for module authors, not for users of the module. However, since we’re creating a module, we’re our own module author, so it’s a good idea to set up the author tests. Thus, we need to specify them as recommended test-stage prerequisites. Neil Bowers has a good blog post about specifying dependencies for your CPAN distribution which describes things in more detail. For our case here, this boils down to inserting the following code at the end of the %WriteMakefileArgs hash in Makefile.PL:
# rest of %WriteMakefileArgs content
META_MERGE => {
"meta-spec" => { version => 2 },
prereqs => {
test => {
recommends => {
'Test::CheckManifest' => '0.9',
'Test::Pod::Coverage' => '1.08',
'Test::Pod' => '1.22',
},
},
},
},
Let’s try running the tests again to make sure that we haven’t broken anything:
$ RELEASE_TESTING=1 make test
Makefile out-of-date with respect to Makefile.PL
Cleaning current config before rebuilding Makefile...
make -f Makefile.old clean > /dev/null 2>&1
"/home/cochrane/perl5/perlbrew/perls/perl-5.38.3/bin/perl" Makefile.PL
Checking if your kit is complete...
Looks good
Generating a Unix-style Makefile
Writing Makefile for Map::Tube::Hannover
Writing MYMETA.yml and MYMETA.json
==> Your Makefile has been rebuilt. <==
==> Please rerun the make command. <==
false
make: *** [Makefile:809: Makefile] Error 1
Oops, we forgot to rebuild the Makefile. Let’s do that quickly:
$ perl Makefile.PL
Generating a Unix-style Makefile
Writing Makefile for Map::Tube::Hannover
Writing MYMETA.yml and MYMETA.json
Now the test suite runs and passes as we hope:
$ RELEASE_TESTING=1 make test
PERL_DL_NONLAZY=1 "/home/cochrane/perl5/perlbrew/perls/perl-5.38.3/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
t/00-load.t ....... 1/? # Testing Map::Tube::Hannover 0.01, Perl 5.038003, /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/bin/perl
t/00-load.t ....... ok
t/manifest.t ...... ok
t/pod-coverage.t .. ok
t/pod.t ........... ok
All tests successful.
Files=4, Tests=4, 0 wallclock secs ( 0.03 usr 0.01 sys + 0.42 cusr 0.05
csys = 0.51 CPU)
Result: PASS
Great! It’s time for another commit.
$ git commit -m "Add recommended test-stage dependencies" Makefile.PL
[main 819c069] Add recommended test-stage dependencies
1 file changed, 12 insertions(+)
Now that we’re sure our test suite is working properly (and we’ve got a clean working directory), we can start developing Map::Tube::Hannover by … adding another test! But where to start? Fortunately for us, the Map::Tube docs mention a basic data validation test as well as a basic functional validation test to ensure that the input data makes sense and that basic map functionality is available. That’s a nice starting point, so let’s do that.
Open your favourite editor and create a file called t/map-tube-hannover.t and fill it with this code:3
use strict;
use warnings;
use Test::More;
use Map::Tube::Hannover;
use Test::Map::Tube;
ok_map(Map::Tube::Hannover->new);
ok_map_functions(Map::Tube::Hannover->new);
done_testing();
Running the test suite (but avoiding the author tests for now), we find that things aren’t working.
$ make test
PERL_DL_NONLAZY=1 "/home/cochrane/perl5/perlbrew/perls/perl-5.38.3/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
t/00-load.t ............ 1/? # Testing Map::Tube::Hannover 0.01, Perl 5.038003, /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/bin/perl
t/00-load.t ............ ok
t/manifest.t ........... skipped: Author tests not required for installation
t/map-tube-hannover.t .. Can't locate Test/Map/Tube.pm in @INC (you may need to install the Test::Map::Tube module) (@INC entries checked: /path/to/Map-Tube-Hannover/blib/lib /path/to/Map-Tube-Hannover/blib/arch /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/lib/site_perl/5.38.3/x86_64-linux /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/lib/site_perl/5.38.3 /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/lib/5.38.3/x86_64-linux /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/lib/5.38.3 .) at t/map-tube-hannover.t line 7.
BEGIN failed--compilation aborted at t/map-tube-hannover.t line 7.
t/map-tube-hannover.t .. Dubious, test returned 2 (wstat 512, 0x200)
No subtests run
t/pod-coverage.t ....... skipped: Author tests not required for installation
t/pod.t ................ skipped: Author tests not required for installation
Test Summary Report
-------------------
t/map-tube-hannover.t (Wstat: 512 (exited 2) Tests: 0 Failed: 0)
Non-zero exit status: 2
Parse errors: No plan found in TAP output
Files=5, Tests=1, 0 wallclock secs ( 0.04 usr 0.01 sys + 0.38 cusr 0.07 csys = 0.50 CPU)
Result: FAIL
Failed 1/5 test programs. 0/1 subtests failed.
make: *** [Makefile:851: test_dynamic] Error 255
This is completely ok: we expected that the tests wouldn’t pass. We’re using the tests to help guide us as we slowly build the Map::Tube::Hannover module.
The first error we have is:
Can't locate Test/Map/Tube.pm in @INC (you may need to install the Test::Map::Tube module)
As the message says, we can try to get further by installing Test::Map::Tube:
$ cpanm Test::Map::Tube
This will install almost 90 distributions in a freshly-built Perl, so you might want to go and have a walk or get an appropriate beverage while cpanm does its thing.
Welcome back! Now that the next set of dependencies has been installed, we make a mental note to add Test::Map::Tube to the list of required test dependencies in Makefile.PL. Then we try running the tests again:
$ make test
PERL_DL_NONLAZY=1 "/home/cochrane/perl5/perlbrew/perls/perl-5.38.3/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
t/00-load.t ............ 1/? # Testing Map::Tube::Hannover 0.01, Perl 5.038003, /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/bin/perl
t/00-load.t ............ ok
t/manifest.t ........... skipped: Author tests not required for installation
t/map-tube-hannover.t .. Can't locate object method "new" via package "Map::Tube::Hannover" at t/map-tube-hannover.t line 9.
t/map-tube-hannover.t .. Dubious, test returned 255 (wstat 65280, 0xff00)
No subtests run
t/pod-coverage.t ....... skipped: Author tests not required for installation
t/pod.t ................ skipped: Author tests not required for installation
Test Summary Report
-------------------
t/map-tube-hannover.t (Wstat: 65280 (exited 255) Tests: 0 Failed: 0)
Non-zero exit status: 255
Parse errors: No plan found in TAP output
Files=5, Tests=1, 0 wallclock secs ( 0.05 usr 0.00 sys + 0.67 cusr 0.08 csys = 0.80 CPU)
Result: FAIL
Failed 1/5 test programs. 0/1 subtests failed.
make: *** [Makefile:857: test_dynamic] Error 255
This time we’ve got a problem in the module we’re creating. There’s something about a method new not being available. If you have a look at lib/Map/Tube/Hannover.pm, you’ll find that it’s filled with lots of docs, but there’s almost no code. How do we solve this? Well, the hint is in the error message above:
Can't locate object method "new"
If we see words like “object” and “method”, this means we’re dealing with object orientation.4 Thus, we need to turn our package into a class so that the failing test can call a new method and hence create an instance of the Map::Tube::Hannover class. There are several ways to create classes in Perl, so which one do we use? The hint is in the first sentence of Map::Tube’s DESCRIPTION:

The core module defined as Role (Moo) to process the map data.

In other words, we need to use Moo for object orientation. This should have been installed along with Test::Map::Tube, but just in case it wasn’t, you can install it with cpanm:
$ cpanm Moo
To use Moo to turn our package into a class, we only need to import it. Open lib/Map/Tube/Hannover.pm in your favourite editor and add the line use Moo; just after the use warnings; statement.
We don’t really need to run the full test suite each time we’re developing this code, so let’s use prove on only the t/map-tube-hannover.t test file instead:
$ prove -lr t/map-tube-hannover.t
t/map-tube-hannover.t .. # Not a Map::Tube object
# Failed test 'An object'
# at /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/lib/site_perl/5.38.3/Test/Map/Tube.pm line 196.
# Looks like you failed 1 test of 1.
t/map-tube-hannover.t .. 1/?
# Failed test 'ok_map_data'
# at t/map-tube-hannover.t line 9.
Don't know how to access underlying map data at /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/lib/5.38.3/Test/Builder.pm line 374.
# Tests were run but no plan was declared and done_testing() was not seen.
# Looks like your test exited with 255 just after 1.
t/map-tube-hannover.t .. Dubious, test returned 255 (wstat 65280, 0xff00)
Failed 1/1 subtests
Test Summary Report
-------------------
t/map-tube-hannover.t (Wstat: 65280 (exited 255) Tests: 1 Failed: 1)
Failed test: 1
Non-zero exit status: 255
Parse errors: No plan found in TAP output
Files=1, Tests=1, 1 wallclock secs ( 0.04 usr 0.00 sys + 0.36 cusr 0.02 csys = 0.42 CPU)
Result: FAIL
The tests still aren’t passing, but that’s ok, we’re getting somewhere. The important part here is:
# Not a Map::Tube object
Ok, so how do we make this into a Map::Tube object? We use the with statement from Moo, which

Composes one or more Moo::Role (or Role::Tiny) roles into the current class.

Add the following code under the use Moo; statement we added earlier:
with 'Map::Tube';
Running the test again will still fail, but this time we get a different error:5
$ prove -lr t/map-tube-hannover.t
t/map-tube-hannover.t .. ERROR: Can't apply Map::Tube role, missing 'xml' or 'json'. at /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/lib/site_perl/5.38.3/Map/Tube.pm line 148.
t/map-tube-hannover.t .. Dubious, test returned 255 (wstat 65280, 0xff00)
No subtests run
Test Summary Report
-------------------
t/map-tube-hannover.t (Wstat: 65280 (exited 255) Tests: 0 Failed: 0)
Non-zero exit status: 255
Parse errors: No plan found in TAP output
Files=1, Tests=0, 1 wallclock secs ( 0.03 usr 0.01 sys + 0.41 cusr 0.04 csys = 0.49 CPU)
Result: FAIL
The central issue is here:
Can't apply Map::Tube role, missing 'xml' or 'json'.
What does that mean?
We’ve arrived at the core of the problem we’re trying to solve: we now need to create the input map file describing the railway network. This file can be either XML or JSON formatted, hence why the error message mentions that there is missing XML or JSON.
To load the map file, we need to define either a json() or xml() method, depending upon the format we’ve chosen. The map file defines the lines and stations associated with our railway network and their connections.

One pattern is to place the map file in a share/ directory in the project’s base directory and to load it lazily by defining the respective json() or xml() method with the is option set to lazy, i.e.
has json => (is => 'lazy');
or
has xml => (is => 'lazy');
Because this is “lazy”, we need to define the builder method as well, e.g. for JSON-formatted files:
sub _build_json { dist_file('Map-Tube-Hannover', 'hannover-map.json') }
or for XML-formatted files:
sub _build_xml { dist_file('Map-Tube-Hannover', 'hannover-map.xml') }
It’s also possible to do this in one step, which is the approach that I prefer and which we’ll discuss now.
Another pattern for loading Map::Tube map files is to set the default option in the json() or xml() method, passing a sub which returns the file’s location. I found this to be a more direct approach and hence have used this pattern here.
As mentioned above, one usually places this file in a directory called share/ located in the project’s root directory. What’s not always clear is how we should name this file or how we connect it to the Map::Tube::<whatever> class. In the end, it doesn’t matter and one can simply follow the pattern used in e.g. Map::Tube::London, i.e. call the file something like <city-name>-map.json.
How to connect this file to the Map::Tube::<whatever> class is described in the Map::Tube::Cookbook WORK WITH A MAP documentation.

The trick is to create a getter called json()6 which returns the name of the input file. If you use the share/ directory pattern, you can use the File::Share module to get the location within the dist easily.
Let’s implement this now. Create the share/ directory and then create an empty input map file by touching it:
$ mkdir share
$ touch share/hannover-map.json
Now we import the dist_file
function from the File::Share
module by
adding the following code after the use Moo;
statement: [7]
use File::Share qw(dist_file);
Note that to be able to use this module, we’ll have to install it:
$ cpanm File::Share
We’ll also have to make another mental note to add this as a prerequisite in
our Makefile.PL
. We’ll get around to that later.
Further down the module, remove the stub function1
and function2
definitions that module-starter
created for us and replace them with the
recommended json
getter:
has json => (
is => 'ro',
default => sub {
return dist_file('Map-Tube-Hannover', 'hannover-map.json')
}
);
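For orientation, the relevant parts of lib/Map/Tube/Hannover.pm should now look roughly like this (a sketch: the POD and the rest of the module-starter boilerplate are omitted):
package Map::Tube::Hannover;

use strict;
use warnings;

use Moo;
use File::Share qw(dist_file);

with 'Map::Tube';

# Tell the Map::Tube role where to find our JSON map file in share/.
has json => (
    is      => 'ro',
    default => sub {
        return dist_file('Map-Tube-Hannover', 'hannover-map.json')
    }
);

1;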
Running the test file gives a new error! Yay!
$ prove -lr t/map-tube-hannover.t
t/map-tube-hannover.t .. Map::Tube::_init_map(): ERROR: Malformed Map Data (/path/to/Map-Tube-Hannover/share/hannover-map.json): malformed JSON string, neither array, object, number, string or atom, at character offset 1 (before "(end of string)")
(status: 126) file /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/lib/site_perl/5.38.3/Map/Tube.pm on line 151
t/map-tube-hannover.t .. Dubious, test returned 255 (wstat 65280, 0xff00)
No subtests run
Test Summary Report
-------------------
t/map-tube-hannover.t (Wstat: 65280 (exited 255) Tests: 0 Failed: 0)
Non-zero exit status: 255
Parse errors: No plan found in TAP output
Files=1, Tests=0, 1 wallclock secs ( 0.03 usr 0.00 sys + 0.44 cusr 0.02 csys = 0.49 CPU)
Result: FAIL
We seem to have malformed map data. That’s to be expected because the input file is empty.
Since it’s JSON, it’ll need some curly braces in it at the very least. Let’s add some to it and see what happens:
$ echo "{}" > share/hannover-map.json
$ prove -lr t/map-tube-hannover.t
t/map-tube-hannover.t .. Map::Tube::_validate_map_structure(): ERROR: Invalid line structure in map data. (status: 128) file /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/lib/site_perl/5.38.3/Map/Tube.pm on line 151
t/map-tube-hannover.t .. Dubious, test returned 255 (wstat 65280, 0xff00)
No subtests run
Test Summary Report
-------------------
t/map-tube-hannover.t (Wstat: 65280 (exited 255) Tests: 0 Failed: 0)
Non-zero exit status: 255
Parse errors: No plan found in TAP output
Files=1, Tests=0, 0 wallclock secs ( 0.03 usr 0.00 sys + 0.46 cusr 0.01 csys = 0.50 CPU)
Result: FAIL
Another different error! Nice. We don’t want to be crawling forward like
this all day, though. We need some real data in this file and with the
correct structure. Fortunately, both the Map::Tube
JSON
docs and the
Map::Tube::Cookbook
formal requirements for
maps
describe this for us nicely.
Our basic structure will need a name
and a lines
object containing a
line
array of all lines in our railway network. We’ll also need a
stations
object containing a station
array of all stations in the
network and how they are connected to the lines. Phew! That was a
mouthful! How does that look in practice? Let’s implement it!
Open the map file (share/hannover-map.json
) in your favourite editor and
enter the following data structure:
{
"name" : "Hannover",
"lines" : {
"line" : [
{
"id" : "L1",
"name" : "Linie 1"
}
]
},
"stations" : {
"station" : [
{
"id" : "H1",
"name" : "Hauptbahnhof",
"line" : "L1",
"link" : "H1"
}
]
}
}
This creates a map called Hannover
, with one line (called Linie 1
) and
one station on that line (Hauptbahnhof
). The link
attribute must be
set, hence we’ve set it to point to the station itself. I expect this to
give an error because links should be between stations, not to themselves.
However, this is the smallest basic example that I could think of. The
station’s ID, H1
, that I’ve used here doesn’t represent Hauptbahnhof 1
(as one could mistake it to mean) but means Hannover 1
because this will
be the first station in the Hannover network.
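As an aside, if you want to make sure the file is at least syntactically valid JSON before handing it to Map::Tube, a quick check with the core JSON::PP module does the job; it prints nothing on success and dies with a parse error otherwise:
$ perl -MJSON::PP -0777 -ne 'decode_json($_)' share/hannover-map.json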
Let’s see what the tests now tell us.
$ prove -lr t/map-tube-hannover.t
t/map-tube-hannover.t .. # Line id L1 defined but serves only one station
# Failed test 'Hannover'
# at /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/lib/site_perl/5.38.3/Test/Map/Tube.pm line 196.
# Station ID H1 links to itself
# Failed test 'Hannover'
# at /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/lib/site_perl/5.38.3/Test/Map/Tube.pm line 196.
# Looks like you failed 2 tests of 14.
t/map-tube-hannover.t .. 1/?
# Failed test 'ok_map_data'
# at t/map-tube-hannover.t line 9.
Map::Tube::get_shortest_route(): ERROR: Missing Station Name. (status: 100) file /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/lib/site_perl/5.38.3/Map/Tube.pm on line 193
# Failed test at t/map-tube-hannover.t line 10.
# got: 0
# expected: 1
# Looks like you failed 2 tests of 2.
t/map-tube-hannover.t .. Dubious, test returned 2 (wstat 512, 0x200)
Failed 2/2 subtests
Test Summary Report
-------------------
t/map-tube-hannover.t (Wstat: 512 (exited 2) Tests: 2 Failed: 2)
Failed tests: 1-2
Non-zero exit status: 2
Files=1, Tests=2, 1 wallclock secs ( 0.03 usr 0.01 sys + 0.46 cusr 0.07 csys = 0.57 CPU)
Result: FAIL
As I guessed, this still gives us an error. Even so, we’re getting somewhere. Focusing on the first error:
Line id L1 defined but serves only one station
we see we’ve been told that the line defined by the ID L1
only serves one
station (true, it does, but that’s something we’ll change soon). We’ve also
been told that the station referred to by the ID H1
links to itself,
Station ID H1 links to itself
which is what we already thought was dodgy. It’s nice that the basic validation test checks such things!
Ok, let’s add another station to see what happens. In our
share/hannover-map.json
map file, we extend the network to include the
station Langenhagen
[8] and we change the links so that the
stations connect to one another. The map file now looks like this:
{
"name" : "Hannover",
"lines" : {
"line" : [
{
"id" : "L1",
"name" : "Linie 1"
}
]
},
"stations" : {
"station" : [
{
"id" : "H1",
"name" : "Hauptbahnhof",
"line" : "L1",
"link" : "H2"
},
{
"id" : "H2",
"name" : "Langenhagen",
"line" : "L1",
"link" : "H1"
}
]
}
}
A note for anyone familiar with Hannover and its tram system: yes, the stations Hauptbahnhof and Langenhagen are on the same line (Linie 1); however, they are not directly linked to one another. Langenhagen is the final station along that line heading northwards; Hauptbahnhof is effectively the middle of the entire network. We’ll flesh out a fuller version of the network as we go along.
Running the tests this time gives:
$ prove -lr t/map-tube-hannover.t
t/map-tube-hannover.t .. ok
All tests successful.
Files=1, Tests=1, 1 wallclock secs ( 0.03 usr 0.01 sys + 0.50 cusr 0.02 csys = 0.56 CPU)
Result: PASS
Success!! Go and have a bit of a dance! You’ve created your first
functional Map::Tube
map! :tada:
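Before moving on, you can already do a quick manual sanity check that routing works by asking for a route between our two stations (run this from the project root; the exact stringification of the returned route object may differ slightly from what’s shown here):
$ perl -Ilib -MMap::Tube::Hannover -E 'say Map::Tube::Hannover->new->get_shortest_route("Hauptbahnhof", "Langenhagen")'
Hauptbahnhof (Linie 1), Langenhagen (Linie 1)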
Now things get interesting. We can start adding new lines and stations and
start linking them together. Then we can see how to use
Map::Tube::Hannover
to find routes between stations and even show a graph
of the railway network.
Let’s not get too far ahead of ourselves though. Let’s stay calm and focused and take things one step at a time.
But first, we’ve got some unfinished business. We’ve added some modules as
dependencies, so we need to ensure that our Makefile.PL
includes them and
commit that change. We also need to add our first iteration of the map file
to the Git repository as well as the code which integrates it into the
Map::Tube
framework and its test. To work!
If you remember correctly, the first module we added was Test::Map::Tube
.
We need to add this to the TEST_REQUIRES
key in the %WriteMakefileArgs
hash. Open Makefile.PL
and extend TEST_REQUIRES
to look like this:
TEST_REQUIRES => {
'Test::More' => '0',
'Test::Map::Tube' => '4.03',
},
Note that the Test::More
requirement was already present. We’ve specified
the version number for Test::Map::Tube
to be '4.03'
because this is the
version I used when writing this HOWTO. If you’re following along at home,
I recommend checking what the current version is and using that number
instead.
The remaining dependencies are “prerequisite Perl modules”, hence we need to
set the PREREQ_PM
hash key in %WriteMakefileArgs
. Change the initial
value from
PREREQ_PM => {
#'ABC' => '1.6',
#'Foo::Bar::Module' => '5.0401',
},
to
PREREQ_PM => {
'File::Share' => '0',
'Map::Tube' => '4.03',
'Moo' => '0',
},
where I’ve again chosen to select the specific Map::Tube
version being
used for this HOWTO. I’ve set the other modules to use version '0'
because any version should be sufficient for our
purposes.
Technically, we don’t need to add the Map::Tube
dependency because it’s
pulled in by Test::Map::Tube
. Still, it’s a good idea to add this
dependency explicitly as this ends up in the project metadata, informing
your users and any tools such as MetaCPAN,
CPANTS and CPAN
testers what is required to build and run the
module. Also, I’ve listed the prerequisites alphabetically so that it’s
easier to find and update this list in the future.
Looking at the diff for these changes, you should see something like this:
$ git diff Makefile.PL
diff --git a/Makefile.PL b/Makefile.PL
index b889368..22afd9a 100644
--- a/Makefile.PL
+++ b/Makefile.PL
@@ -14,11 +14,13 @@ my %WriteMakefileArgs = (
'ExtUtils::MakeMaker' => '0',
},
TEST_REQUIRES => {
- 'Test::More' => '0',
+ 'Test::More' => '0',
+ 'Test::Map::Tube' => '4.03',
},
PREREQ_PM => {
- #'ABC' => '1.6',
- #'Foo::Bar::Module' => '5.0401',
+ 'File::Share' => '0',
+ 'Map::Tube' => '4.03',
+ 'Moo' => '0',
},
dist => { COMPRESS => 'gzip -9f', SUFFIX => 'gz', },
clean => { FILES => 'Map-Tube-Hannover-*' },
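Before committing, it’s also worth a quick check that Makefile.PL still runs cleanly after the edit. The output should look roughly like this (the exact lines depend on your ExtUtils::MakeMaker version):
$ perl Makefile.PL
Generating a Unix-style Makefile
Writing Makefile for Map::Tube::Hannover
Writing MYMETA.yml and MYMETA.json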
Let’s commit that change:
$ git commit -m "Add base and test deps for first working example" Makefile.PL
[main e4e6f93] Add base and test deps for first working example
1 file changed, 5 insertions(+), 3 deletions(-)
The remaining changes are all interrelated. The change to import the
relevant third-party modules into our main module, the addition of the input
map file, the code which links this to Map::Tube
, as well as the test
file, are all sufficiently related that it makes sense to bundle all these
changes into a single commit. [9]
$ git add t/map-tube-hannover.t share/hannover-map.json lib/Map/Tube/Hannover.pm
$ git commit -m "
> Add initial minimal working map input file
>
> This is a first-cut implementation of the railway network for Hannover.
> Note that this is *not* intended to reflect the real-world situation just yet.
> I've chosen to use station names here which make the initial validation tests
> pass and which vaguely reflect the nature of the network itself. Both
> stations do exist on Linie 1, however are separated by several other stations
> in reality. Since the validation tests pass, we know that things are wired
> up to the Map::Tube framework properly."
[main fb94aab] Add initial minimal working map input file
Date: Sun Mar 30 20:02:06 2025 +0200
4 files changed, 55 insertions(+), 14 deletions(-)
create mode 100644 share/hannover-map.json
create mode 100644 t/map-tube-hannover.t
Note that you should not enter the greater-than signs at the beginning of
each line of the commit message entered above. These are the line
continuation markers shown by the shell. In other words, if you’re
following along and want to enter the commit message shown above, you will
need to remove the >
(including the space) from the text.
That should do for today! We got a lot done! We created a new module from
scratch and then used test-driven development to create the fundamental
structure for Map::Tube::Hannover
while also creating the most basic
Map::Tube
map file we could.
In the second post in the series, we’ll carefully extend the network to create a full line and then create a graph of the stations. Until then!
Originally posted on https://peateasea.de.
Image credits: Wikimedia Commons, Noto-Emoji project, Wikimedia Commons
Thumbnail credits: Swiss Cottage Underground Station (Jubilee Line) by Hugh Llewelyn
Anyone who knows me knows that I
despise inline commit messages made with git commit -m ""
. So why am
I using them here? Well, I want to keep the discussion moving and
I feel that describing the full commit message entering process would
disturb the flow too much. My advice: in real life, describe the “what”
of the change in the commit message’s subject line and the “why” in the
body. Taking the time to write a good commit message (explaining the
“why” of the change) will save you and your colleagues sooo much time
and pain in the future!
Note that the example code in the Map::Tube
documentation doesn’t specify an explicit test plan, nor does it end
the tests with done_testing()
. Consequently, you’ll find that the
tests will fail with the error:
Tests were run but no plan was declared and done_testing() was not seen.
This is why I’ve added done_testing();
to the test code I present
here.
This assumes that the file is JSON-formatted.
If you create an XML-formatted input file then you’ll need to create a
getter called xml()
.
Why only import the dist_file()
function and not use the
‘:all’ option as mentioned in the File::Share
documentation? Well,
we don’t need all the functions, so don’t import them. See also
perlimports
.
I can see where one might want to commit on an
even finer-grained scale. For instance, one could split the commits up
like so:
- Import the third-party modules into the main module file.
- Remove the stub functions.
- Add the test file, the input map file and the json()
getter.
Such decisions are a matter of taste and in this case, I think the commit I’ve made is sufficiently atomic for our purposes.
Published on Sunday 20 April 2025 11:37
While looking at some old bash script that bumps my semantic versions, I almost puked at my old ham-handed way of bumping the version. That led me to see how I could do it "better". Why? I dunno…bored on a Saturday morning and not motivated enough to do the NY Times crossword…
So you want to bump a semantic version string like 1.2.3
- major, minor, or patch - and you don’t want ceremony. You want one
line, no dependencies, and enough arcane flair to scare off
coworkers.
Here’s a single-line Bash–Perl spell that does exactly that:
v=$(cat VERSION | p=$1 perl -a -F[.] -pe \
'$i=$ENV{p};$F[$i]++;$j=$i+1;$F[$_]=0 for $j..2;$"=".";$_="@F"')
The spell:
- reads the current version from the VERSION file (1.2.3)
- takes the part to bump from $1 (0 for major, 1 for minor, 2 for patch)
- leaves the bumped version string in v
Wrap it like this in a shell function:
bump() {
v=$(cat VERSION | p=$1 perl -a -F[.] -pe \
'$i=$ENV{p};$F[$i]++;$j=$i+1;$F[$_]=0 for $j..2;$"=".";$_="@F"')
echo "$v" > VERSION
}
Then run:
bump 2 # bump patch (1.2.3 => 1.2.4)
bump 1 # bump minor (1.2.3 => 1.3.0)
bump 0 # bump major (1.2.3 => 2.0.0)
Want to bump right from make
?
p ?= 0   # part to bump: 0 = major, 1 = minor, 2 = patch (overridden by the targets below)

bump-major:
	@v=$$(cat VERSION | p=$(p) perl -a -F[.] -pe '$$i=$$ENV{p};$$F[$$i]++;$$j=$$i+1;$$F[$$_]=0 for $$j..2;$$"=".";$$_="@F"') && \
	echo $$v > VERSION && echo "New version: $$v"

bump-minor:
	@$(MAKE) bump-major p=1

bump-patch:
	@$(MAKE) bump-major p=2
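With a VERSION file containing 1.2.3, a run of the patch target would then look something like this (the "New version" line comes from the echo in the recipe):
$ make bump-patch
New version: 1.2.4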
Or break it out into a .bump-version
script and source it from your build tooling.
-a # autosplit into @F
-F[.] # split on literal dot
$i=$ENV{p} # get part index from environment (e.g., 1 for minor)
$F[$i]++ # bump it
$j=$i+1 # start index for resetting
$F[$_]=0 ... # zero the rest
$"="."; # join array with dots
$_="@F" # set output
If you have to explain this to some junior dev, just say RTFM, skippy: perldoc perlrun. Use the force, Luke.
And if the senior dev wags his finger and says UUOC, tell him Ego malum edo.
Published by prz on Sunday 20 April 2025 13:58
Published by Saif Ahmed on Friday 18 April 2025 10:45
Stefan (niner) has now brought his efforts on RakuAST to a conclusion. This mammoth task was started previously by Jonathan Worthington. In the time since the award of his grant, he has made 823 commits to RakuAST, and his overall contribution to Raku in the past couple of years is second only to that of the very prolific Elizabeth Mattijsen. His contributions can be viewed on github. It is impossible to describe all his activity on this project, and I imagine it took much more than the 200 hours he had estimated in his original application. His commentary on the project is available on his own blog pages, which also contain other interesting stuff. A summary of key activities can be extracted from the Rakudo Weekly Blogs by Elizabeth; these are shamelessly reproduced below in reverse chronological order, with links to the original blog pages, as they are representative of the vast scope of his work:
Stefan Seifert basically concluded [his] work on the Raku bootstrap, with the number of test-files passing equalling the number of passing test-files in the non-bootstrapped Rakudo.
The number of passing test-files with the new Raku grammar are now 141/153 (make test +0) and 1299/1345 (make spectest +20).
Stefan Seifert fixed a potential segfault in generating object IDs, and an issue with signatures containing multiple slurpies, and an issue with the will trait.
Stefan Seifert started focusing on bootstrapping the new Raku grammar from scratch (whereas until now it assumed there was a working Raku available) as opposed to try fixing errors in roast. This work is available in a branch as of this writing, and the number of passing spectest files in this fully bootstrapped implementation of the Raku Programming Language is now already 1228 (out of 1345, as opposed to 1279 in the non-bootstrapped version). Another major step forward to making RakuAST mainstream!
Stefan Seifert also fixed quite a few issues (and that’s an understatement!) in the non-bootstrapped RakuAST as well.
Stefan Seifert continued working on RakuAST. The most significant fixes:
- BEGIN time call for non-simple constructs
- support for %?RESOURCES and $?DISTRIBUTION
- blocks as defaults for parameters
- many attribute and package stub issues
- added several warnings
- and many smaller fixes!
Stefan Seifert continued working on RakuAST. The most significant fixes:
- operators / terms defined as variables
- return with pair syntax
- several variable visibility issues at BEGIN time
- fixes to ss/// and S//
- several (sub-)signature and generics issues
- binding attributes in method arguments
- several issues related to categoricals
- support <|c> and <|w> assertions in regexes
- several return issues / return value issues
- progress in making require work
- and many, many, many more smaller fixes!
Stefan Seifert continued working on RakuAST. The most significant fixes:
- non-trivial lazy loops
- allow declaration of $_ in loops and other loop related fixes
- handling labels with loop structures
- a large number of regex related features, such as fixing LTM (Longest Token Match) matching and interpolation of attributes in regexes
- exceptions thrown in CHECK phasers
- support added for tr/// and TR///
- better handling of subroutine stubs
- and many, many more smaller fixes!
Stefan Seifert continued working on RakuAST and fixed some more issues with the phasers, multi-part named roles, language versions, where clauses on subsets and much more!
Stefan Seifert continued working on RakuAST and fixed issues with the will trait, CHECK phasers, the use variables pragma, multi regexes and much more!
Stefan Seifert continued working on RakuAST and produced more than 50 commits, fixing all of the remaining S03 tests and other issues.
Stefan Seifert changed the behaviour of throws-like (for the better) in light of compilation errors. Stefan Seifert continued working on RakuAST, fixing: error messages, operator properties on custom operators, several meta-operator and hypering issues, dispatch using .?, .+ and .*, adverbs on infixes, and more.
Stefan Seifert returned to RakuAST development and completed the work on the branch that took a new approach to compile time actions (really a GBR aka Great BEGIN Refactor). A branch that was started by Jonathan Worthington over a year ago. Stefan continued from there by fixing use fatal.
Published on Thursday 17 April 2025 20:08
The examples used here are from the weekly challenge problem statement and demonstrate the working solution.
You are given an array of words and a word. Write a script to return true if concatenating the first letter of each word in the given array matches the given word, return false otherwise.
Here’s our one subroutine; this problem requires very little code.
sub acronyms{
    my($word_list, $word) = @_;
    my @first_letters = map {(split //, $_)[0]} @{$word_list};
    return 1 if $word eq join q//, @first_letters;
    return 0;
}
Putting it all together...
The rest of the code just runs some simple tests.
MAIN:{
say acronyms([qw/Perl Weekly Challenge/], q/PWC/);
say acronyms([qw/Bob Charlie Joe/], q/BCJ/);
say acronyms([qw/Morning Good/], q/MM/);
}
$ perl perl/ch-1.pl
1
1
0
You are given two strings. Write a script to return true if swapping any two letters in one string matches the other string, return false otherwise.
Here’s the process we’re going to follow.
Now let’s check and see how many differences were found.
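One possible friendly() consistent with that description, shown here as an illustrative sketch rather than the original code fragments, is:
sub friendly {
    my ($s, $t) = @_;
    return 0 unless length($s) == length($t);
    my @s = split //, $s;
    my @t = split //, $t;
    # indices where the two strings disagree
    my @diff = grep {$s[$_] ne $t[$_]} 0 .. $#s;
    # friendly only if exactly two mismatches that swap into each other
    return 0 unless @diff == 2;
    my ($i, $j) = @diff;
    return ($s[$i] eq $t[$j] && $s[$j] eq $t[$i]) ? 1 : 0;
}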
The rest of the code combines the previous steps and drives some tests.
MAIN:{
say friendly q/desc/, q/dsec/;
say friendly q/cat/, q/dog/;
say friendly q/stripe/, q/sprite/;
}
$ perl perl/ch-2.pl
1
0
1