Published by Rawley Fowler on Wednesday 23 April 2025 23:44
I have a very large program that writes a lot of things to STDOUT. I'd like to be able to feed all of that output to both STDOUT and to an arbitrary FILE. I've decided to use IO::Tee to create the shared writeable handle:
use IO::Tee;
pipe(my $reader, my $writer);
my $new_stdout = IO::Tee->new(\*STDOUT, $writer);
*{STDOUT} = $new_stdout;
print "foo";
print(do { local $\; <$reader> });
However, this causes a deep recursion and crashes the program. So instead I can pass *STDOUT itself rather than a reference to it, and it creates the handle:
use IO::Tee;
pipe(my $reader, my $writer);
my $new_stdout = IO::Tee->new(*STDOUT, $writer);
*{STDOUT} = $new_stdout;
print "foo";
print(do { local $\; <$reader> });
This creates the warning: Undefined value assigned to typeglob at ... line 42, and when I use Data::Printer to describe $new_stdout, it is undef. How can I do this?
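One way to avoid the recursion (a sketch, and only one of several approaches; the Tee class and the output.log path are mine, not from the question) is to dup the real STDOUT before redirecting anything, then tie *STDOUT so that every print fans out to the dup'd handle and to the log file:

```perl
use strict;
use warnings;

# Minimal tee via tie: a sketch, assuming a log file named "output.log".
package Tee;
sub TIEHANDLE { my ($class, @fhs) = @_; bless { fhs => \@fhs }, $class }
sub PRINT     { my $self = shift; print {$_} @_ for @{ $self->{fhs} }; 1 }
sub PRINTF    { my $self = shift; my $fmt = shift; $self->PRINT(sprintf $fmt, @_) }

package main;

# Dup the real STDOUT *before* tying; PRINT then writes to the dup'd
# handle, not back into the tied glob -- writing back into the tied glob
# is what caused the deep recursion in the attempt above.
open my $real_stdout, '>&', \*STDOUT or die "can't dup STDOUT: $!";
open my $log, '>', 'output.log'      or die "can't open log: $!";

tie *STDOUT, 'Tee', $real_stdout, $log;

print "foo\n";    # appears on the terminal and in output.log
```

Because the tied PRINT method only ever touches the dup'd handle and the log handle, there is no path back into the tied *STDOUT and hence no recursion.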
Published by /u/CantaloupeConnect717 on Wednesday 23 April 2025 23:09
Heard they've been moving away from Perl? Any more recent insight?
https://www.teamblind.com/post/Tech-stack-at-bookingcom-F5d5wyZz
Published by Andrew on Wednesday 23 April 2025 15:53
My Perl code is below:
my %hash = {
"payload": {
"1": {
"name": "Andrew"
},
"2": {
"name": "Dylan"
},
"3": {
"name": "Mike"
},
"4": {
"name": "Nick"
},
"5": {
"name": "Tom"
}
},
}
I would like to sort $hash{payload} by the "name" value. How can I do so?
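For what it's worth, the structure above is JSON syntax rather than valid Perl; once it's written with parentheses and fat commas, sorting the payload keys by each entry's "name" is a one-liner. A sketch:

```perl
use strict;
use warnings;
use feature 'say';

# The data rewritten as valid Perl (=> and parentheses, not JSON colons/braces)
my %hash = (
    payload => {
        1 => { name => 'Andrew' },
        2 => { name => 'Dylan'  },
        3 => { name => 'Mike'   },
        4 => { name => 'Nick'   },
        5 => { name => 'Tom'    },
    },
);

# Sort the payload keys by the "name" value of each entry
my @sorted = sort { $hash{payload}{$a}{name} cmp $hash{payload}{$b}{name} }
             keys %{ $hash{payload} };

say $hash{payload}{$_}{name} for @sorted;
```

With this sample data the names happen to come out in the same order as the numeric keys, since they are already alphabetical.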
You are given two arrays of integers, each containing the same elements as the other. Write a script to return true if one array can be made equal to the other by reversing exactly one contiguous subarray.
@source = (3, 2, 1, 4)
@target = (1, 2, 3, 4)
true
(3,2,1
reverses to 1,2,3
)
@source = (1, 3, 4)
@target = (4, 1, 3)
false
@source = (2)
@target = (2)
true
At first glance, it seems like we might have to enumerate all the possible sub-sequences and reverse each one. But after a little more time for the coffee to work its magic, there's a simpler approach.
@source: = = = = > > > = = = =
@target: = = = = < < < = = = =
The string has three parts. Somewhere in the middle is a sequence that can be reversed (maybe), and on either side of that are sequences that are equal. With that in mind, let's put on some music and write some Perl.
use v5.36;                     # enable subroutine signatures
use builtin qw(true false);    # the true/false keywords used below
no warnings 'experimental::builtin';

sub revEq ($source, $target)
{
use List::Util qw/zip/;
use List::MoreUtils qw/first_index last_index/;
# Arrays must be the same size
return false if $source->$#* != $target->$#*;
# Combine the two arrays into pairs
my @pair = zip($source, $target);
# Find the left segment of equal elements
my $left = first_index { $_->[0] != $_->[1] } @pair;
# If strings are equal, we can stop
return true if $left == -1;
# Find the right segment of equal elements
my $right = last_index { $_->[0] != $_->[1] } @pair;
# Extract the middle that could be reversed.
my @midsrc = $source->@[$left .. $right];
my @midtrg = $target->@[$left .. $right];
# Check that one is the reverse of the other
while ( @midsrc && shift @midsrc == pop @midtrg ) {}
return @midsrc == 0;
}
There are some sub-problems that we don't need to re-invent. (Previous PWC challenges have already re-invented them anyway.) I'm going to use common library routines from List::Util and List::MoreUtils.

zip -- I want to compare corresponding elements. I could loop over indexes and start down the road to off-by-one hell, but I'd rather operate on lists as a whole and let the language and its libraries do my array bookkeeping. zip will turn two lists into one list of pairs.

first_index and last_index -- I want the first position on the left and on the right where the two sequences don't match. If the strings are equal, this would return -1.

array slices -- Knowing the bounds of the possibly-reversed sub-sequence, I can extract it with a slice. I know that the range will be valid, because I already returned if there was no unequal element, so at this point in the program $left must be less than or equal to $right.

reversal check -- The obvious thing to do would be to reverse either @midsrc or @midtrg and then do an array compare. What I'm doing here instead is walking over both of them from opposite ends, destroying them in the process. If @midsrc is reduced to empty, then the strings must be mirror images.
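The same three-part idea can also be sketched with core Perl only, in case List::MoreUtils isn't to hand (rev_eq is my name for this variant; it scans in from both ends instead of using first_index/last_index, and returns 1/0 instead of true/false):

```perl
use strict;
use warnings;

# Scan in from both ends past the equal outer segments, then check that
# the middle of one array is the mirror image of the middle of the other.
sub rev_eq {
    my ($source, $target) = @_;
    return 0 if @$source != @$target;            # sizes must match
    my ($left, $right) = (0, $#$source);
    $left++  while $left <= $right && $source->[$left] == $target->[$left];
    return 1 if $left > $right;                  # arrays already equal
    $right-- while $source->[$right] == $target->[$right];
    my @midsrc = @{$source}[$left .. $right];
    my @midtrg = @{$target}[$left .. $right];
    while (@midsrc && shift(@midsrc) == pop(@midtrg)) {}
    return @midsrc == 0 ? 1 : 0;
}

print rev_eq([3, 2, 1, 4], [1, 2, 3, 4]), "\n";   # 1 (true)
print rev_eq([1, 3, 4],    [4, 1, 3]),    "\n";   # 0 (false)
print rev_eq([2],          [2]),          "\n";   # 1 (true)
```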
Published by Ted James on Wednesday 23 April 2025 13:32
Published by khwilliamson on Wednesday 23 April 2025 03:06
mem_collxfrm(): White space, comments only. The previous commit removed a block, so can outdent.
Published by khwilliamson on Wednesday 23 April 2025 03:06
mem_collxfrm(): Return early if locale collation not sane This changes a subsidiary function's return value from void to bool, returning false if it finds the locale doesn't have sane collation. The calling function is changed to check this, and give up immediately if the locale isn't sane.
Published by khwilliamson on Wednesday 23 April 2025 03:06
locale.c: Don't do asymmetric back out on failure This fixes #23519 When something goes wrong doing locale-aware string collation, the code attempts to carry on as well as can be expected. Prior to this commit the backout code was asymmetric, trying to undo things that had not been done. This happened when the failure was early on. In the case of this ticket, the platform has a defective locale that was detectable before getting very far along. The solution adopted here is to jump to a different label for those early failures that does less backout than for later failures.
Published by khwilliamson on Wednesday 23 April 2025 03:06
locale.c: Change name of macro This is in preparation for the next commit where it will be split out to be a stand-alone macro.
Published by Zilore Mumba on Wednesday 23 April 2025 01:19
I have a Python program which I want to modify using Perl at each run. Somehow the expressions I want to replace are not found at all. Here are code snippets: I open the Python script and read all the lines with
my @lines=<$INFILE>
Then
if($line =~ m/dfmx=df[['\d{4}', 'Mean', '\d{4}']].max(axis=1)/){
$line =~ s/dfmx=df[['\d{4}', 'Mean', '\d{4}']].max(axis=1)/dfmx=df[['$year1', 'Mean', '$year2']].max(axis=1)/g;
}
if($line =~ m/plt.plot(dfdi[1], df['\d{4}'], color='g', label='\d{4}')/){
$line =~ s/plt.plot(dfd[1], df['\d{4}'], color='g', label='\d{4}')/plt.plot(dfd[1], df['$year1'], color='g', label='$year1')/g;
}
I have defined year1 and year2 before these lines. I am really baffled that these lines cannot be seen at all. When I put a print statement after the "if", nothing happens. I have tried to escape all occurrences of [ and ]; nothing happens.
Any ideas, please!
Here are typical lines to change
dfmx=df[['1988', 'Mean', '1984']].max(axis=1)
#Generate plots
plt.plot(dfd[1], df['1988'], color='g', label='1988')
plt.plot(dfd[1], df['Mean'], color='b', label='Mean')
plt.plot(dfd[1], df['1984'], color='r', label='1984')
if($line =~ m/'\d{4}', 'Mean',/){
$line =~ s/'\d{4}', 'Mean', '\d{4}'/'$year1', 'Mean', '$year2'/g;
}
if($line =~ m/color='g'/){
$line =~ s/\d{4}/$year1/g;
}
if($line =~ m/color='r'/){
$line =~ s/\d{4}/$year2/g;
}
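The root cause is that [ ] ( ) and . are regex metacharacters, so the original long patterns never match the literal Python source; escaping them (or using the shorter anchor patterns above) fixes it. A sketch against one of the sample lines, with $year1/$year2 standing in for the values defined earlier:

```perl
use strict;
use warnings;

my $year1 = 1990;
my $year2 = 1995;
my $line  = "dfmx=df[['1988', 'Mean', '1984']].max(axis=1)\n";

# The [ ] ( ) . metacharacters must be backslash-escaped on the pattern
# side; \d{4} works only in the pattern, never in the replacement.
if ($line =~ /dfmx=df\[\['\d{4}', 'Mean', '\d{4}'\]\]\.max\(axis=1\)/) {
    $line =~ s/'\d{4}', 'Mean', '\d{4}'/'$year1', 'Mean', '$year2'/;
}
print $line;    # dfmx=df[['1990', 'Mean', '1995']].max(axis=1)
```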
Published by G4143 on Wednesday 23 April 2025 00:09
How do you print "undef" for array elements that are undefined?
#! /usr/bin/env perl
use warnings;
use strict;
use utf8;
use feature qw<say>;
my @a1 = 1..8;
my @a2 = (4143, undef, 8888);
my @a3 = qw<a b c>;
say "@a1 and @a2 and @a3";
exit(0);
This outputs:
Use of uninitialized value $a2[1] in join or string at ./my.pl line 12.
1 2 3 4 5 6 7 8 and 4143 8888 and a b c
But I'd like it to output (without warnings)
1 2 3 4 5 6 7 8 and 4143 undef 8888 and a b c
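One way (a sketch): map the undefined elements to the literal string "undef" with the defined-or operator before interpolating, which also silences the warning because nothing undefined is ever stringified:

```perl
use strict;
use warnings;
use feature 'say';

my @a1 = 1 .. 8;
my @a2 = (4143, undef, 8888);
my @a3 = qw<a b c>;

# Replace each undefined element with the string "undef" before interpolation
my @shown = map { $_ // 'undef' } @a2;
say "@a1 and @shown and @a3";    # 1 2 3 4 5 6 7 8 and 4143 undef 8888 and a b c
```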
Map::Tube now supports Unicode characters in station names.
Please check out the link below for more information.
https://theweeklychallenge.org/blog/map-tube-unicode
Published by jkeenan on Tuesday 22 April 2025 01:36
Spelling correction only.
Published by /u/harrisonfordatemyass on Monday 21 April 2025 23:46
Hi everyone, I'm curious if anyone here has made money from a Perl application. I'm interested in hearing about your experiences, the type of application, and if you're comfortable sharing, the amount of money you've made. Any insights or advice would be greatly appreciated! Thanks!
Published by DragosTrif on Monday 21 April 2025 21:23
A while ago I released DBD::Mock::Session::GenerateFixtures on MetaCPAN. So let's see how we can mock data for MySQL/MariaDB via the Rose::DB ORM.
First, let's install everything:
sudo apt-get install -y mysql-server mysql-client libmysqlclient-dev
cpanm DBD::Mock::Session::GenerateFixtures
At this point we can work on our unit test:
use Test2::V0;
use lib qw(lib);
use DBI;
use Data::Dumper;
use DBD::Mock::Session::GenerateFixtures;
use Rose::DB::Object::Loader;
use Sub::Override;
use File::Path qw(rmtree);
# first connect to your db (DB here is your Rose::DB subclass)
my $db = DB->new(
domain => 'mysql',
type => 'mysql'
);
# pass the real dbh to DBD::Mock::Session::GenerateFixtures
my $mock_dumper = DBD::Mock::Session::GenerateFixtures->new({dbh => $db->dbh()});
my $num_rows_updated = DB::Media::Manager->update_media(
set => {
location => '/data/music/claire_de_lune.ogg',
},
where => [
id => 2,
],
);
is($num_rows_updated, 1, 'update media table is ok');
After the data is collected in a JSON file, it can be accessed by our test on the next run. Some extra lines need to be added for insert, update and delete to work as expected.
my $mock_dumper = DBD::Mock::Session::GenerateFixtures->new();
# get the mocked dbh
my $dbh = $mock_dumper->get_dbh();
my $override = Sub::Override->new();
my $last_insert_id = 4;
my $update_or_deleted = 1;
# override the dbh
$override->replace('Rose::DB::dbh' => sub {return $dbh});
# override last inserted id
$override->replace('Rose::DB::MySQL::last_insertid_from_sth' => sub {$last_insert_id++; return $last_insert_id});
# override rows
$override->replace('DBD::Mock::st::rows' => sub {return 1});
A full unit test example can be seen here:
test_rose_db_with_mysql
Bibliography
Rose::DB ORM
Generate Fixtures for DBD::Mock::Session
Testing with a real database
Published by /u/oalders on Monday 21 April 2025 17:53
Mohammad Sajid Anwar’s post
in last year’s Perl Advent Calendar about his
Map::Tube
module intrigued me. I
decided I wanted to build such a map for the tram network in the city where
I live: Hannover, Germany. Along the way, I thought it’d be nice to have a
detailed HOWTO explaining the steps needed to create a Map::Tube
map for
one’s city of interest. Since I enjoy explaining things in detail, this got
… long. So I broke it up into parts.
Welcome to the first post in a five-part series about how to create
Map::Tube
maps.
Originally I wrote this as a single post, which made it, you might say, rather protracted. I’ve thus split it up into five separate posts, each building upon the previous. This way each is more digestible and hopefully the reader doesn’t–in the words of P.D.Q. Bach–fall into a confused slumber. Let’s see how I manage…
In this five-part series, we're going to:

- build the most basic Map::Tube map we can create (this post).
- learn the structure of Map::Tube map files and then extend the map to more stations along the first line, displaying a graph of the line.

This first post is the longest because I spend time discussing how to set up
a module from scratch. Experienced readers can skip this section if they so
wish and go directly to the section about building the Map::Tube
map file
guided by tests.
As I mentioned in my post about finding all tram stops in
Hannover,
Mohammad Sajid Anwar’s Perl Advent Calendar article about his Perl-based
routing network module for railway
systems interested me and I
wanted to create my own. This series of posts will use Hannover as the main
focus to show you how to build Map::Tube
maps, giving you the information
you need to create your own.
There’s a lot to get through, so we’d better get started!
Each map for a given railway network is a Perl module in its own right.
Hence, the first thing we need to do is create a stub module for our
project. Maps for specific cities follow the same naming pattern:
Map::Tube::<city-name>
. Their project directories follow a similar naming
pattern: Map-Tube-<city-name>
. Thus, for our current example, the goal is
to create a module called Map::Tube::Hannover
within a directory named
Map-Tube-Hannover
. Let’s do that now.
For the rest of the discussion, I'm going to assume that you have a recent perlbrew-ed Perl[1] and that you've set that all up properly.
As mentioned in the perlnewmod
documentation, the recommended way to
create a new stub module (including its files and directory layout) is to
use the module-starter
program. This isn’t distributed with Perl, so we
have to install it before we can use it. It’s part of the
Module::Starter
distribution;
install it now with cpanm
:
$ cpanm Module::Starter
To create our stub Map::Tube::Hannover
module we run module-starter
,
giving it some required module meta-data:
$ module-starter --module=Map::Tube::Hannover --author="Paul Cochrane" --email=ptc@cpan.org \
--ignores=git --ignores=manifest
Created starter directories and files
The --ignores=git
and --ignores=manifest
options create .gitignore
and MANIFEST.SKIP
files for us. Thus, anything we don’t need in the
repository or the final CPAN distribution is skipped and ignored from the
get-go. This is handy as it saves mucking about with admin stuff when we
could be getting going with our shiny new module.
The module-starter
command created a directory called Map-Tube-Hannover
in the current directory and filled it with some standard files every Perl
distribution/module should have. Let’s enter the directory and see what
we’ve got.
$ cd Map-Tube-Hannover
$ tree
.
├── Changes
├── lib
│ └── Map
│ └── Tube
│ └── Hannover.pm
├── Makefile.PL
├── MANIFEST.SKIP
├── README
├── t
│ ├── 00-load.t
│ ├── manifest.t
│ ├── pod-coverage.t
│ └── pod.t
└── xt
└── boilerplate.t
5 directories, 10 files
We see that module-starter
created a Perl module file
(lib/Map/Tube/Hannover.pm
) for our planned Map::Tube::Hannover
module.
The command also created the associated (sub-)directory structure, a test
directory with some useful initial tests, as well as various module-related
build and information files.
This is a great starting point, so let's save this state by creating a Git repository in this directory and adding the files to the repo in an initial commit.[2]
$ git init
Initialized empty Git repository in /path/to/Map-Tube-Hannover/.git/
$ git add .
$ git commit -m "Initial import of Map::Tube::Hannover stub module files"
[main (root-commit) 7bd778e] Initial import of Map::Tube::Hannover stub module files
11 files changed, 380 insertions(+)
create mode 100644 .gitignore
create mode 100644 Changes
create mode 100644 MANIFEST.SKIP
create mode 100644 Makefile.PL
create mode 100644 README
create mode 100644 lib/Map/Tube/Hannover.pm
create mode 100644 t/00-load.t
create mode 100644 t/manifest.t
create mode 100644 t/pod-coverage.t
create mode 100644 t/pod.t
create mode 100644 xt/boilerplate.t
If you want to follow along with how I built things, the Git repo for this project is on GitHub.
Personally, I love tests. They help reduce risk and (if the project has high test coverage) give me confidence that the code is doing what I expect it to do. They also help me be more fearless when refactoring a codebase. A good test suite can make for a wonderful development experience.
So, before we start implementing things, let’s build the project and run the
test suite so that we know that everything is working as we expect. Yes, I
expect the authors of Module::Starter
will have created everything
correctly, but it’s a good feeling to know that one is starting from a solid
foundation before changing anything.
To build the project, we create its Makefile
by running Makefile.PL
with
perl
. Then we simply call make test
:
$ perl Makefile.PL
Generating a Unix-style Makefile
Writing Makefile for Map::Tube::Hannover
Writing MYMETA.yml and MYMETA.json
$ make test
cp lib/Map/Tube/Hannover.pm blib/lib/Map/Tube/Hannover.pm
PERL_DL_NONLAZY=1 "/home/cochrane/perl5/perlbrew/perls/perl-5.38.3/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
t/00-load.t ....... 1/? # Testing Map::Tube::Hannover 0.01, Perl 5.038003, /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/bin/perl
t/00-load.t ....... ok
t/manifest.t ...... skipped: Author tests not required for installation
t/pod-coverage.t .. skipped: Author tests not required for installation
t/pod.t ........... skipped: Author tests not required for installation
All tests successful.
Files=4, Tests=1, 0 wallclock secs ( 0.04 usr 0.01 sys + 0.34 cusr 0.03 csys = 0.42 CPU)
Result: PASS
Cool! The tests passed! Erm, ‘test’, I should say, as only one ran. That
test showed that the module can be loaded (this is what t/00-load.t
does).
However, some of our tests didn’t run because they’re only to be run by
module authors. To run these tests, we need to set the RELEASE_TESTING
environment variable:
$ RELEASE_TESTING=1 make test
PERL_DL_NONLAZY=1 "/home/cochrane/perl5/perlbrew/perls/perl-5.38.3/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
t/00-load.t ....... 1/? # Testing Map::Tube::Hannover 0.01, Perl 5.038003, /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/bin/perl
t/00-load.t ....... ok
t/manifest.t ...... skipped: Test::CheckManifest 0.9 required
t/pod-coverage.t .. skipped: Test::Pod::Coverage 1.08 required for testing POD coverage
t/pod.t ........... skipped: Test::Pod 1.22 required for testing POD
All tests successful.
Files=4, Tests=1, 0 wallclock secs ( 0.02 usr 0.01 sys + 0.32 cusr 0.04 csys = 0.39 CPU)
Result: PASS
Hrm, the author tests were still skipped. We need to install some modules from CPAN to get everything running:
$ cpanm Test::CheckManifest Test::Pod::Coverage Test::Pod
This time the author tests run, but the t/manifest.t
test fails:
$ RELEASE_TESTING=1 make test
PERL_DL_NONLAZY=1 "/home/cochrane/perl5/perlbrew/perls/perl-5.38.3/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
t/00-load.t ....... 1/? # Testing Map::Tube::Hannover 0.01, Perl 5.038003, /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/bin/perl
t/00-load.t ....... ok
t/manifest.t ...... Bailout called. Further testing stopped: Cannot find a MANIFEST. Please check!
t/manifest.t ...... Dubious, test returned 255 (wstat 65280, 0xff00)
Failed 1/1 subtests
FAILED--Further testing stopped: Cannot find a MANIFEST. Please check!
make: *** [Makefile:851: test_dynamic] Error 255
Weird! I didn’t expect that.
It turns out that we’ve not created an initial MANIFEST
file. That’s easy
to fix, though. We only need to run make
with the manifest
target:
$ make manifest
"/home/cochrane/perl5/perlbrew/perls/perl-5.38.3/bin/perl" "-MExtUtils::Manifest=mkmanifest" -e mkmanifest
Added to MANIFEST: Changes
Added to MANIFEST: lib/Map/Tube/Hannover.pm
Added to MANIFEST: Makefile.PL
Added to MANIFEST: MANIFEST
Added to MANIFEST: README
Added to MANIFEST: t/00-load.t
Added to MANIFEST: t/manifest.t
Added to MANIFEST: t/pod-coverage.t
Added to MANIFEST: t/pod.t
Added to MANIFEST: xt/boilerplate.t
So far, so good. Let’s see what the tests say now:
$ RELEASE_TESTING=1 make test
PERL_DL_NONLAZY=1 "/home/cochrane/perl5/perlbrew/perls/perl-5.38.3/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
t/00-load.t ....... 1/? # Testing Map::Tube::Hannover 0.01, Perl 5.038003, /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/bin/perl
t/00-load.t ....... ok
t/manifest.t ...... ok
t/pod-coverage.t .. ok
t/pod.t ........... ok
All tests successful.
Files=4, Tests=4, 0 wallclock secs ( 0.04 usr 0.00 sys + 0.41 cusr 0.05 csys = 0.50 CPU)
Result: PASS
That’s better!
You’ll note that although we’ve created some files not tracked by Git (e.g.
the Makefile
and MANIFEST
files), the working directory is still clean:
$ git status
On branch main
nothing to commit, working tree clean
This is because the --ignores=git
option passed to module-starter
generates a .gitignore
file which ignores the MANIFEST
among other such
files. Nice!
Since we installed some modules as part of getting everything running, we
need to update our dependencies. These dependencies aren’t required to get
the module up and running. Nor are they strictly required to test
everything, because they’re tests for module authors, not for users of the
module. However, since we’re creating a module, we’re our own module
author, so it’s a good idea to set up the author tests. Thus, we need to
specify them as recommended test-stage prerequisites. Neil Bowers has a
good blog post about specifying dependencies for your CPAN
distribution
which describes things in more detail. For our case here, this boils down
to inserting the following code at the end of the %WriteMakefileArgs
hash
in Makefile.PL
:
# rest of %WriteMakefileArgs content
META_MERGE => {
"meta-spec" => { version => 2 },
prereqs => {
test => {
recommends => {
'Test::CheckManifest' => '0.9',
'Test::Pod::Coverage' => '1.08',
'Test::Pod' => '1.22',
},
},
},
},
Let’s try running the tests again to make sure that we haven’t broken anything:
$ RELEASE_TESTING=1 make test
Makefile out-of-date with respect to Makefile.PL
Cleaning current config before rebuilding Makefile...
make -f Makefile.old clean > /dev/null 2>&1
"/home/cochrane/perl5/perlbrew/perls/perl-5.38.3/bin/perl" Makefile.PL
Checking if your kit is complete...
Looks good
Generating a Unix-style Makefile
Writing Makefile for Map::Tube::Hannover
Writing MYMETA.yml and MYMETA.json
==> Your Makefile has been rebuilt. <==
==> Please rerun the make command. <==
false
make: *** [Makefile:809: Makefile] Error 1
Oops, we forgot to rebuild the Makefile
. Let’s do that quickly:
$ perl Makefile.PL
Generating a Unix-style Makefile
Writing Makefile for Map::Tube::Hannover
Writing MYMETA.yml and MYMETA.json
Now the test suite runs and passes as we hope:
$ RELEASE_TESTING=1 make test
PERL_DL_NONLAZY=1 "/home/cochrane/perl5/perlbrew/perls/perl-5.38.3/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
t/00-load.t ....... 1/? # Testing Map::Tube::Hannover 0.01, Perl 5.038003, /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/bin/perl
t/00-load.t ....... ok
t/manifest.t ...... ok
t/pod-coverage.t .. ok
t/pod.t ........... ok
All tests successful.
Files=4, Tests=4, 0 wallclock secs ( 0.03 usr 0.01 sys + 0.42 cusr 0.05
csys = 0.51 CPU)
Result: PASS
Great! It’s time for another commit.
$ git commit -m "Add recommended test-stage dependencies" Makefile.PL
[main 819c069] Add recommended test-stage dependencies
1 file changed, 12 insertions(+)
Now that we’re sure our test suite is working properly (and we’ve got a
clean working directory), we can start developing Map::Tube::Hannover
by
… adding another test! But where to start? Fortunately for us, the
Map::Tube
docs mention a basic data validation
test as well as a
basic functional validation
test to ensure
that the input data makes sense and that basic map functionality is
available. That’s a nice starting point, so let’s do that.
Open your favourite editor and create a file called t/map-tube-hannover.t
and fill it with this code:[3]
use strict;
use warnings;
use Test::More;
use Map::Tube::Hannover;
use Test::Map::Tube;
ok_map(Map::Tube::Hannover->new);
ok_map_functions(Map::Tube::Hannover->new);
done_testing();
Running the test suite (but avoiding the author tests for now), we find that things aren’t working.
$ make test
PERL_DL_NONLAZY=1 "/home/cochrane/perl5/perlbrew/perls/perl-5.38.3/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
t/00-load.t ............ 1/? # Testing Map::Tube::Hannover 0.01, Perl 5.038003, /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/bin/perl
t/00-load.t ............ ok
t/manifest.t ........... skipped: Author tests not required for installation
t/map-tube-hannover.t .. Can't locate Test/Map/Tube.pm in @INC (you may need to install the Test::Map::Tube module) (@INC entries checked: /path/to/Map-Tube-Hannover/blib/lib /path/to/Map-Tube-Hannover/blib/arch /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/lib/site_perl/5.38.3/x86_64-linux /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/lib/site_perl/5.38.3 /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/lib/5.38.3/x86_64-linux /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/lib/5.38.3 .) at t/map-tube-hannover.t line 7.
BEGIN failed--compilation aborted at t/map-tube-hannover.t line 7.
t/map-tube-hannover.t .. Dubious, test returned 2 (wstat 512, 0x200)
No subtests run
t/pod-coverage.t ....... skipped: Author tests not required for installation
t/pod.t ................ skipped: Author tests not required for installation
Test Summary Report
-------------------
t/map-tube-hannover.t (Wstat: 512 (exited 2) Tests: 0 Failed: 0)
Non-zero exit status: 2
Parse errors: No plan found in TAP output
Files=5, Tests=1, 0 wallclock secs ( 0.04 usr 0.01 sys + 0.38 cusr 0.07 csys = 0.50 CPU)
Result: FAIL
Failed 1/5 test programs. 0/1 subtests failed.
make: *** [Makefile:851: test_dynamic] Error 255
This is completely ok: we expected that the tests wouldn’t pass. We’re
using the tests to help guide us as we slowly build the
Map::Tube::Hannover
module.
The first error we have is:
Can't locate Test/Map/Tube.pm in @INC (you may need to install the Test::Map::Tube module)
As the message says, we can try to get further by installing
Test::Map::Tube
:
$ cpanm Test::Map::Tube
This will install almost 90 distributions in a freshly-built Perl, so you
might want to go and have a walk or get an appropriate beverage while
cpanm
does its thing.
Welcome
back! Now that the
next set of dependencies has been installed, we make a mental note to add
Test::Map::Tube
to the list of required test dependencies in
Makefile.PL
. Then we try running the tests again:
$ make test
PERL_DL_NONLAZY=1 "/home/cochrane/perl5/perlbrew/perls/perl-5.38.3/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
t/00-load.t ............ 1/? # Testing Map::Tube::Hannover 0.01, Perl 5.038003, /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/bin/perl
t/00-load.t ............ ok
t/manifest.t ........... skipped: Author tests not required for installation
t/map-tube-hannover.t .. Can't locate object method "new" via package "Map::Tube::Hannover" at t/map-tube-hannover.t line 9.
t/map-tube-hannover.t .. Dubious, test returned 255 (wstat 65280, 0xff00)
No subtests run
t/pod-coverage.t ....... skipped: Author tests not required for installation
t/pod.t ................ skipped: Author tests not required for installation
Test Summary Report
-------------------
t/map-tube-hannover.t (Wstat: 65280 (exited 255) Tests: 0 Failed: 0)
Non-zero exit status: 255
Parse errors: No plan found in TAP output
Files=5, Tests=1, 0 wallclock secs ( 0.05 usr 0.00 sys + 0.67 cusr 0.08 csys = 0.80 CPU)
Result: FAIL
Failed 1/5 test programs. 0/1 subtests failed.
make: *** [Makefile:857: test_dynamic] Error 255
This time we’ve got a problem in the module we’re creating. There’s
something about a method new
not being available. If you have a look at
lib/Map/Tube/Hannover.pm
, you’ll find that it’s filled with lots of docs,
but there’s almost no code. How do we solve this? Well, the hint is in the
error message above:
Can't locate object method "new"
If we see words like “object” and “method”, this means we’re dealing with
object orientation.[4] Thus, we need to turn our package
into a class so that the failing test can call a new
method and hence
create an instance of the Map::Tube::Hannover
class. There are several
ways to create classes in Perl, so which one do we use? The hint is in the
first sentence of Map::Tube
’s
DESCRIPTION:
The core module defined as Role (Moo) to process the map data.
In other words, we need to use Moo
for
object orientation. This should have been installed along with
Test::Map::Tube
, but just in case it wasn’t, you can install it with
cpanm
:
$ cpanm Moo
To use Moo
to turn our package into a class, we only need to import it.
Open lib/Map/Tube/Hannover.pm
in your favourite editor and add the line
use Moo;
just after the use warnings;
statement.
We don’t really need to run the full test suite each time we’re developing
this code, so let’s use prove
on only the t/map-tube-hannover.t
test
file instead:
$ prove -lr t/map-tube-hannover.t
t/map-tube-hannover.t .. # Not a Map::Tube object
# Failed test 'An object'
# at /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/lib/site_perl/5.38.3/Test/Map/Tube.pm line 196.
# Looks like you failed 1 test of 1.
t/map-tube-hannover.t .. 1/?
# Failed test 'ok_map_data'
# at t/map-tube-hannover.t line 9.
Don't know how to access underlying map data at /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/lib/5.38.3/Test/Builder.pm line 374.
# Tests were run but no plan was declared and done_testing() was not seen.
# Looks like your test exited with 255 just after 1.
t/map-tube-hannover.t .. Dubious, test returned 255 (wstat 65280, 0xff00)
Failed 1/1 subtests
Test Summary Report
-------------------
t/map-tube-hannover.t (Wstat: 65280 (exited 255) Tests: 1 Failed: 1)
Failed test: 1
Non-zero exit status: 255
Parse errors: No plan found in TAP output
Files=1, Tests=1, 1 wallclock secs ( 0.04 usr 0.00 sys + 0.36 cusr 0.02 csys = 0.42 CPU)
Result: FAIL
The tests still aren’t passing, but that’s ok, we’re getting somewhere. The important part here is:
# Not a Map::Tube object
Ok, so how do we make this into a Map::Tube
object? We use the with
statement from Moo
, which
Composes one or more Moo::Role (or Role::Tiny) roles into the current class.
Add the following code under the use Moo;
statement we added earlier:
with 'Map::Tube';
Running the test again will still fail, but this time we get a different error:[5]
$ prove -lr t/map-tube-hannover.t
t/map-tube-hannover.t .. ERROR: Can't apply Map::Tube role, missing 'xml' or 'json'. at /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/lib/site_perl/5.38.3/Map/Tube.pm line 148.
t/map-tube-hannover.t .. Dubious, test returned 255 (wstat 65280, 0xff00)
No subtests run
Test Summary Report
-------------------
t/map-tube-hannover.t (Wstat: 65280 (exited 255) Tests: 0 Failed: 0)
Non-zero exit status: 255
Parse errors: No plan found in TAP output
Files=1, Tests=0, 1 wallclock secs ( 0.03 usr 0.01 sys + 0.41 cusr 0.04 csys = 0.49 CPU)
Result: FAIL
The central issue is here:
Can't apply Map::Tube role, missing 'xml' or 'json'.
What does that mean?
We’ve arrived at the core of the problem we’re trying to solve: we now need to create the input map file describing the railway network. This file can be either XML or JSON formatted, hence why the error message mentions that there is missing XML or JSON.
To load the map file, we need to define either a json()
or xml()
method,
depending upon the format we’ve chosen. The map file defines the lines and
stations associated with our railway network and their connections.
One pattern is to place the map file in a share/
directory in the
project’s base directory and to load it lazily by defining the respective
json()
or xml()
method with the is
option set to lazy
, i.e.
has json => (is => 'lazy');
or
has xml => (is => 'lazy');
Because this is “lazy”, we need to define the builder method as well, e.g. for JSON-formatted files:
sub _build_json { dist_file('Map-Tube-Hannover', 'hannover-map.json') }
or for XML-formatted files:
sub _build_xml { dist_file('Map-Tube-Hannover', 'hannover-map.xml') }
It’s also possible to do this in one step, which is the approach that I prefer and which we’ll discuss now.
Another pattern for loading Map::Tube map files is to set the default option in the json() or xml() method, passing a sub which returns the file’s location. I found this to be a more direct approach and hence have used this pattern here.
As mentioned above, one usually places this file in a directory called share/ located in the project’s root directory. What’s not always clear is how we should name this file or how we connect it to the Map::Tube::<whatever> class. In the end, it doesn’t matter and one can simply follow the pattern used in e.g. Map::Tube::London, i.e. call the file something like <city-name>-map.json.
How to connect this file to the Map::Tube::<whatever> class is described in the Map::Tube::Cookbook WORK WITH A MAP documentation. The trick is to create a getter called json()6 which returns the name of the input file. If you use the share/ directory pattern, you can use the File::Share module to get the location within the dist easily.
Let’s implement this now. Create the share/ directory and then create an empty input map file by touching it:
$ mkdir share
$ touch share/hannover-map.json
Now we import the dist_file function from the File::Share module by adding the following code after the use Moo; statement:7
use File::Share qw(dist_file);
Note that to be able to use this module, we’ll have to install it:
$ cpanm File::Share
We’ll also have to make another mental note to add this as a prerequisite in our Makefile.PL. We’ll get around to that later.
Further down the module, remove the stub function1 and function2 definitions that module-starter created for us and replace them with the recommended json getter:
has json => (
    is => 'ro',
    default => sub {
        return dist_file('Map-Tube-Hannover', 'hannover-map.json')
    }
);
Running the test file gives a new error! Yay!
$ prove -lr t/map-tube-hannover.t
t/map-tube-hannover.t .. Map::Tube::_init_map(): ERROR: Malformed Map Data (/path/to/Map-Tube-Hannover/share/hannover-map.json): malformed JSON string, neither array, object, number, string or atom, at character offset 1 (before "(end of string)")
(status: 126) file /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/lib/site_perl/5.38.3/Map/Tube.pm on line 151
t/map-tube-hannover.t .. Dubious, test returned 255 (wstat 65280, 0xff00)
No subtests run
Test Summary Report
-------------------
t/map-tube-hannover.t (Wstat: 65280 (exited 255) Tests: 0 Failed: 0)
Non-zero exit status: 255
Parse errors: No plan found in TAP output
Files=1, Tests=0, 1 wallclock secs ( 0.03 usr 0.00 sys + 0.44 cusr 0.02 csys = 0.49 CPU)
Result: FAIL
We seem to have malformed map data. That’s to be expected because the input file is empty.
Since it’s JSON, it’ll need some curly braces in it at the very least. Let’s add some to it and see what happens:
$ echo "{}" > share/hannover-map.json
$ prove -lr t/map-tube-hannover.t
t/map-tube-hannover.t .. Map::Tube::_validate_map_structure(): ERROR: Invalid line structure in map data. (status: 128) file /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/lib/site_perl/5.38.3/Map/Tube.pm on line 151
t/map-tube-hannover.t .. Dubious, test returned 255 (wstat 65280, 0xff00)
No subtests run
Test Summary Report
-------------------
t/map-tube-hannover.t (Wstat: 65280 (exited 255) Tests: 0 Failed: 0)
Non-zero exit status: 255
Parse errors: No plan found in TAP output
Files=1, Tests=0, 0 wallclock secs ( 0.03 usr 0.00 sys + 0.46 cusr 0.01 csys = 0.50 CPU)
Result: FAIL
Another different error! Nice. We don’t want to be crawling forward like this all day, though. We need some real data in this file and with the correct structure. Fortunately, both the Map::Tube JSON docs and the Map::Tube::Cookbook formal requirements for maps describe this for us nicely.
Our basic structure will need a name and a lines object containing a line array of all lines in our railway network. We’ll also need a stations object containing a station array of all stations in the network and how they are connected to the lines. Phew! That was a mouthful! How does that look in practice? Let’s implement it!
Open the map file (share/hannover-map.json) in your favourite editor and enter the following data structure:
{
"name" : "Hannover",
"lines" : {
"line" : [
{
"id" : "L1",
"name" : "Linie 1"
}
]
},
"stations" : {
"station" : [
{
"id" : "H1",
"name" : "Hauptbahnhof",
"line" : "L1",
"link" : "H1"
}
]
}
}
This creates a map called Hannover, with one line (called Linie 1) and one station on that line (Hauptbahnhof). The link attribute must be set, hence we’ve set it to point to the station itself. I expect this to give an error because links should be between stations, not to themselves. However, this is the smallest basic example that I could think of. The station’s ID, H1, that I’ve used here doesn’t represent Hauptbahnhof 1 (as one could mistake it to mean) but means Hannover 1 because this will be the first station in the Hannover network.
Let’s see what the tests now tell us.
$ prove -lr t/map-tube-hannover.t
t/map-tube-hannover.t .. # Line id L1 defined but serves only one station
# Failed test 'Hannover'
# at /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/lib/site_perl/5.38.3/Test/Map/Tube.pm line 196.
# Station ID H1 links to itself
# Failed test 'Hannover'
# at /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/lib/site_perl/5.38.3/Test/Map/Tube.pm line 196.
# Looks like you failed 2 tests of 14.
t/map-tube-hannover.t .. 1/?
# Failed test 'ok_map_data'
# at t/map-tube-hannover.t line 9.
Map::Tube::get_shortest_route(): ERROR: Missing Station Name. (status: 100) file /home/cochrane/perl5/perlbrew/perls/perl-5.38.3/lib/site_perl/5.38.3/Map/Tube.pm on line 193
# Failed test at t/map-tube-hannover.t line 10.
# got: 0
# expected: 1
# Looks like you failed 2 tests of 2.
t/map-tube-hannover.t .. Dubious, test returned 2 (wstat 512, 0x200)
Failed 2/2 subtests
Test Summary Report
-------------------
t/map-tube-hannover.t (Wstat: 512 (exited 2) Tests: 2 Failed: 2)
Failed tests: 1-2
Non-zero exit status: 2
Files=1, Tests=2, 1 wallclock secs ( 0.03 usr 0.01 sys + 0.46 cusr 0.07 csys = 0.57 CPU)
Result: FAIL
As I guessed, this still gives us an error. Even so, we’re getting somewhere. Focusing on the first error:
Line id L1 defined but serves only one station
we see we’ve been told that the line defined by the ID L1 only serves one station (true, it does, but that’s something we’ll change soon). We’ve also been told that the station referred to by the ID H1 links to itself,
Station ID H1 links to itself
which is what we already thought was dodgy. It’s nice that the basic validation test checks such things!
Ok, let’s add another station to see what happens. In our share/hannover-map.json map file, we extend the network to include the station Langenhagen8 and we change the links so that the stations connect to one another. The map file now looks like this:
{
"name" : "Hannover",
"lines" : {
"line" : [
{
"id" : "L1",
"name" : "Linie 1"
}
]
},
"stations" : {
"station" : [
{
"id" : "H1",
"name" : "Hauptbahnhof",
"line" : "L1",
"link" : "H2"
},
{
"id" : "H2",
"name" : "Langenhagen",
"line" : "L1",
"link" : "H1"
}
]
}
}
A note for anyone familiar with Hannover and its tram system: yes, the stations Hauptbahnhof and Langenhagen are on the same line (Linie 1), however, they are not directly linked to one another. Langenhagen is the final station along that line heading northwards; Hauptbahnhof is effectively the middle of the entire network. We’ll flesh out a more full version of the network as we go along.
Running the tests this time gives:
$ prove -lr t/map-tube-hannover.t
t/map-tube-hannover.t .. ok
All tests successful.
Files=1, Tests=1, 1 wallclock secs ( 0.03 usr 0.01 sys + 0.50 cusr 0.02 csys = 0.56 CPU)
Result: PASS
Success!! Go and have a bit of a dance! You’ve created your first functional Map::Tube map! :tada:
Now things get interesting. We can start adding new lines and stations and start linking them together. Then we can see how to use Map::Tube::Hannover to find routes between stations and even show a graph of the railway network.
Let’s not get too far ahead of ourselves though. Let’s stay calm and focused and take things one step at a time.
But first, we’ve got some unfinished business. We’ve added some modules as dependencies, so we need to ensure that our Makefile.PL includes them and commit that change. We also need to add our first iteration of the map file to the Git repository, as well as the code which integrates it into the Map::Tube framework and its test. To work!
If you recall, the first module we added was Test::Map::Tube. We need to add this to the TEST_REQUIRES key in the %WriteMakefileArgs hash. Open Makefile.PL and extend TEST_REQUIRES to look like this:
TEST_REQUIRES => {
'Test::More' => '0',
'Test::Map::Tube' => '0',
},
Note that the Test::More requirement was already present. We’ve specified the version number for Test::Map::Tube to be '0' because this will give us the latest version.
The remaining dependencies are “prerequisite Perl modules”, hence we need to set the PREREQ_PM hash key in %WriteMakefileArgs. Change the initial value from
PREREQ_PM => {
#'ABC' => '1.6',
#'Foo::Bar::Module' => '5.0401',
},
to
PREREQ_PM => {
'File::Share' => '0',
'Map::Tube' => '0',
'Moo' => '0',
},
where we’ve again chosen to select the most recent versions of the respective modules by setting their version number to '0'. Technically, we don’t need to add the Map::Tube dependency because it’s pulled in by Test::Map::Tube. Still, it’s a good idea to add this dependency explicitly as this ends up in the project metadata, informing your users and any tools such as MetaCPAN, CPANTS and CPAN testers what is required to build and run the module. Also, I’ve listed the prerequisites alphabetically so that it’s easier to find and update this list in the future.
Looking at the diff for these changes, you should see something like this:
$ git diff Makefile.PL
diff --git a/Makefile.PL b/Makefile.PL
index b889368..22afd9a 100644
--- a/Makefile.PL
+++ b/Makefile.PL
@@ -14,11 +14,13 @@ my %WriteMakefileArgs = (
'ExtUtils::MakeMaker' => '0',
},
TEST_REQUIRES => {
- 'Test::More' => '0',
+ 'Test::More' => '0',
+ 'Test::Map::Tube' => '0',
},
PREREQ_PM => {
- #'ABC' => '1.6',
- #'Foo::Bar::Module' => '5.0401',
+ 'File::Share' => '0',
+ 'Map::Tube' => '0',
+ 'Moo' => '0',
},
dist => { COMPRESS => 'gzip -9f', SUFFIX => 'gz', },
clean => { FILES => 'Map-Tube-Hannover-*' },
Let’s commit that change:
$ git commit -m "Add base and test deps for first working example" Makefile.PL
[main e4e6f93] Add base and test deps for first working example
1 file changed, 5 insertions(+), 3 deletions(-)
The remaining changes are all interrelated. The change to import the relevant third-party modules into our main module, the addition of the input map file, the code which links this to Map::Tube, as well as the test file, are all sufficiently related that it makes sense to bundle all these changes into a single commit.9
$ git add t/map-tube-hannover.t share/hannover-map.json lib/Map/Tube/Hannover.pm
$ git commit -m "
> Add initial minimal working map input file
>
> This is a first-cut implementation of the railway network for Hannover.
> Note that this is *not* intended to reflect the real-world situation just yet.
> I've chosen to use station names here which make the initial validation tests
> pass and which vaguely reflect the nature of the network itself. Both
> stations do exist on Linie 1, however are separated by several other stations
> in reality. Since the validation tests pass, we know that things are wired
> up to the Map::Tube framework properly."
[main fb94aab] Add initial minimal working map input file
Date: Sun Mar 30 20:02:06 2025 +0200
4 files changed, 55 insertions(+), 14 deletions(-)
create mode 100644 share/hannover-map.json
create mode 100644 t/map-tube-hannover.t
Note that you should not enter the greater-than signs at the beginning of each line of the commit message entered above. These are the line continuation markers shown by the shell. In other words, if you’re following along and want to enter the commit message shown above, you will need to remove the > (including the space) from the text.
That should do for today! We got a lot done! We created a new module from scratch and then used test-driven development to create the fundamental structure for Map::Tube::Hannover while also creating the most basic Map::Tube map file we could.
In the second post in the series, we’ll carefully extend the network to create a full line and then create a graph of the stations. Until then!
Originally posted on https://peateasea.de.
Image credits: Wikimedia Commons, Noto-Emoji project, Wikimedia Commons
Thumbnail credits: Swiss Cottage Underground Station (Jubilee Line) by Hugh Llewelyn
Anyone who knows me knows that I despise inline commit messages made with git commit -m "". So why am I using them here? Well, I want to keep the discussion moving and I feel that describing the full commit message entering process would disturb the flow too much. My advice: in real life, describe the “what” of the change in the commit message’s subject line and the “why” in the body. Taking the time to write a good commit message (explaining the “why” of the change) will save you and your colleagues sooo much time and pain in the future!
Note that the example code in the Map::Tube documentation doesn’t specify an explicit test plan, nor does it end the tests with done_testing(). Consequently, you’ll find that the tests will fail with the error:
Tests were run but no plan was declared and done_testing() was not seen.
This is why I’ve added done_testing(); to the test code I present here.
This assumes that the file is JSON-formatted. If you create an XML-formatted input file then you’ll need to create a getter called xml().
Why only import the dist_file() function and not use the ‘:all’ option as mentioned in the File::Share documentation? Well, we don’t need all the functions, so don’t import them. See also perlimports.
I can see where one might want to commit on an even finer-grained scale. For instance, one could split the commits up like so:
- Import the third-party modules into the main module file.
- Remove the stub functions.
- Add the test file, the input map file and the json() getter.
Such decisions are a matter of taste and in this case, I think the commit I’ve made is sufficiently atomic for our purposes.
Published by Gabor Szabo on Monday 21 April 2025 06:27
Originally published at Perl Weekly 717
Hi there!
First of all I need to apologize. Last week we mistakenly included a link to a crypto scam. Clearly we have to be more careful with our inclusion policy. Thanks to the Pull-Request of Dave Cross the link was removed from our web site within a few hours. The strange thing is that the post is still up on Medium. I would have thought they would want to avoid having scams on their website.
Last week Mohammad wrote that he keeps planning to be involved in the MetaCPAN project. As far as I remember it has two major parts written in Perl. The back-end is indeed hard to set up, but I think the front-end is quite easy, and that part is also written in Perl. I have a bunch of posts, some of them videos, about MetaCPAN, including directions on how to contribute to it. I think it is also a very good feeling once your contribution is accepted and goes live on the site. Besides, you get on the list of contributors.
Have a nice week!
--
Your editor: Gabor Szabo.
Pretty Mandelbrot fractal with Juliagraph in Perl.
A cute one-liner in Perl with explanation.
Couple of experimental features added to Map::Tube.
Map::Tube::<*> maps, a HOWTO: first steps
Building a map for the tram network in Hannover, Germany.
This problem has been around for decades.
"Claude Code" is a new product from Anthropic. It's a terminal-based app that allows you to interact with Claude, Anthropic's name for its AI bot.
A newbie asking for general directions in Perl with lots of responses.
Removing leading and trailing white-spaces.
The Weekly Challenge by Mohammad Sajid Anwar will help you step out of your comfort-zone. You can even win prize money of $50 by participating in the weekly challenge. We pick one champion at the end of the month from among all of the contributors during the month, thanks to the sponsor Lance Wicks.
Welcome to a new week with a couple of fun tasks "Group Position" and "Reverse Equals". If you are new to the weekly challenge then why not join us and have fun every week. For more information, please read the FAQ.
Enjoy a quick recap of last week's contributions by Team PWC dealing with the "Acronyms" and "Friendly Strings" tasks in Perl and Raku. You will find plenty of solutions to keep you busy.
Compact and concise solutions in Perl, nothing much left for discussion. Keep up the great work.
Cool and fancy one-liner in Perl, big fan of compact solution. Thanks for sharing knowledge with us.
Repeat of week 240, not again. How did I miss that? Well, identical solutions. Enjoy if you are new to Raku.
Interesting use of lvalue, you don't want to skip it. Highly recommended.
Raku powerful function chaining is on display. You can't afford to miss it. Keep sharing.
Special regex dealing with an interesting edge case. Good catch, worth checking.
Raku is flexing muscle as always. Well done and keep it up.
In-house one-liner in Perl is not going to miss the golden opportunity. Highly recommended.
Power of zip function can make job very easy. Love the story around it. Great work, keep sharing.
Interesting edge case and work around. Very engaging discussion, don't forget to try DIY tool.
Pure Perl solution without any gimmicks. Thanks for sharing knowledge with us.
Reading colorful code is very tempting. Syntax highlighting makes the blog very attractive. And above all, Python is doing the magic. Well done.
Join Dave Cross, author of Data Munging With Perl, for an insightful talk marking the release of the second edition, nearly 25 years after the original publication.
Great CPAN modules released last week;
MetaCPAN weekly report.
Munich, Germany
Paris, France
Paris, France
Greenville, South Carolina, USA
You joined the Perl Weekly to get weekly e-mails about the Perl programming language and related topics.
Want to see more? See the archives of all the issues.
Not yet subscribed to the newsletter? Join us free of charge!
(C) Copyright Gabor Szabo
The articles are copyright the respective authors.
Yes, this is a Mandelbrot fractal with three heads. No cloning needed, just multiply z to the power of four and proceed as usual. Well, all this and so much more is contained in the latest release of Juliagraph 0.7. Intro here. All I wrote about the Cellgraph and Harmonograph applies again: more features, better controls and ... you can cruise the fractal by mouse.
Published on Sunday 20 April 2025 11:37
While looking at some old bash script that bumps my semantic versions I almost puked looking at my old ham-handed way of bumping the version. That led me to see how I could do it “better”. Why? I dunno…bored on a Saturday morning and not motivated enough to do the NY Times crossword…
So you want to bump a semantic version string like 1.2.3 - major, minor, or patch - and you don’t want ceremony. You want one line, no dependencies, and enough arcane flair to scare off coworkers.
Here’s a single-line Bash–Perl spell that does exactly that:
v=$(cat VERSION | p=$1 perl -a -F[.] -pe \
'$i=$ENV{p};$F[$i]++;$j=$i+1;$F[$_]=0 for $j..2;$"=".";$_="@F"')
The one-liner:
- reads the current version from the VERSION file (e.g. 1.2.3)
- takes the part index to bump from $1 (0 for major, 1 for minor, 2 for patch)
- leaves the bumped version in v
Wrap it like this in a shell function:
bump() {
    v=$(cat VERSION | p=$1 perl -a -F[.] -pe \
        '$i=$ENV{p};$F[$i]++;$j=$i+1;$F[$_]=0 for $j..2;$"=".";$_="@F"')
    echo "$v" > VERSION
}
Then run:
bump 2 # bump patch (1.2.3 => 1.2.4)
bump 1 # bump minor (1.2.3 => 1.3.0)
bump 0 # bump major (1.2.3 => 2.0.0)
Want to bump right from make?
p ?= 0  # part to bump: 0=major, 1=minor, 2=patch
bump-major:
	@v=$$(cat VERSION | p=$(p) perl -a -F[.] -pe '$$i=$$ENV{p};$$F[$$i]++;$$j=$$i+1;$$F[$$_]=0 for $$j..2;$$"=".";$$_="@F"') && \
	echo $$v > VERSION && echo "New version: $$v"
bump-minor:
	@$(MAKE) bump-major p=1
bump-patch:
	@$(MAKE) bump-major p=2
Or break it out into a .bump-version script and source it from your build tooling.
-a # autosplit into @F
-F[.] # split on literal dot
$i=$ENV{p} # get part index from environment (e.g., 1 for minor)
$F[$i]++ # bump it
$j=$i+1 # start index for resetting
$F[$_]=0 ... # zero the rest
$"="."; # join array with dots
$_="@F" # set output
If you have to explain this to some junior dev, just say RTFM, skippy: perldoc perlrun. Use the force, Luke.
And if the senior dev wags his finger and says UUOC, tell him Ego malum edo.
Published by /u/Patentsmatter on Sunday 20 April 2025 13:49
for me, cpanm refuses to install Net::SSLeay, which in turn means that LWP::Protocol::https cannot be installed either.
# Failed test 'X509V3_EXT_print nid=103 extended-cert.cert.pem:4'
# at t/local/32_x509_get_cert_info.t line 273.
# got: 'Full Name:
# URI:http://intermediate-ca.net-ssleay.example/crl1.crl
#
# Full Name:
# URI:http://intermediate-ca.net-ssleay.example/crl2.crl
# '
# expected: 'Full Name:
# URI:http://intermediate-ca.net-ssleay.example/crl1.crl
# Full Name:
# URI:http://intermediate-ca.net-ssleay.example/crl2.crl'
# Failed test 'X509V3_EXT_print nid=86 extended-cert.cert.pem:6'
# at t/local/32_x509_get_cert_info.t line 273.
# got: 'email:intermediate-ca@net-ssleay.example, URI:http://intermediate-ca.net-ssleay.example, DNS:intermediate-ca.net-ssleay.example, Registered ID:1.2.0.0, IP Address:192.168.0.1, IP Address:FD25:F814:AFB5:9873:0:0:0:1, othername: emailAddress:ica@net-ssleay.example'
# expected: 'email:intermediate-ca@net-ssleay.example, URI:http://intermediate-ca.net-ssleay.example, DNS:intermediate-ca.net-ssleay.example, Registered ID:1.2.0.0, IP Address:192.168.0.1, IP Address:FD25:F814:AFB5:9873:0:0:0:1, othername: emailAddress::ica@net-ssleay.example'
# Failed test 'X509V3_EXT_print nid=85 extended-cert.cert.pem:8'
# at t/local/32_x509_get_cert_info.t line 273.
# got: 'email:john.doe@net-ssleay.example, URI:http://johndoe.net-ssleay.example, DNS:johndoe.net-ssleay.example, Registered ID:1.2.3.4, IP Address:192.168.0.2, IP Address:FD25:F814:AFB5:9873:0:0:0:2, othername: emailAddress:jd@net-ssleay.example'
# expected: 'email:john.doe@net-ssleay.example, URI:http://johndoe.net-ssleay.example, DNS:johndoe.net-ssleay.example, Registered ID:1.2.3.4, IP Address:192.168.0.2, IP Address:FD25:F814:AFB5:9873:0:0:0:2, othername: emailAddress::jd@net-ssleay.example'
# Looks like you failed 3 tests of 746.
According to the [Metacpan issues page](https://github.com/radiator-software/p5-net-ssleay/issues), it seems the errors are persisting at least since November 2024.
Any suggestions for getting LWP to accept https connections?
Published by prz on Sunday 20 April 2025 13:58
This is the weekly favourites list of CPAN distributions. Votes count: 51
Week's winner: perl (+3)
Build date: 2025/04/20 11:57:45 GMT
Clicked for first time:
Increasing its reputation:
Published by /u/niceperl on Sunday 20 April 2025 08:24
Published by Simon Green on Sunday 20 April 2025 04:54
Each week Mohammad S. Anwar sends out The Weekly Challenge, a chance for all of us to come up with solutions to two weekly tasks. My solutions are written in Python first, and then converted to Perl. It's a great way for us all to practice some coding.
You are given an array of words and a word.
Write a script to return true if concatenating the first letter of each word in the given array matches the given word, return false otherwise.
For input from the command line, I take the last string as an acronym, and the rest as an array of words. My solution is two lines. It could be a one-liner, but that would make it less readable.
The first line takes the first letter of each word and combines them to create a variable called acronym. I then compare this case-insensitively with the given word.
def is_acronyms(array: list, word: str) -> bool:
    acronym = ''.join(s[0] for s in array)
    return acronym.lower() == word.lower()
The Perl code follows the same logic.
sub main (@array) {
    my $word = pop(@array);
    my $acronym = join( '', map { substr( $_, 0, 1 ) } @array );
    say lc($acronym) eq lc($word) ? 'true' : 'false';
}
$ ./ch-1.py Perl Weekly Challenge PWC
True
$ ./ch-1.py Bob Charlie Joe BCJ
True
$ ./ch-1.py Morning Good MM
False
You are given two strings.
Write a script to return true if swapping any two letters in one string matches the other string, return false otherwise.
The first thing I check is whether the strings are of the same length. If they are of different lengths, I know that the answer is False.
def friendly_strings(str1: str, str2: str) -> bool:
    if len(str1) != len(str2):
        return False
The next thing I check is whether the two strings are the same. If they are, a valid swap exists only when at least one letter appears more than once. If I take the string bitter for example, swapping the third and fourth letters leaves the string unchanged, so both strings remain equal. OTOH, if the string contains each letter only once - like abc - it isn't possible to swap two letters.
    if str1 == str2:
        # Counter comes from collections (imported at the top of the file)
        freq = Counter(str1)
        return any(i for i in freq.values() if i > 1)
The next step I do is get the position of the letters where the character is different between the two strings.
    differences = [i for i in range(len(str1)) if str1[i] != str2[i]]
If there are exactly two differences, I check that the letters at those positions are in the opposite position in the other string, and return True if that is the case.
    if len(differences) == 2:
        pos1, pos2 = differences
        if str1[pos1] == str2[pos2] and str2[pos1] == str1[pos2]:
            return True
Finally, I return False if we haven't returned True previously.
    return False
$ ./ch-2.py desc dsec
True
$ ./ch-2.py fcyn fcny
True
$ ./ch-2.py poo eop
False
$ ./ch-2.py stripe sprite
True
$ ./ch-2.py bitter bitter
True
$ ./ch-2.py biter biter
False
Published on Sunday 20 April 2025 00:00
Published by Perl Steering Council on Friday 18 April 2025 15:03
We were all present.
CVE-2024-56406 is published and has been addressed by new point releases. Please upgrade or patch your perl promptly if affected. We thank Steve Hay, Andreas König and Stig Palmquist for doing the heavy lifting, as well as Nathan Mills for discovering the problem, and Karl Williams for providing the fix. We re-/learned a number of old and new lessons about the handling of security issues, which we will write up as new process for the PSC, the Perl Security Team, and the CPANSec group, to be jointly reviewed and agreed at the looming PTS.
We started winnowing this release cycle’s pull requests for potential release blockers. We briefly reviewed all 72 pull requests and identified 11 of interest for a closer look.
We reviewed the 2 new issues filed since last week for release blocker potential and put one of them on our list for closer review. We then started a closer examination of the 20 issues we identified as candidate blockers. We got through 5 issues, none of which we considered blockers.
Published by Saif Ahmed on Friday 18 April 2025 10:45
Stefan ( niner ) has now come to a conclusion of his efforts with RakuAST. This mammoth task started previously by Jonathan Worthington. In the time since his award of the grant, he has made 823 commits to RakuAST, and his overall contribution to Raku in the past couple of years is second only to the very prolific Elizabeth Mattijsen. His contributions can be viewed on github. It is impossible to describe all his activity with this project, and I imagine it will have taken much more than the 200 hours he had thought it would take in his original application. His commentary on the project is available on his own blog pages which also contains other interesting stuff. A summary of key activities can be extracted from Rakudo Weekly Blogs by Elizabeth, and these are shamelessly reproduced in reverse chronological order with links to the original blog pages, as they are representative of the vast scope of his work : -
Stefan Seifert basically concluded [his] work on the Raku bootstrap, with the number of test-files passing equalling the number of passing test-files in the non-bootstrapped Rakudo.
The number of passing test-files with the new Raku grammar are now 141/153 (make test +0) and 1299/1345 (make spectest +20).
Stefan Seifert fixed a potential segfault in generating object IDs, and an issue with signatures containing multiple slurpies, and an issue with the will trait.
Stefan Seifert started focusing on bootstrapping the new Raku grammar from scratch (whereas until now it assumed there was a working Raku available) as opposed to try fixing errors in roast. This work is available in a branch as of this writing, and the number of passing spectest files in this fully bootstrapped implementation of the Raku Programming Language is now already 1228 (out of 1345, as opposed to 1279 in the non-bootstrapped version). Another major step forward to making RakuAST mainstream!
Stefan Seifert also fixed quite a few issues (and that’s an understatement!) in the non-bootstrapped RakuAST as well.
Stefan Seifert continued working on RakuAST. The most significant fixes:
- BEGIN time call for non-simple constructs
- support for %?RESOURCES and $?DISTRIBUTION
- blocks as defaults for parameters
- many attribute and package stub issues
- added several warnings
- and many smaller fixes!
Stefan Seifert continued working on RakuAST. The most significant fixes:
- operators / terms defined as variables
- return with pair syntax
- several variable visibility issues at BEGIN time
- fixes to ss/// and S//
- several (sub-)signature and generics issues
- binding attributes in method arguments
- several issues related to categoricals
- support <|c> and <|w> assertions in regexes
- several return issues / return value issues
- progress in making require work
- and many, many, many more smaller fixes!
Stefan Seifert continued working on RakuAST. The most significant fixes:
- non-trivial lazy loops
- allow declaration of $_ in loops and other loop related fixes
- handling labels with loop structures
- a large number of regex related features, such as fixing LTM (Longest Token Match) matching and interpolation of attributes in regexes
- exceptions thrown in CHECK phasers
- support added for tr/// and TR///
- better handling of subroutine stubs
- and many, many more smaller fixes!
Stefan Seifert continued working on RakuAST and fixed some more issues with the phasers, multi-part named roles, language versions, where clauses on subsets and much more!
Stefan Seifert continued working on RakuAST and fixed issues with the will trait, CHECK phasers, the use variables pragma, multi regexes and much more!
Stefan Seifert continued working on RakuAST and produced more than 50 commits, fixing all of the remaining S03 tests and other issues.
Stefan Seifert changed the behaviour of throws-like (for the better) in light of compilation errors. Stefan Seifert continued working on RakuAST, fixing: error messages, operator properties on custom operators, several meta-operator and hypering issues, dispatch using .?, .+ and .*, adverbs on infixes, and more.
Stefan Seifert returned to RakuAST development and completed the work on the branch that took a new approach to compile time actions (really a GBR aka Great BEGIN Refactor), a branch started by Jonathan Worthington over a year ago. Stefan continued from there by fixing use fatal.
Published by SSilk on Friday 18 April 2025 07:57
I'm trying to make a simple Perl text table using Text::Table. I want pipes as the main column separator, hyphens for a horizontal rule, and plus signs where the rule crosses the column separator.
Here is a MWE:
use strict;
use warnings;
use Text::Table;
my $sep = \'│';
my $tb = Text::Table->new("Heading 1", $sep, "Heading 2", $sep, "Heading 3");
$tb->add("Row 1 Col 1", "Row 1 Col 2", "Row 1 Col 3");
$tb->add("Row 2 Col 1", "Row 2 Col 2", "Row 2 Col 3");
print $tb->title;
print $tb->rule('-','+');
print $tb->body;
I used the suggestion from the Text::Table documentation that "Another useful combo is $tb->rule( '-', '+'), together with separators that contain a single nonblank "|", for a popular representation of line crossings."
But it's giving me this output:
Heading 1 │Heading 2 │Heading 3
-----------+++-----------+++-----------
Row 1 Col 1│Row 1 Col 2│Row 1 Col 3
Row 2 Col 1│Row 2 Col 2│Row 2 Col 3
Notice there are three plus signs at each rule/column intersection rather than the expected one.
I've tested this in Text::Table 1.133 and 1.135 (latest).
What am I doing wrong?
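For what it's worth, one plausible diagnosis (my assumption; it is not stated in the question): the separator in the MWE is '│' (U+2502, a box-drawing character), not an ASCII pipe. Without use utf8, Perl sees that source literal as its three UTF-8 bytes, so Text::Table treats the separator as three characters wide, and rule('-', '+') draws one '+' per separator character - hence '+++'. A tiny core-Perl sketch of the byte/character distinction:

```perl
use strict;
use warnings;

my $bytes = "\xE2\x94\x82";  # the three bytes of U+2502, as Perl sees the
                             # source literal when `use utf8` is absent
print length($bytes), "\n";  # 3 -- looks like a three-character separator

use utf8;                    # from here on, source literals are decoded
my $char = "│";              # now a single character
print length($char), "\n";   # 1 -- a one-character separator
```

If this is the cause, adding use utf8; (plus binmode STDOUT, ':encoding(UTF-8)'; for clean output) to the MWE should collapse the three plus signs into one.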
Published by Mohammad Sajid Anwar on Friday 18 April 2025 07:51
Couple of experimental features added to Map::Tube.
Please check out the link below for more information.
https://theweeklychallenge.org/blog/map-tube-experimental
Published on Thursday 17 April 2025 20:08
The examples used here are from the weekly challenge problem statement and demonstrate the working solution.
You are given an array of words and a word. Write a script to return true if concatenating the first letter of each word in the given array matches the given word, return false otherwise.
Here’s our one subroutine, this problem requires very little code.
sub acronyms{
    my($word_list, $word) = @_;
    my @first_letters = map { (split //, $_)[0] } @{$word_list};
    return 1 if $word eq join q//, @first_letters;
    return 0;
}
Putting it all together...
The rest of the code just runs some simple tests.
MAIN:{
    say acronyms([qw/Perl Weekly Challenge/], q/PWC/);
    say acronyms([qw/Bob Charlie Joe/], q/BCJ/);
    say acronyms([qw/Morning Good/], q/MM/);
}
$ perl perl/ch-1.pl
1
1
0
You are given two strings. Write a script to return true if swapping any two letters in one string match the other string, return false otherwise.
Here’s the process we’re going to follow.
Now let’s check and see how many differences were found.
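Since the code fragments themselves are not shown in this excerpt, here is a sketch of how that difference check might look (my reconstruction; the sub name friendly matches the test driver below):

```perl
use strict;
use warnings;
use feature 'say';

# Collect the positions where the two strings differ; the strings are
# "friendly" when there are exactly two differences and swapping the
# characters at those positions reconciles them.
sub friendly {
    my ($s, $t) = @_;
    return 0 if length($s) != length($t);
    my @s = split //, $s;
    my @t = split //, $t;
    my @diffs = grep { $s[$_] ne $t[$_] } 0 .. $#s;
    return 0 unless @diffs == 2;
    my ($i, $j) = @diffs;
    return ($s[$i] eq $t[$j] && $s[$j] eq $t[$i]) ? 1 : 0;
}

say friendly(q/desc/, q/dsec/);     # 1
say friendly(q/cat/, q/dog/);       # 0
say friendly(q/stripe/, q/sprite/); # 1
```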
The rest of the code combines the previous steps and drives some tests.
MAIN:{
    say friendly q/desc/, q/dsec/;
    say friendly q/cat/, q/dog/;
    say friendly q/stripe/, q/sprite/;
}
$ perl perl/ch-2.pl
1
0
1
Published by The Perl and Raku Conference - Greenville, SC 2025 on Wednesday 16 April 2025 14:29
Published by alh on Wednesday 16 April 2025 07:27
Paul writes:
Hours:
- the `any` and `all` features
Total: 13 hours
Published on Saturday 12 April 2025 22:58
The examples used here are from the weekly challenge problem statement and demonstrate the working solution.
You are given a list of words. Write a script to find out whether the last character of each word is the first character of the following word.
This seems straightforward enough. One question is whether we need to only consider the words in their given order. We’ll assume so.
Here’s our one subroutine, this problem requires very little code.
sub circular{
    my $current = shift @_;
    my $current_last = (split //, $current)[length($current) - 1];
    do{
        my $previous_last = $current_last;
        $current = $_;
        my $current_first = (split //, $current)[0];
        $current_last = (split //, $current)[length($current) - 1];
        return 0 if $previous_last ne $current_first;
    } for @_;
    return 1;
}
Putting it all together...
The rest of the code just runs some simple tests.
MAIN:{
    say circular(qw/perl loves scala/);
    say circular(qw/love the programming/);
    say circular(qw/java awk kotlin node.js/);
}
$ perl perl/ch-1.pl
1
0
1
You are given two strings. Write a script to find out if one string is a subsequence of another.
A subsequence of a string is a new string that is formed from the original string by deleting some (can be none) of the characters without disturbing the relative positions of the remaining characters.
We’re going to do this in the shortest way possible, via a regular expression.
We’re going to construct the regular expression dynamically each time.
The shorter of the two strings will be what we test as the potential subsequence of the other longer one.
We’re going to have the work done in a single subroutine which determines which string to test, builds the regex, and runs it.
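Here is one way that subroutine might look (a reconstruction, since the fragment bodies are not shown in this excerpt):

```perl
use strict;
use warnings;
use feature 'say';

# Pick the shorter string as the candidate subsequence, turn it into a
# pattern like a.*e.*c, and match it against the longer string.
sub subsequence {
    my ($u, $v) = @_;
    ($u, $v) = ($v, $u) if length($u) > length($v);
    my $re = join q/.*/, map { quotemeta } split //, $u;
    return $v =~ m/$re/ ? 1 : 0;
}

say subsequence(q/uvw/, q/bcudvew/);    # 1
say subsequence(q/aec/, q/abcde/);      # 0
say subsequence(q/sip/, q/javascript/); # 1
```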
The rest of the code drives some tests.
MAIN:{
    say subsequence q/uvw/, q/bcudvew/;
    say subsequence q/aec/, q/abcde/;
    say subsequence q/sip/, q/javascript/;
}
$ perl perl/ch-2.pl
1
0
1
One of my pleasures in perl is learning the C language again. Something about the perl language makes it easier to write C, but while sharing the same space in my brain.
So how can I write a trivial program to write exactly one GB (2^30) of data to disk?
First in Perl (of course you prototype in Perl!). But since my C program is cleaner, here's the C program:
#include <stdio.h>
#include <assert.h>
// this simple program writes 1 GB of data, to a file called "somefile"
// setup 256 bytes of data
char data2[] =
"0x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x000x00";
// using `calc` from debian; do two simple calculations.
// answers are shown on even lines below
/* 2**30/256
4194304
4194304 / 1024
4096
*/
int main(void) {
    FILE* fp = fopen("somefile", "w");
    assert(fp);
    for (int i = 0; i < 4096*1024; i++)
        fwrite(data2, 256, 1, fp); // lol don't use sizeof (it would count the trailing NUL too)
    fclose(fp);
    return 0;
}
compile and run
$ cc writeGB.c -g
$ time ./a.out
Since I used bash’s time command, it tells me it worked in 1.991s.
$ stat somefile
stat tells me that the file is exactly 1073741824 bytes, which is what I expect.
now the perl version
#! /usr/bin/perl
use strict;
use warnings;
use 5.010;
# can perl write exactly a 2GB file? or whatever? this writes 1GB ... GiB
die unless open my $fhandl, ">somefile";
my $bytes = 2**30; # 1 GiB in bytes
my $data2 = "0x00" x 64; # 256 bytes, matching the C version
for (my $a=0; $a<$bytes ;$a+=256 ) {
print $fhandl $data2;
}
Using binmode doesn’t seem to speed it up … I thought it would have. Oddly enough, Perl is faster than the compiled C code, by about 5%. I am gobsmacked, but it must be buffering a few system calls.
BTW, less doesn’t like viewing the file due to the lack of newlines. I had to kill the process.
I can make sure the bytes are written to spinning rust by appending two lines to the perl source code
close $fhandl;
system 'sync', 'somefile'; # alternatively use File::Sync on cpan, (builds with XS) (sync is not flush)
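Perl can also verify the size itself via the -s file test operator, instead of shelling out to stat. A scaled-down sketch (1 KiB rather than 1 GiB, using a throwaway file name):

```perl
use strict;
use warnings;

# write four 256-byte chunks, then check the size without leaving Perl
open my $fh, '>', 'demofile' or die $!;
print $fh "x" x 256 for 1 .. 4;
close $fh;
my $size = -s 'demofile';
print "$size\n"; # 1024
unlink 'demofile';
```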
So, project finished. A file that’s exactly 1 GB (or GiB for the youngsters). Everything about this was easy and fun. Maybe I’ll turn it into a swap file.
Published by prz on Saturday 12 April 2025 20:47
Published on Friday 11 April 2025 14:58
Published by Ted James on Thursday 10 April 2025 17:18
As we discovered previously, your MIDI devices can be enhanced to function in different ways besides just triggering a single note per key (or pad) press.
Being a serial module creator, and with the help of that article’s author John, I bundled these concepts and more into a few handy CPAN packages that allow you to control your devices with minimal lines of code. So far, these are: MIDI::RtController, MIDI::RtController::Filter::Tonal, MIDI::RtController::Filter::Drums, and MIDI::RtController::Filter::CC.
With these, you can do lots of cool things to enhance your MIDI device with filters (special subroutines). These routines are then executed in real-time when a key or pad is pressed on your MIDI device.
First, let’s inspect the module MIDI::RtController itself.
Crucially, it has required input and output attributes that are turned into instances of MIDI::RtMidi::FFI::Device. The first is your controller. The second is your MIDI output, like fluidsynth, timidity, a virtual port, your DAW (“digital audio workstation”), etc.
Also, because RtController can operate asynchronously, it uses IO::Async::Loop and IO::Async::Channel. Within the module, the latter serves as MIDI in. This channel is listened to, and messages from the input device are processed by the known filters before being sent out.
The module’s public interface has four methods: add_filter, send_it, delay_send, and run.
#!/usr/bin/env perl
use v5.36;
use MIDI::RtController;

my $in    = $ARGV[0] || 'oxy'; # part of the name of the MIDI controller device
my $out   = $ARGV[1] || 'gs';  # part of the name of the MIDI output device
my $delay = 0.1;               # seconds between the delayed notes

my $rtc = MIDI::RtController->new(input => $in, output => $out);
$rtc->add_filter('pedal', [qw(note_on note_off)], \&pedal_tone);
$rtc->run;

sub pedal_notes ($note) {
    return 55, $note, $note + 7; # 55 = G below middle-C
}

sub pedal_tone ($port, $dt, $event) {
    my ($ev, $chan, $note, $vel) = $event->@*;
    my @notes = pedal_notes($note);
    my $delay_time = 0;
    for my $n (@notes) {
        $delay_time += $delay;
        $rtc->delay_send($delay_time, [ $ev, $chan, $n, $vel ]);
    }
    return 0;
}
The filter subroutine, “pedal_tone”, is called with the MIDI input port, a “delta-time” ($dt), and the MIDI event ($event). The event is first broken into its 4 parts, and the $note is used to compute and return the pedal_notes. Next the notes are played with a delay (but they could be played simultaneously with the send_it method instead).
First, let’s hear the unprocessed sound, to have a point of reference:
Ok. Here’s what the pedal-tone filter sounds like with roughly the same phrase:
Pretty different!
You can use this example program in the MIDI::RtMidi::FFI::Device distribution. Also, you can install and use the cross-platform program ReceiveMIDI, which is useful for many things.
Right now on my system, executing receivemidi list returns:
IAC Driver Bus 1
Synido TempoPAD Z-1
Logic Pro Virtual Out
And I start the fluidsynth program with:
fluidsynth -a coreaudio -m coremidi -g 2.0 ~/Music/soundfont/FluidR3_GM.sf2
Currently, I’m on my Mac, so this command tells fluidsynth that I’m using coreaudio for the audio driver, coremidi for the MIDI driver, 2.0 for the gain (because rendered MIDI playback is quiet), and finally my soundfont file.
You are in luck! There are currently tonal, drums, and control-change filters on CPAN. Each includes example programs (tonal and drums respectively). Here is an example of one of the simpler tonal filters:
#!/usr/bin/env perl
use curry;
use MIDI::RtController ();
use MIDI::RtController::Filter::Tonal ();

my $input_name  = shift || 'tempopad'; # midi controller device
my $output_name = shift || 'fluid';    # fluidsynth

my $rtc = MIDI::RtController->new(
    input  => $input_name,
    output => $output_name,
);
my $rtf = MIDI::RtController::Filter::Tonal->new(rtc => $rtc);
$rtc->add_filter('pedal', [qw(note_on note_off)], $rtf->curry::pedal_tone);
$rtc->run;
By the way, curry allows us to refer to an object-oriented method as a CODE reference in a smooth way.
And yes, this pedal_tone routine is the same as the previous one above - just OO now.
If you would like to craft your own musical or control filters, you can use MIDI::RtController::Filter::Math as a spring-board, point-of-reference example. This implements a “stair-step” filter (detailed below). Here is an example of that in action:
#!/usr/bin/env perl
use curry;
use MIDI::RtController ();
use MIDI::RtController::Filter::Math ();

my $input_name  = shift || 'tempopad'; # midi controller device
my $output_name = shift || 'fluid';    # fluidsynth

my $rtc = MIDI::RtController->new(
    input  => $input_name,
    output => $output_name,
);
my $rtf = MIDI::RtController::Filter::Math->new(rtc => $rtc);
# $rtf->delay(0.15); # slow down the delay time
# $rtf->feedback(6); # increase the number of steps
$rtc->add_filter('stair', [qw(note_on note_off)], $rtf->curry::stair_step);
$rtc->run;
And here’s what that sounds like:
Wacky! It’s like you’re Liberace, but crazier.
First up: MIDI::RtController::Filter::Math is a Moo module, but any OO will do the job. Second: attributes are defined for all the parameters our filter routine(s) will need, like feedback for instance:
has feedback => (
    is      => 'rw',       # changeable on-the-fly
    isa     => Num,        # as defined by a Types::* module
    default => sub { 1 },  # single echo default
);
Please see the source for these.
The public object-oriented routine, stair_step, uses a private _stair_step_notes local method and the delay_send RtController method. The first decides what notes we will play, and the second sends a MIDI event to the MIDI output device, with a number of (usually fractional) seconds to delay output. So we gather the notes (more on this in a bit), then play them one at a time with a steadily incrementing delay time. Lastly we return false AKA 0 (zero), so that RtController knows to continue processing other filters.
sub stair_step ($self, $dt, $event) {
    my ($ev, $chan, $note, $vel) = $event->@*;
    my @notes = $self->_stair_step_notes($note);
    my $delay_time = 0;
    for my $n (@notes) {
        $delay_time += $self->delay;
        $self->rtc->delay_send($delay_time, [ $ev, $self->channel, $n, $vel ]);
    }
    return 0;
}
For this particular “stair-step” filter, notes are played from the beginning event note, given the up and down attributes. Each note is first incremented by the up value, then the next note is decremented by the value of down - rinse, repeat. The value of feedback determines how many steps will be made. (You may notice that the object channel is used instead of the event $chan. This is done in order to change channels regardless of the MIDI input device channel setting.)
Lastly, here is the subroutine that computes the notes to play:
sub _stair_step_notes ($self, $note) {
    my @notes;
    my $factor;
    my $current = $note;
    for my $i (1 .. $self->feedback) {
        if ($i % 2 == 0) {
            $factor = ($i - 1) * $self->down;
        }
        else {
            $factor = $i * $self->up;
        }
        $current += $factor;
        push @notes, $current;
    }
    return @notes;
}
You can soup-up your MIDI controller with this code, to fabulous effect and without much ado!
And for a more complete, real-world example (that is a work-in-progress), please see the code in rtmidi-callback.pl.
(And personally, I just rediscovered my MIDI Rock joystick controller and am very anxious to make an app like this, for it. Woo!)
Happy controlling!
Published by Unknown on Sunday 06 April 2025 23:51
This is the weekly favourites list of CPAN distributions. Votes count: 32
Week's winner: JQ::Lite (+3)
Build date: 2025/04/06 21:48:00 GMT
Clicked for first time:
Increasing its reputation:
Published on Sunday 06 April 2025 17:29
The examples used here are from the weekly challenge problem statement and demonstrate the working solution.
You are given a list of words and a character. Write a script to return the index of word in the list where you find the given character.
This can be done in essentially one line. Rather than write a true Perl one-liner for the command line though, we’ll package this into a single subroutine.
Here’s our one subroutine.
sub find_words{
    my($s, $c) = @_;
    return grep { $s->[$_] =~ m/$c/ } 0 .. @{$s} - 1;
}
Putting it all together...
The rest of the code just runs some simple tests.
MAIN:{
    say q/(/ . join(q/, /, find_words([q/the/, q/weekly/, q/challenge/], q/e/)) . q/)/;
    say q/(/ . join(q/, /, find_words([q/perl/, q/raku/, q/python/], q/p/)) . q/)/;
    say q/(/ . join(q/, /, find_words([q/abc/, q/def/, q/bbb/, q/bcd/], q/b/)) . q/)/;
}
$ perl perl/ch-1.pl
(0, 1, 2)
(0, 2)
(0, 2, 3)
You are given a sentence and two words. Write a script to return all words in the given sentence that appear in sequence to the given two words.
Similar to the first part, this will be a single short subroutine. We’re just going to loop over the words and match as we go. There are two small things to note here: we strip out any punctuation from our sentence, and the empty string q// is considered by Perl to be a false value. The latter matters only because it is how we initialize $next.
sub find_third{
    my ($s, $first, $second) = @_;
    $s =~ s/[[:punct:]]//g;
    my @thirds = ();
    my($previous, $current, $next) = (q//, q//, q//);
    do{
        push @thirds, $_ if $next;
        $current = $_;
        $next = 1 if $previous eq $first && $current eq $second;
        $next = 0 unless $previous eq $first && $current eq $second;
        $previous = $current;
    } for split q/\s+/, $s;
    return @thirds;
}
The rest of the code drives some tests.
MAIN:{
    say q/(/ . join(q/, /, find_third(q/Perl is a my favourite language but Python is my favourite too./, q/my/, q/favourite/)) . q/)/;
    say q/(/ . join(q/, /, find_third(q/Barbie is a beautiful doll also also a beautiful princess./, q/a/, q/beautiful/)) . q/)/;
    say q/(/ . join(q/, /, find_third(q/we will we will rock you rock you./, q/we/, q/will/)) . q/)/;
}
$ perl perl/ch-2.pl
(language, too)
(doll, princess)
(we, rock)
I'm currently sitting in a TGV doing 300km/h from Marseille to Paris, traveling back home from the Koha Hackfest, hosted by BibLibre.
This year I did a lot of QA, which means reviewing patches, running their test plan, verifying that everything works and finally signing off the patches and marking the bug as "Passed QA". The process is documented in the wiki. According to the scoreboard I QA'ed 8 bugs (the second highest number!). After the third or fourth time I did not even have to look up all the steps anymore.
I moderated a short panel on ElasticSearch, because I found some weird behaviors on which I needed feedback from the experts. This resulted in a bunch of new "bugs" (Koha speak for issues, in this case a mix of actual bugs and feature requests): 39494, 39548, 39549, 39551, 39552.
I did a rather detailed review of 37020 - bulkmarcimport gets killed when inserting large files. The problem here is that the current code uses MARC::Batch, which does some horrible regex "parsing" of XML to implement a stream parser (so it can handle large files without using ALL the RAM) (see more details at the end of this post). But a recent change added a check-step which validates the records and puts the valid ones onto an Perl array. Which now again takes up ALL the RAM. I reviewed the two proposed patches, but I think we should use XML::LibXML::Reader directly, which should result in cleaner, faster, less-RAM-using and correct code.
I also participated in various other discussions and hope to have provided some helpful ideas & feedback from my still semi-external Koha perspective and semi-extensive knowledge of other environments and projects (I have been doing this "web dev" stuff for quite some time now..).
After helping Clemens set up L10N on his KTD setup, I submitted a doc patch to KTD explaining the SKIP_L10N setup and hopefully making the general L10N setup a bit clearer. I generally try to improve the docs if I hit a problem and was able to fix it. Give it a try the next time; it's very rewarding!
I could also provide some Perl help to various other attendees. But I still failed most of the questions in Joubu's Perl quiz. My excuse is that I trained my brain on writing only good/sane/nice Perl, so that I forgot how to parse all the weird corner cases...
But the Hackfest is not only about hacking; there's also the "fest" part (or party?). I really enjoyed hanging out with the other attendees on the terrace during lunch in the sun. The food was as usual excellent and not too unhealthy (of course depending on how much cheese one is able to stack onto one's plate). The evenings at various bars and restaurants were fun and entertaining (even though I did manage to go to bed early enough this year, and hardly had any alcohol).
I did not do any sightseeing or even just walking around Marseille this year. I blame the fact that our hotel was very near to the venue and most of the after-hack locations. And I didn't bring my swimming trunks so I was not motivated to go to the beach (but I've ticked that off last year..)
I had a lot of nice chats with old and new friends on topics ranging from the obvious (A.I., the sorry state of the world, Koha, Perl) to the obscure (US garbage collection trucks, the lifetime of ropes for hand-pulled elevators up to Greek monasteries, sweet potato heritage of Aotearoa, chicken egg sizes, anarcho-syndicalism, ...)
Thanks to BibLibre and Paul Poulain for organizing the event, and to all the attendees for making it such a wonderful 4 days!
So, how does MARC::Batch handle importing huge XML files without using too much RAM?
By breaking the first rule of XML handling: It "parses" the XML via regex!
This is actually implemented in MARC::File::XML, namely here. If you have a strong stomach I'll wait for you to take a look at that code.
Here are some "highlights":
## get a chunk of xml for a record
local $/ = 'record>';
my $xml = <$fh>;
Set the input record separator (usually newline \n, which tells Perl what it should consider a line) to the string record> - basically something that looks like an XML tag ending in record. It does NOT include the starting < because the code wants to ignore XML namespaces.
Then it uses <$fh> to read the next "line" from the file, which isn't a line in any usual sense, but all bytes up to the next occurrence of record>.
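A tiny, self-contained demonstration of this custom-separator trick (the sample string is made up): the first read stops at the opening tag, which is exactly why the "do we have enough?" check that comes next is needed.

```perl
use strict;
use warnings;

my $marc = '<record>a</record><record>b</record>';  # made-up sample data
open my $fh, '<', \$marc or die $!;  # read from an in-memory "file"
local $/ = 'record>';                # the custom input record separator
my $first  = <$fh>;   # '<record>'   -- stops at the OPENING tag!
my $second = <$fh>;   # 'a</record>' -- the rest of the first record
print "$first $second\n";
```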
## do we have enough?
$xml .= <$fh> if $xml !~ m!</([^:]+:){0,1}record>$!;
It continues reading until it finds something that looks like a closing </record> tag (which might contain a namespace). Then comes some more "cleanup", and finally the XML chunk is returned.
Obviously this works, as it is used by thousands of libraries around the world on millions of records all the time.
But still: Uff!
Photo © Salve J. Nilsen, 2023, CC-BY-NC-4.0
This year in particular, the organizers have had difficulty reaching our fundraising targets for the Perl Toolchain Summit.
In the words of Ricardo Signes:
The Perl Toolchain Summit is one of the most important events in the year for Perl. A lot of key projects have folks get together to get things done.
Everyone who is invited to the Summit is a project leader or important contributor that is going to give their time and expertise for four days, to move the Perl toolchain forward. They give their time (sometimes having to take days off work, which is already a loss of income or holidays for them).
This is why, since 2011, we’ve done our best to at least partially refund their travel and accommodation expenses when needed. Everyone who’s attending the PTS should really only have to give four days of their life for it.
If the PTS can’t support its participants, then more and more of them are going to have to either decline our invitation, or spend their own money, in addition to their time, to continue supporting the Perl Toolchain.
This is bad for Perl and CPAN.
Perl differs from other programming languages which have large corporations funding their development: it’s entirely supported by the community and its sponsors. In other words, by you.
Let’s do a quick back-of-the-envelope calculation, assuming:
We’re expecting about 35 people coming (out of 44 invitations sent), 22 from Europe, and 13 from outside Europe.
That brings us to a total estimate of 53,100 €, almost all costs considered. That’s a lot of money.
The organizers never actually spend that amount, because many of our attendees pay for themselves, or have their expenses covered by their employer (which we list as in-kind sponsors, alongside our financial sponsors).
Our budget for 2025 is 25,000 €: that is our financial sponsoring target, as well as the amount we expect to pay directly to various suppliers. The rest is covered by in-kind sponsors or the attendees themselves.
Here are a few examples of some of the many results of past Perl Toolchain Summits:
In this section, we’ll present two important projects some of the participants intend to work on this year.
The CPAN Testers is a system that collects all test reports sent by individual testers for all modules published on CPAN, on a wide collection of systems. This infrastructure has collected millions of test reports over the years, and provides an invaluable service to the community.
It makes those reports available to the module authors so that they can figure out failures on systems they don’t have access to, and other services depend on it to provide test-related data. Perl core development also depends on it, via a system we call Blead Breaks CPAN where development versions of Perl are used to test CPAN distributions, to ensure backwards compatibility.
Every company that depends on even a single CPAN module benefits from CPAN Testers.
The service has been running in a “degraded state” (as indicated on its home page) for several months now. One of the issues is that it has had a single person maintaining it for several years.
That person, as well as several volunteers willing to help them, will be attending the summit. The goal is not to just work together for 4 days to bring things back up, but to come up with a long term solution, and increase the size of the maintainer pool.
These volunteers are in the US, Brazil and France, to name a few.
PAUSE is the Perl Authors Upload SErvice. This is where CPAN authors upload the tarballs for the distributions that end up on CPAN. That service took its first upload on August 26, 1995.
Accounts and uploads are only protected by passwords. As some people move away from Perl and CPAN, they stop using their accounts, making them targets for attackers. This is a very real supply chain attack vector. The PAUSE admins are very vigilant, but quickly reacting to issues is not a sustainable solution.
One of the topics that keeps coming up is protecting the accounts using SSH keys or Two-Factor Authentication. This is not a trivial task, as it involves dealing with very legacy code. Other avenues of improvement involve the expiration of accounts or permissions.
Over the years, in addition to fixing bugs and adding features, the maintainers attending the PTS have been able to port the server to a new web stack, made it possible to build the entire service on Docker for isolated testing, etc. The topic of 2FA came up in the past, but so far hasn’t been fully tackled yet. This will be on the agenda this year.
The PAUSE maintainers come from Austria, the US, and Japan.
Here’s our current list of confirmed sponsors for the Perl Toolchain Summit 2025. (We’re currently in discussion with other sponsors, but nothing has been confirmed yet.)
These sponsors simply wire some money to Les Mongueurs de Perl, the French non-profit that handles the organization of the event (they get an invoice in return), and expect the organizers to spend it on PTS expenses (see above).
Any money left over is used to kickstart the budget for the event the following year, as is our tradition since 2011.
We are very grateful for the companies whose employees are invited and that decide to cover their travel and accommodation expenses, and let them spend work hours on the event. This means a lot! This is why we’re promoting them as “in-kind” sponsors.
These sponsors pay for some of the PTS expenses directly (usually they own employees’ expenses). Just like our financial sponsors, the PTS wouldn’t be possible without them.
First, you can read five reasons to sponsor the Perl Toolchain Summit.
Now that you’re convinced, here’s how you can help:
On behalf of everyone who depends on Perl and CPAN, thank you in advance for your support!
Published by D Ruth Holloway on Thursday 03 April 2025 20:02
The Perl Community Roundtable met on March 21, via Zoom, and the following discussions were presented:
The Community Roundtable took a moment to remember Andrew Main (ZEFRAM), and discussed his contributions to the Perl community over many years.
Planning for the 2025 TPRC is well underway; speaker selection is almost completed, and registration is open now at Eventbrite. The conference will be held June 27-29, at the Holiday Inn Express and Suites in Greenville, South Carolina.
The Toronto Perl Mongers will be hosting a talk online by Dave Cross, and is planning for similar high-profile talks in the future. The previous such event in December with Randal Schwartz was considered a huge success, and raised a good bit of money for TPRF and raised the profile on the Toronto Perl Mongers.
Emmanuel is currently the president of the Mongueurs de Perl, which promotes Perl in French-speaking countries, particularly in France. The nonprofit organization has been briskly busy in the last year or so, including helping with organizing this year's Perl Toolchain Summit in Leipzig, Germany. They do a lot, and with very few people. The Mongueurs could use more folks helping them in their mission, particularly in the Francophone world outside France.
Dave Cross has created a simple advertising platform that MetaCPAN and others are using to publish advertisements. It is easily modified with a pull request on the JSON file of ads in Dave's repository.
Philippe is an organizer in this year's summit, and stressed the importance of including new people who are interested in projects at every level. He pointed out that he has made a point of inviting at least a few new attendees at each summit going forward, when interest in a toolchain project has been expressed. Getting folks interested in our projects--via this meeting and other venues--is a good way to grow our project teams and allow long-time members to rotate off of those projects when they desire.
The Dancer2 project has recently completed a grant to revise the documentation for Dancer2, and the team is finishing up final edits there. Fresh activity on Dancer2 in the last year or so attracted the attention of the original author, Alexis Sukrieh, who has released a new "LiteBlog" plugin for Dancer2.
Bruce has recently taken over the role of Secretary of the Foundation, after the departure of Makoto Nozaki, who had served in that role for several years.
While companies that make financial contributions, either via the conferences, the Mongueurs de Perl, or the TPRF, are absolutely appreciated, all of our organizations and major projects definitely need to make that appreciation much louder, which encourages further donations both by that company, and others.
Additionally, we discussed ways that companies could contribute to the success of projects in the Perl and Raku ecosystems by donating time for their employees, particularly to work on modules and projects that are of use to them. In some countries (including the USA), this can be considered "research and development" time, which may be leverageable in a company tax statement. Alternatively, it can be presented as an in-kind contribution to the non-profit organization in our ecosystem, and become tax-deductible in that way. Either way, companies should definitely contact the non-profits they wish to support in this way, to receive proper credit and documentation, as well as the non-profit's public gratitude for their contribution.
It has been the case for many years that groups of people are working on projects (and for companies) using Perl without knowledge of any other "islands" of Perl activity; the example of Koha was given--Ruth did not even know about the larger community until she'd been working on Koha for several years, and convinced her boss to send her to her first YAPC (Madison 2012), where many folks had not even heard of Koha! The focus of the Community Roundtable going forward is on connecting these islands of activity--this encourages participation by folks on similar projects, and a synergy of support and expertise that is available throughout the community. This Roundtable is one such way to connect the dots, globally, and will continue.
The next Perl Community Roundtable will be held on Friday, April 18, at 1730 UTC, via Zoom, and all interested people are welcome, and encouraged to attend.
Last week, I wrote a blog post about how I gave new life to an old domain by building a new website to live on that domain. With help from ChatGPT, it only took a few hours to build the site. While I’ll be adding new businesses and events to the site over time, that is currently a manual process and the site is mostly static.
This week, I wanted to take things a bit further. I wanted to build a site that was updated daily – but without any input from me.
Whois tells me I first registered cool-stuff.co.uk in September 1997. It was one of the first domains I registered. It has hosted a couple of very embarrassing early sites that I built, and for a while, it served as the email domain for several members of my family. But since they all moved to GMail, it’s been pretty much dormant. What it has never hosted is what I originally registered it for – a directory of cool things on the world wide web. So that’s what I decided to build.
So here’s the plan:
I decided to stick with Jekyll and Minimal Mistakes as I enjoyed using them to build Balham.org. They make it easy to spin up a good-looking website, but they also have ways to add complexity when required. That complexity wasn’t needed here.
The site itself was very simple. It’s basically driven from a YAML file called coolstuff.yml which lists the sites we’ve featured. From that, we build a front page which features a new site every day and an archive page which lists all the previous sites we have featured. Oh, and we also have an RSS feed of the sites we feature. This is all pretty basic stuff.
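Judging from the fields the fetch script writes (date, name, url and description), an entry in coolstuff.yml looks roughly like this (the values here are invented for illustration):

```yaml
- date: 2025-03-27
  name: Example Site
  url: https://example.com
  description: A one-paragraph description of why the site is cool.
```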
As you’d expect from one of my projects, the site is hosted on GitHub Pages and is updated automatically using GitHub Actions.
It’s in GitHub Actions where the clever (not really all that clever – just new to me) stuff happens. There’s a workflow called update-coolstuff.yml which runs at 02:00 every morning and adds a new site. And it does that by asking ChatGPT to recommend a site. Here’s the workflow:
name: Update Cool Stuff

on:
  schedule:
    - cron: '0 2 * * *'  # Runs at 2 AM UTC
  workflow_dispatch:

jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install Perl dependencies
        run: |
          sudo apt-get update && sudo apt-get install -y cpanminus
          cpanm -n --sudo OpenAPI::Client::OpenAI YAML JSON::MaybeXS

      - name: Get a cool website from OpenAI
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
        run: |
          perl .github/scripts/fetch_cool_site

      - name: Commit and push if changed
        run: |
          git config user.name "github-actions"
          git config user.email "github-actions@github.com"
          git add docs/_data/coolstuff.yml
          git diff --cached --quiet || git commit -m "Add new cool site"
          git push
There’s not much clever going on there. I needed to ensure I had an OpenAI subscription with credit in the account (this is going to cost a tiny amount of money to run – I’m making one request a day!), and I set up the API key as a secret in the repo (with the name “OPENAI_API_KEY”).
The magic all happens in the “fetch_cool_site” program. So let’s look at that next:
#!/usr/bin/env perl

use strict;
use warnings;
use builtin qw[trim];

use OpenAPI::Client::OpenAI;
use YAML qw(LoadFile DumpFile);
use Time::Piece;
use JSON::MaybeXS;

my $api_key = $ENV{"OPENAI_API_KEY"} or die "OPENAI_API_KEY is not set\n";
my $client  = OpenAPI::Client::OpenAI->new;

my $prompt = join " ",
    "Suggest a really cool, creative, or fun website to feature today on a site called 'Cool Stuff'.",
    "Just return the name, URL, and a one-paragraph description of why it's cool. Only return one site.",
    "The URL should just be the URL itself. Do not wrap it in Markdown.";

my $res = $client->createChatCompletion({
    body => {
        model    => 'gpt-4o',
        messages => [
            { role => 'system', content => 'You are a helpful curator of awesome websites.' },
            { role => 'user',   content => $prompt },
        ],
        temperature => 1.0,
    }
});

my $text = $res->res->json->{choices}[0]{message}{content};

my @lines = split /\n/, $text;
my ($name, $url, @desc) = @lines;
$name =~ s/^\*\s*//;
my $description = join ' ', @desc;

my $new_entry = {
    date        => localtime->ymd,
    name        => trim($name),
    url         => trim($url),
    description => trim($description),
};

my $file    = "docs/_data/coolstuff.yml";
my $entries = LoadFile($file);

unless (grep { $_->{url} eq $new_entry->{url} } @$entries) {
    push @$entries, $new_entry;
    DumpFile($file, $entries);
}
We’re using OpenAPI::Client::OpenAI to talk to the OpenAI API. From my limited knowledge, that seems to be the best option, currently. But I’m happy to be pointed to better suggestions.
Most of the code is copied from the examples in the module’s distribution. And the parsing of the response is probably a bit fragile. I expect I could tweak the prompt a bit to get the data back in a slightly more robust format.
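One way to make the parsing less fragile is to tweak the prompt to request JSON and decode the reply with JSON::MaybeXS, which the workflow already installs. This is a sketch, not the post's actual code: the revised prompt wording and the sample reply below are my assumptions about what the model would return.

```perl
use strict;
use warnings;
use JSON::MaybeXS qw(decode_json);

# Hypothetical prompt tweak: ask for a machine-readable reply.
my $prompt = join " ",
    "Suggest one cool website.",
    'Reply with only a JSON object with the keys "name", "url" and "description".';

# Stand-in for $res->res->json->{choices}[0]{message}{content}:
my $text = '{"name":"Example","url":"https://example.com","description":"A demo."}';

my $entry = decode_json($text);    # dies loudly if the model ignored the format
print "$entry->{name} => $entry->{url}\n";    # prints "Example => https://example.com"
```

If the model ever wraps its reply in a Markdown fence anyway, decode_json will die rather than silently store garbage, which makes failures easy to spot in the Actions log.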
But it works as it is. This morning I woke up and found a new site featured on the front page. So, rather than spend time tweaking exactly how it works, I thought it would be a good idea to get a blog post out there, so other people can see how easy it is to use ChatGPT in this way.
What do you think? Can you see ways that you’d like to include ChatGPT responses in some of your code?
The website is live at cool-stuff.co.uk.
The post Finding cool stuff with ChatGPT first appeared on Perl Hacks.
Published on Thursday 27 March 2025 18:14
The examples used here are from the weekly challenge problem statement and demonstrate the working solution.
You are given three strings. You are allowed to remove the rightmost character of a string to make them all equal. Write a script to return the number of operations needed to make them equal, otherwise -1.
The fact that we’re given exactly three strings makes things slightly easier. The approach we’ll take is to pop off the last letter of each and compare the remainders. If they are equal then we are done. Otherwise we’ll continue popping off letters until we’re done.
A special case to consider is when the strings are of unequal length. In that case we make sure to only pop off letters from equal length strings, although the untouched strings will still be used when checking to see if we are done.
Everything can be easily contained in one subroutine. I know that the do blocks with postfix if are not common, but to me they are the most aesthetic way to conditionally perform two short statements.
sub loop_pop_compare{
    my($s, $t, $u) = @_;
    my @s = split //, $s;
    my @t = split //, $t;
    my @u = split //, $u;
    my $counter = 0;
    {
        my $max_size = (sort {$b <=> $a} (0 + @s, 0 + @t, 0 + @u))[0];
        unless(join(q//, @s) eq join(q//, @t) &&
               join(q//, @t) eq join(q//, @u)){
            do{$counter++; pop @s} if @s == $max_size;
            do{$counter++; pop @t} if @t == $max_size;
            do{$counter++; pop @u} if @u == $max_size;
        }
        else{
            return $counter;
        }
        redo unless @s == 0 || @t == 0 || @u == 0;
    }
    return -1;
}
◇
Fragment referenced in 2.
Putting it all together...
The rest of the code just runs some simple tests.
MAIN:{
say loop_pop_compare q/abc/, q/abb/, q/ab/;
say loop_pop_compare q/ayz/, q/cyz/, q/xyz/;
say loop_pop_compare q/yza/, q/yzb/, q/yzc/;
}
◇
Fragment referenced in 2.
$ perl perl/ch-1.pl
2
-1
3
You are given a list of strings of the same length. Write a script to make each column sorted lexicographically by deleting any non-sorted columns. Return the total number of columns deleted.
Unlike the first part, the strings here are guaranteed to be all of the same length and we do not know how many we will need to consider.
my $remaining = [grep {$string->[$_] if $_ != $i} 0 .. @{$string} - 1];
◇
Fragment never referenced.
Defines: $remaining Never used.
Uses: $i 8.
We’ll put everything together in a single subroutine.
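As a sketch of the idea: for each column, compare the column's characters against their sorted order and count the columns that don't match. This is my illustrative version of such a subroutine, not necessarily the post's exact code.

```perl
use strict;
use warnings;
use feature 'say';

# Illustrative implementation: a column that is not already sorted
# lexicographically is one that would have to be deleted.
sub sort_columns {
    my @strings = @_;
    my $deleted = 0;
    for my $col (0 .. length($strings[0]) - 1) {
        my @column = map { substr $_, $col, 1 } @strings;
        $deleted++ if join(q//, @column) ne join q//, sort @column;
    }
    return $deleted;
}

say sort_columns qw/swpc tyad azbe/;    # 2
```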
The rest of the code drives some tests.
MAIN:{
say sort_columns qw/swpc tyad azbe/;
say sort_columns qw/cba daf ghi/;
say sort_columns qw/a b c/;
}
◇
Fragment referenced in 9.
$ perl perl/ch-2.pl
2
1
0
Published on Wednesday 26 March 2025 15:00
We are currently setting up a staging server (called beta) for a project consisting of two sub-projects and a bunch of services in each project. Each service uses a distinct role (Postgres-speak for user), so I needed to set up a lot of roles, where each role has a distinct username and password. We use gopass to manage our passwords (and related info like usernames). When deploying (via Ansible) we can extract the passwords from gopass and inject them into the environment of the containers running the services. Instead of manually creating a password, storing it in gopass and creating a role in our Postgres cluster, I wrote a short script to automate this:
#!/usr/bin/perl
use v5.36;

open(my $fh, ">", "create_roles.sql") || die $!;

my @stages   = qw(beta production);
my @services = qw(accounts-admin accounts-mailer accounts-sso winxle-admin winxle-ga-logger winxle-merchant winxle-newsletter winxle-quickdatacollectionform winxle-scripts winxle-user winxle-winner);

for my $stage (@stages) {
    for my $service (@services) {
        $service =~ /^(\w+)-/;
        my $project    = $1;
        my $gopasspath = join('/', 'validad', $project, $stage, $service, 'database');

        my $snake_service = $service;
        $snake_service =~ s/-/_/g;

        my $rv    = `gopass generate -f -p $gopasspath 32`;
        my @lines = split(/\n/, $rv);
        my $pw    = $lines[3];

        my $role        = sprintf('connect_%s_%s', $stage, $snake_service);
        my $create_role = sprintf("CREATE ROLE %s IN ROLE %s LOGIN PASSWORD '%s';", $role, $project, $pw);
        say $fh $create_role;

        my $add_username_cmd = qq{echo "\nuser: $role" | gopass insert -a $gopasspath};
        system($add_username_cmd);
    }
}

close $fh;
I open a file to store the SQL statements (create_roles.sql) and define my stages and my services and walk through all of them in a double for loop.
I extract the $project name from the $service name (i.e. accounts or winxle) and use that to create the path to the secret in gopass. This could be for example validad/winxle/production/winxle-merchant/database or validad/accounts/beta/accounts-sso/database.
I convert the service name from kebab-case (winxle-user) to snake-case (winxle_user, notice the underscore).
Then I call gopass generate via backticks to create a new password with a length of 32 characters. gopass will store that new password in the store and return the newly created password in the output. Using backticks (`...`) allows me to easily capture the return value into a string, which I split into an array. I visually "parsed" the output of gopass generate and learned that the fourth line (or $lines[3]) contains the password (if -p is set, which I did), so I copy that value into $pw.
Next I generate the $role name, eg connect_beta_winxle_admin.
Now I generate a CREATE ROLE statement for Postgres, containing the newly created random password, and output this to the file create_roles.sql.
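For example, using the sample secret shown at the end of this post, the generated statement would look like this:

```sql
CREATE ROLE connect_beta_winxle_winner IN ROLE winxle LOGIN PASSWORD 'iSh3Iepush7ohth7ieh2ed8aecoCh2oX';
```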
Finally I use gopass insert to append the username to the gopass secret file.
After running the script, I have a file create_roles.sql which I "deployed" via copy/paste into psql running on the cluster.
And I have a bunch of new gopass secrets like this:
gopass show validad/winxle/beta/winxle-winner/database
Secret: validad/winxle/beta/winxle-winner/database
iSh3Iepush7ohth7ieh2ed8aecoCh2oX
user: connect_beta_winxle_winner
Writing a script was definitely more fun than doing all this by hand. Not sure if it was faster, though. But with a tiny bit more work I can improve this script to update existing roles, so we can rotate our DB credentials more often and more easily. And I got a blog post out of it!
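For the rotation case, the same loop could emit ALTER ROLE instead of CREATE ROLE when the role already exists. A sketch of what the generated statement would look like (the password shown is a placeholder, not a real generated one):

```sql
-- Rotate the password of an existing role instead of creating it anew
ALTER ROLE connect_beta_winxle_winner WITH PASSWORD 'newly-generated-password';
```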