Published by craigberry on Tuesday 02 September 2025 00:59
Tie-File/t/09_gen_rs.t: close before reopening Opening for write a file you've already got open for write is generally a bad idea and on VMS is a hard error, causing this test to fail. So clean up in between tests.
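For illustration only (this is not the actual test code, just the general pattern the fix applies; the file name is hypothetical):
use strict;
use warnings;

my $file = 'example.tmp';                       # hypothetical scratch file

open my $fh, '>', $file or die "open: $!";
print {$fh} "first pass\n";
close $fh or die "close: $!";                   # close before reopening for write

open $fh, '>', $file or die "open: $!";         # reopening is now safe, also on VMS
print {$fh} "second pass\n";
close $fh or die "close: $!";

unlink $file;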
Published by richardleach on Monday 01 September 2025 23:13
Perl_newSVsv_flags_NN_PVxx: do not copy the SVprv_WEAKREF flag When copying source SV flags to the new destination SV, this function failed to account for SVprv_WEAKREF and SVf_IVisUV flags having the same numerical value - 0x80000000. The SVprv_WEAKREF flag was consequently erroneously propagated when copying weakened references. This didn't trip existing tests because SVt_IVs (the predominant SV type for RVs) are copied using different code paths. This commit: * Always drops the SVprv_WEAKREF flag in the affected code path * Adds additional tests for copying weakened SVs
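For illustration (not part of the commit), a tiny script showing the behaviour the new tests are about: a copy of a weakened reference should come out as a strong reference.
use strict;
use warnings;
use Scalar::Util qw(weaken isweak);

my %data = (answer => 42);
my $ref  = \%data;
weaken($ref);                                   # $ref is now a weak reference to %data

my $copy = $ref;                                # the copy must not inherit the weak flag
print isweak($ref)  ? "original: weak\n"  : "original: strong\n";   # prints: original: weak
print isweak($copy) ? "copy: weak\n"      : "copy: strong\n";       # prints: copy: strong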
Published by mauke on Monday 01 September 2025 23:13
add another email alias for Richard Leach
Published by karenetheridge on Monday 01 September 2025 20:57
just include the regen instructions directly
Published by karenetheridge on Monday 01 September 2025 20:50
cpan/File-Temp - Update to version 0.2312 0.2312 2025-09-01 18:56:18Z - fix filename check for VMS (Craig Berry, GH#44)
Published by Perl Steering Council on Monday 01 September 2025 17:43
The transition meeting to the new PSC proved a bit tricky to schedule to get everyone from both the old and new PSC in attendance, but eventually we succeeded: Aristotle, Graham, Leon, Paul, and Philippe all participated.
Published by Gabor Szabo on Monday 01 September 2025 06:01
Originally published at Perl Weekly 736
Hi there,
The association of the weekly newsletter with NICEPERL goes back much further than mine.
Right from its tagline, "where we turn Perl inside out", the blog signals its mission: to dive deep into the Perl language and its ecosystem with a hands-on, exploratory spirit.
It covers notable updates and insightful commentary on CPAN modules and Perl tools.
There's a balance of careful maintenance, practical technical insights and well-curated community news. By consistently showcasing noteworthy CPAN modules and pairing that with a personal, open-source-driven perspective, it provides steady, meaningful value, quietly but effectively.
In a world awash with generic tech content, efforts like NICEPERL stand out for their depth and specificity. For Perl professionals, whether you're tracking tools, following community trends or just curious about the latest CPAN releases, NICEPERL remains a reliable and worthwhile resource.
Do you remember the "What is new on CPAN?" series on perl.com? Unfortunately, the last post in the series was in January 2025. Hopefully it makes a comeback soon; I would love to see it return as a regular feature.
Enjoy the rest of the newsletter.
--
Your editor: Mohammad Sajid Anwar.
The article beautifully captures what many in open-source communities know but sometimes forget: that our everyday tools represent labor and generosity. Bryan's earnest confession and commitment to reciprocate stands out as both humble and inspiring, a timely reminder that supporting the communities behind our tools is not just benevolent, but essential.
The changes appear well-considered and address several common developer pain points while positioning the platform for more sophisticated use cases.
The Winter 2025 Perl Community Conference is a hybrid in-person and online event.
The post is a practical, hands-on tutorial demonstrating how to write tests for a web application built with the Mojolicious framework in Perl. It's aimed at developers who are familiar with the basics of Mojolicious and want to implement a test-driven development (TDD) workflow or simply add tests to their application.
This post is an advanced, in-depth tutorial that tackles a sophisticated Perl software design pattern: the Modulino. It explains how to transform a standalone script into a module-like entity that is both runnable directly and testable by external programs.
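For readers who have not met the pattern before, here is a minimal modulino sketch (not taken from the post itself): the file can be run directly as a script, or loaded by a test without running main().
package My::Script;
use strict;
use warnings;

sub main {
    my (@args) = @_;
    print "doing the work with: @args\n";
    return 0;
}

# Run only when executed directly, not when loaded via require/use from a test.
exit main(@ARGV) unless caller();

1;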
The Weekly Challenge by Mohammad Sajid Anwar will help you step out of your comfort-zone. You can even win prize money of $50 by participating in the weekly challenge. We pick one champion at the end of the month from among all of the contributors during the month, thanks to the sponsor Lance Wicks.
Welcome to a new week with a couple of fun tasks "Smaller Than Current" and "Odd Matrix". If you are new to the weekly challenge then why not join us and have fun every week. For more information, please read the FAQ.
Enjoy a quick recap of last week's contributions by Team PWC dealing with the "Equal Group" and "Final Score" tasks in Perl and Raku. You will find plenty of solutions to keep you busy.
Excellent solutions. They are concise, efficient and demonstrate a strong command of Perl idioms and core modules. The logic is clear and directly addresses the problem requirements.
This is an excellent, intermediate-to-advanced level post that effectively showcases Raku's unique approach to OOP. It is well-structured, using clear, practical examples to demonstrate the "what," "why" and "how" of these features.
The post delivers a clean and approachable presentation of both challenges, with clear problem statements, illustrative examples and step-by-step solutions. Each task is broken down intuitively, making it accessible to both new and seasoned Perl programmers.
This post is a fantastic and highly practical walkthrough of solving the task. It excels not just in providing solutions but in demonstrating a thought process and the iterative refinement of code.
Both solutions are clear, concise, and idiomatic, demonstrating solid problem-solving skills and effective use of Perl features. They balance simplicity with robustness and are well supported by examples.
Both solutions are clear, idiomatic and well-structured, with code that directly reflects the problem description.
This is a high-quality technical blog post that demonstrates excellent programming skills across multiple languages. It shows deep understanding of each language's unique features and idioms while maintaining consistent logic across implementations. The explanations are clear and the code is well-structured and readable.
Solutions are more mathematically elegant, while they are more explicit about verifying the actual grouping conditions. Both approaches have merit depending on the specific requirements!
Solutions are generally preferable for their clarity and efficiency, while Robbie's demonstrate alternative approaches with more defensive programming. The mathematical insight that GCD > 1 ⇒ all frequencies divisible by smallest frequency > 1 is particularly interesting!
The approaches are clean, readable and effective, though you could acknowledge the efficiency of a GCD-based shortcut for Task 1 or bolster Task 2 with safeguards (like guarding against empty-stack operations) to strengthen robustness.
The post delivers a clear, idiomatic Python and Perl solution that elegantly operationalises the stack-based scoring logic, complete with thorough error handling for invalid inputs. The structured walkthrough, from leveraging GitHub Copilot to reinforcing robustness with exceptions, results in both readable and resilient code.
The title, "Computationally Irreducible", is a playful and clever reference to a core theme of the week: complex problems that can't be easily simplified or predicted without actually running the code.
Great CPAN modules released last week;
MetaCPAN weekly report.
A couple of entries sneaked in by Gabor.
September 9, 2025
September 10, 2025
September 25, 2025
December 6, 2025
You joined the Perl Weekly to get weekly e-mails about the Perl programming language and related topics.
Want to see more? See the archives of all the issues.
Not yet subscribed to the newsletter? Join us free of charge!
(C) Copyright Gabor Szabo
The articles are copyright the respective authors.
Published by /u/tiny_humble_guy on Monday 01 September 2025 00:35
Hello, I successfully built perl 5.40.2 using perl-cross 1.6. My configure step is:
./configure \
    --all-static \
    --prefix=/tools \
    -Dusethreads \
    -Dldflags="-static -zmuldefs" \
    -Dprivlib=/tools/lib/perl5 \
    -Dsitelib=/tools/lib/perl5/site_perl
But when I use the perl for building texinfo 7.2 I get this error :
$ cd texinfo-7.2
$ PERL=/tools/bin/perl ./configure
checking Perl version and modules... no
configure: error: perl >= 5.8.1 with Encode, Data::Dumper and Unicode::Normalize required by Texinfo.
I assume the perl can't use the modules (Encode, Data::Dumper and Unicode::Normalize).
Strangely enough, when I use perl from the perl build directory, it works fine. Any clue how to fix it?
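For reference, one quick check (not from the original post) is to ask the installed perl what it can see, using the paths from the configure step above:
/tools/bin/perl -e 'print join("\n", @INC), "\n"'
/tools/bin/perl -MEncode -MData::Dumper -MUnicode::Normalize -e 'print "all three modules load\n"'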
Published by Simon Green on Sunday 31 August 2025 11:44
Each week Mohammad S. Anwar sends out The Weekly Challenge, a chance for all of us to come up with solutions to two weekly tasks. My solutions are written in Python first, and then converted to Perl. It's a great way for us all to practice some coding.
You are given an array of integers.
Write a script to return true if the given array can be divided into one or more groups: each group must be of the same size as the others, with at least two members, and with all members having the same value.
Like usual with these challenges, I read the tasks on Monday, but actually write them on the weekend. Originally, I thought this was going to be straightforward: count the frequency of each integer, and make sure all frequencies are divisible by the lowest one. And that does work for the examples given.
It then occurred to me during the week that this wouldn't work for four of one integer and six of a different one. Back to the drawing board.
For this task, I start by calculating the frequency of each integer. As we only care about the frequency and not the actual integer, I call the values() method, which returns a list (array in Perl) of frequencies.
from collections import Counter

def equal_group(ints: list) -> bool:
    freq = Counter(ints).values()
GitHub Copilot taught me a nice trick I didn't know about for the Perl solution. You can use an inline foreach call to achieve this in fewer lines. Thanks, Copilot.
my %freq = ();
$freq{$_}++ foreach @ints;
my @values = values %freq;
If any value only occurs once, we can return false immediately, as no solution is possible.
    if min(freq) == 1:
        return False
I then use an iterator called i which runs from 2 to the maximum frequency. If all values in freq are evenly divisible by i, I return true. If the iterator is exhausted, I return false.
    for i in range(2, max(freq) + 1):
        if all(f % i == 0 for f in freq):
            return True
    return False
The Perl code follows the same logic.
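The actual Perl version is not shown in this excerpt; a minimal sketch of the same approach (using the %freq trick above plus List::Util) might look like this:
use strict;
use warnings;
use List::Util qw(min max all);

sub equal_group {
    my @ints = @_;

    my %freq;
    $freq{$_}++ foreach @ints;
    my @values = values %freq;

    # a frequency of 1 can never form a group of at least two
    return 0 if min(@values) == 1;

    for my $i (2 .. max(@values)) {
        return 1 if all { $_ % $i == 0 } @values;
    }
    return 0;
}

print equal_group(1, 1, 2, 2, 2, 2) ? "True\n" : "False\n";   # True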
$ ./ch-1.py 1 1 2 2 2 2
True
$ ./ch-1.py 1 1 1 2 2 2 3 3
False
$ ./ch-1.py 5 5 5 5 5 5 7 7 7 7 7 7
True
$ ./ch-1.py 1 2 3 4
False
$ ./ch-1.py 8 8 9 9 10 10 11 11
True
You are given an array of scores by a team.
Write a script to find the total score of the given team. The score can be any integer, +, C or D. The + adds the sum of the previous two scores. The score C invalidates the previous score. The score D will double the previous score.
It's challenges like this one where GitHub Copilot really shows its powers. Like usual, I start by writing my test.py file to validate the code against the provided examples.
By the time I was ready to write the code, Copilot had basically written it for me. I did change the syntax slightly (mostly to raise errors when unexpected input was provided), but the Copilot-generated code would have done the right thing for any valid input.
As Copilot explains:
The function final_score processes a list of string commands representing scores and operations, and computes the final total score. Here's how it works: it uses a stack, score_stack, to keep track of valid scores. On C, it removes (undoes) the last score. On D, it doubles the last score and adds it as a new score. On +, it adds a new score equal to the sum of the previous two scores.
The Python code is
import re

def final_score(scores: list[str]) -> int:
    score_stack = []
    for score in scores:
        if score == "C":
            if not score_stack:
                raise ValueError("No scores to remove for 'C' operation")
            score_stack.pop()
        elif score == "D":
            if not score_stack:
                raise ValueError("No scores to double for 'D' operation")
            score_stack.append(2 * score_stack[-1])
        elif score == "+":
            if len(score_stack) < 2:
                raise ValueError("Not enough scores to sum for '+' operation")
            score_stack.append(score_stack[-1] + score_stack[-2])
        elif re.match(r"^-?\d+$", score):
            score_stack.append(int(score))
        else:
            raise ValueError(f"Invalid score entry: {score}")
    return sum(score_stack)
The Perl code follows the same logic.
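Again, the Perl version itself is not shown here; a sketch following the same stack logic (assumptions: plain die for the error cases, List::Util::sum0 for the total) could be:
use strict;
use warnings;
use List::Util qw(sum0);

sub final_score {
    my @scores = @_;
    my @stack;

    for my $score (@scores) {
        if ($score eq 'C') {
            die "No scores to remove for 'C' operation\n" unless @stack;
            pop @stack;
        }
        elsif ($score eq 'D') {
            die "No scores to double for 'D' operation\n" unless @stack;
            push @stack, 2 * $stack[-1];
        }
        elsif ($score eq '+') {
            die "Not enough scores to sum for '+' operation\n" if @stack < 2;
            push @stack, $stack[-1] + $stack[-2];
        }
        elsif ($score =~ /^-?\d+$/) {
            push @stack, $score;
        }
        else {
            die "Invalid score entry: $score\n";
        }
    }
    return sum0(@stack);
}

print final_score(qw(5 2 C D +)), "\n";   # 30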
$ ./ch-2.py 5 2 C D +
30
$ ./ch-2.py 5 -2 4 C D 9 + +
27
$ ./ch-2.py 7 D D C + 3
45
$ ./ch-2.py -5 -10 + D C +
-55
$ ./ch-2.py 3 6 + D C 8 + D -2 C +
128
Published on Sunday 31 August 2025 00:00
Published by /u/niceperl on Saturday 30 August 2025 17:40
Published by prz on Saturday 30 August 2025 19:40
Published by prz on Saturday 30 August 2025 19:37
This is the weekly favourites list of CPAN distributions. Votes count: 56
Week's winners (+2): Time::Piece & Pod::Weaver::Section::SourceGitHub
Build date: 2025/08/30 17:36:06 GMT
Clicked for first time:
Increasing its reputation:
Published by skeetastax on Saturday 30 August 2025 01:40
Trying to install Image::Magick but it keeps failing. This seems to be an ongoing issue for people. (I looked into creating an issue against ImageMagick in its GitHub repo, but I don't think this is really an issue with ImageMagick, but rather with the Perl module that uses it.)
Here is the result: (not sure why the cpan shell warning is repeated)
cpan> install Image::Magick
Starting with version 2.29 of the cpan shell, a new download mechanism
is the default which exclusively uses cpan.org as the host to download
from. The configuration variable pushy_https can be used to (de)select
the new mechanism. Please read more about it and make your choice
between the old and the new mechanism by running
o conf init pushy_https
Once you have done that and stored the config variable this dialog
will disappear.
(the cpan shell warning above was repeated several more times)
Database was generated on Tue, 26 Aug 2025 22:21:32 GMT
Running install for module 'Image::Magick'
Checksum for C:\STRAWB~1\cpan\sources\authors\id\J\JC\JCRISTY\Image-Magick-7.1.1-28.tar.gz ok
Scanning cache C:\STRAWB~1\cpan\build for sizes
.........-------------------------------------------------------------------DONE
DEL(1/30): C:\STRAWB~1\cpan\build\Image-Size-3.300-0
DEL(2/30): C:\STRAWB~1\cpan\build\Image-Size-3.300-0.yml
DEL(3/30): C:\STRAWB~1\cpan\build\Image-Size-3.300-1.yml
DEL(4/30): C:\STRAWB~1\cpan\build\Image-Size-3.300-1
DEL(5/30): C:\STRAWB~1\cpan\build\Image-Size-3.300-2.yml
DEL(6/30): C:\STRAWB~1\cpan\build\Image-Size-3.300-2
DEL(7/30): C:\STRAWB~1\cpan\build\Image-Size-3.300-3
DEL(8/30): C:\STRAWB~1\cpan\build\Image-Size-3.300-3.yml
DEL(9/30): C:\STRAWB~1\cpan\build\Log-Log4perl-1.57-0
DEL(10/30): C:\STRAWB~1\cpan\build\Log-Log4perl-1.57-0.yml
DEL(11/30): C:\STRAWB~1\cpan\build\libxml-perl-0.08-0
DEL(12/30): C:\STRAWB~1\cpan\build\libxml-perl-0.08-0.yml
DEL(13/30): C:\STRAWB~1\cpan\build\XML-RegExp-0.04-0
DEL(14/30): C:\STRAWB~1\cpan\build\XML-RegExp-0.04-0.yml
DEL(15/30): C:\STRAWB~1\cpan\build\XML-DOM-1.46-0
DEL(16/30): C:\STRAWB~1\cpan\build\XML-DOM-1.46-0.yml
DEL(17/30): C:\STRAWB~1\cpan\build\Test-Inter-1.12-0
DEL(18/30): C:\STRAWB~1\cpan\build\Test-Inter-1.12-0.yml
DEL(19/30): C:\STRAWB~1\cpan\build\Date-Manip-6.98-0
DEL(20/30): C:\STRAWB~1\cpan\build\Date-Manip-6.98-0.yml
DEL(21/30): C:\STRAWB~1\cpan\build\Log-Dispatch-2.71-0
DEL(22/30): C:\STRAWB~1\cpan\build\Log-Dispatch-2.71-0.yml
DEL(23/30): C:\STRAWB~1\cpan\build\Log-Dispatch-FileRotate-1.38-0
DEL(24/30): C:\STRAWB~1\cpan\build\Log-Dispatch-FileRotate-1.38-0.yml
DEL(25/30): C:\STRAWB~1\cpan\build\Image-ExifTool-13.30-0
DEL(26/30): C:\STRAWB~1\cpan\build\Image-ExifTool-13.30-0.yml
DEL(27/30): C:\STRAWB~1\cpan\build\POSIX-strptime-0.13-0.yml
DEL(28/30): C:\STRAWB~1\cpan\build\POSIX-strptime-0.13-0
DEL(29/30): C:\STRAWB~1\cpan\build\Alien-cmake3-0.10-0
DEL(30/30): C:\STRAWB~1\cpan\build\Alien-cmake3-0.10-0.yml
Configuring J/JC/JCRISTY/Image-Magick-7.1.1-28.tar.gz with Makefile.PL
Gonna create 'libMagickCore.a' from 'C:\Program Files\ImageMagick-7.1.2-Q16-HDRI\CORE_RL_MagickCore_.dll'
Checking if your kit is complete...
Looks good
Warning (mostly harmless): No library found for
Generating a gmake-style Makefile
Writing Makefile for Image::Magick
Writing MYMETA.yml and MYMETA.json
JCRISTY/Image-Magick-7.1.1-28.tar.gz
C:\Strawberry\perl\bin\perl.exe Makefile.PL -- OK
Running make for J/JC/JCRISTY/Image-Magick-7.1.1-28.tar.gz
cp Magick.pm blib\lib\Image\Magick.pm
AutoSplitting blib\lib\Image\Magick.pm (blib\lib\auto\Image\Magick)
Running Mkbootstrap for Magick ()
"C:\Strawberry\perl\bin\perl.exe" -MExtUtils::Command -e chmod -- 644 "Magick.bs"
"C:\Strawberry\perl\bin\perl.exe" -MExtUtils::Command::MM -e cp_nonempty -- Magick.bs blib\arch\auto\Image\Magick\Magick.bs 644
"C:\Strawberry\perl\bin\perl.exe" "C:\Strawberry\perl\lib\ExtUtils/xsubpp" -typemap C:\STRAWB~1\perl\lib\ExtUtils\typemap -typemap C:\STRAWB~1\cpan\build\Image-Magick-7.1.1-1\typemap Magick.xs > Magick.xsc
"C:\Strawberry\perl\bin\perl.exe" -MExtUtils::Command -e mv -- Magick.xsc Magick.c
gcc -c -I"C:\Program Files\ImageMagick-7.1.2-Q16-HDRI\include" -std=c99 -DWIN32 -DWIN64 -DPERL_TEXTMODE_SCRIPTS -DMULTIPLICITY -DPERL_IMPLICIT_SYS -DUSE_PERLIO -D__USE_MINGW_ANSI_STDIO -fwrapv -fno-strict-aliasing -mms-bitfields -O2 -DVERSION=\"7.1.1\" -DXS_VERSION=\"7.1.1\" "-IC:\STRAWB~1\perl\lib\CORE" -D_LARGE_FILES=1 -DHAVE_CONFIG_H Magick.c
In file included from C:\Program Files\ImageMagick-7.1.2-Q16-HDRI\include/MagickCore/magick-config.h:25,
from C:\Program Files\ImageMagick-7.1.2-Q16-HDRI\include/MagickCore/MagickCore.h:29,
from Magick.xs:56:
C:\Program Files\ImageMagick-7.1.2-Q16-HDRI\include/MagickCore/magick-baseconfig.h:263:6: error: #error ImageMagick was build with a 64 channel bit mask and that requires a C++ compiler
263 | # error ImageMagick was build with a 64 channel bit mask and that requires a C++ compiler
| ^~~~~
gmake: *** [makefile:353: Magick.o] Error 1
JCRISTY/Image-Magick-7.1.1-28.tar.gz
C:\STRAWB~1\c\bin\gmake.exe -- NOT OK
Stopping: 'install' failed for 'Image::Magick'.
Failed during this command:
JCRISTY/Image-Magick-7.1.1-28.tar.gz : make NO
cpan>
Strawberry Perl version:
>perl -V
Summary of my perl5 (revision 5 version 40 subversion 2) configuration:
Platform:
osname=MSWin32
osvers=10.0.22631.5189
archname=MSWin32-x64-multi-thread
uname='Win32 strawberry-perl 5.40.2.1 # 05:42:50 Sun May 11 2025 x64'
config_args='undef'
hint=recommended
useposix=true
d_sigaction=undef
useithreads=define
usemultiplicity=define
use64bitint=define
use64bitall=undef
uselongdouble=undef
usemymalloc=n
default_inc_excludes_dot=define
Compiler:
cc='gcc'
ccflags ='-std=c99 -DWIN32 -DWIN64 -DPERL_TEXTMODE_SCRIPTS -DMULTIPLICITY -DPERL_IMPLICIT_SYS -DUSE_PERLIO -D__USE_MINGW_ANSI_STDIO -fwrapv -fno-strict-aliasing -mms-bitfields'
optimize='-O2'
cppflags='-DWIN32'
ccversion=''
gccversion='13.2.0'
gccosandvers=''
intsize=4
longsize=4
ptrsize=8
doublesize=8
byteorder=12345678
doublekind=3
d_longlong=define
longlongsize=8
d_longdbl=define
longdblsize=16
longdblkind=3
ivtype='long long'
ivsize=8
nvtype='double'
nvsize=8
Off_t='long long'
lseeksize=8
alignbytes=8
prototype=define
Linker and Libraries:
ld='g++'
ldflags ='-s -L"C:\STRAWB~1\perl\lib\CORE" -L"C:\STRAWB~1\c\lib" -L"C:\STRAWB~1\c\x86_64-w64-mingw32\lib" -L"C:\STRAWB~1\c\lib\gcc\x86_64-w64-mingw32\13.2.0"'
libpth=C:\STRAWB~1\c\lib C:\STRAWB~1\c\x86_64-w64-mingw32\lib C:\STRAWB~1\c\lib\gcc\x86_64-w64-mingw32\13.2.0 C:\STRAWB~1\c\x86_64-w64-mingw32\lib C:\STRAWB~1\c\lib\gcc\x86_64-w64-mingw32\13.2.0
libs= -lmoldname -lkernel32 -luser32 -lgdi32 -lwinspool -lcomdlg32 -ladvapi32 -lshell32 -lole32 -loleaut32 -lnetapi32 -luuid -lws2_32 -lmpr -lwinmm -lversion -lodbc32 -lodbccp32 -lcomctl32
perllibs= -lmoldname -lkernel32 -luser32 -lgdi32 -lwinspool -lcomdlg32 -ladvapi32 -lshell32 -lole32 -loleaut32 -lnetapi32 -luuid -lws2_32 -lmpr -lwinmm -lversion -lodbc32 -lodbccp32 -lcomctl32
libc=-lucrt
so=dll
useshrplib=true
libperl=libperl540.a
gnulibc_version=''
Dynamic Linking:
dlsrc=dl_win32.xs
dlext=xs.dll
d_dlsymun=undef
ccdlflags=' '
cccdlflags=' '
lddlflags='-shared -s -L"C:\STRAWB~1\perl\lib\CORE" -L"C:\STRAWB~1\c\lib" -L"C:\STRAWB~1\c\x86_64-w64-mingw32\lib" -L"C:\STRAWB~1\c\lib\gcc\x86_64-w64-mingw32\13.2.0"'
Characteristics of this binary (from libperl):
Compile-time options:
HAS_LONG_DOUBLE
HAS_TIMES
HAVE_INTERP_INTERN
MULTIPLICITY
PERLIO_LAYERS
PERL_COPY_ON_WRITE
PERL_DONT_CREATE_GVSV
PERL_HASH_FUNC_SIPHASH13
PERL_HASH_USE_SBOX32
PERL_IMPLICIT_SYS
PERL_MALLOC_WRAP
PERL_OP_PARENT
PERL_PRESERVE_IVUV
PERL_USE_SAFE_PUTENV
USE_64_BIT_INT
USE_ITHREADS
USE_LARGE_FILES
USE_LOCALE
USE_LOCALE_COLLATE
USE_LOCALE_CTYPE
USE_LOCALE_NUMERIC
USE_LOCALE_TIME
USE_PERLIO
USE_PERL_ATOF
USE_THREAD_SAFE_LOCALE
Built under MSWin32
Compiled at May 11 2025 15:48:20
@INC:
C:/Strawberry/perl/site/lib
C:/Strawberry/perl/vendor/lib
C:/Strawberry/perl/lib
Image Magick version installed:
>magick -version
Version: ImageMagick 7.1.2-2 Q16-HDRI x64 8289a33:20250824 https://imagemagick.org
Copyright: (C) 1999 ImageMagick Studio LLC
License: https://imagemagick.org/script/license.php
Features: Channel-masks(64-bit) Cipher DPC HDRI Modules OpenCL OpenMP(2.0)
Delegates (built-in): bzlib cairo freetype gslib heic jng jp2 jpeg jxl lcms lqr lzma openexr pangocairo png ps raqm raw rsvg tiff webp xml zip zlib
Compiler: Visual Studio 2022 (194435214)
gcc compiler version:
>gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=C:/Strawberry/c/bin/../libexec/gcc/x86_64-w64-mingw32/13.2.0/lto-wrapper.exe
OFFLOAD_TARGET_NAMES=nvptx-none
Target: x86_64-w64-mingw32
Configured with: ../configure --prefix=/R/winlibs64ucrt_stage/inst_gcc-13.2.0/share/gcc --build=x86_64-w64-mingw32 --host=x86_64-w64-mingw32 --enable-offload-targets=nvptx-none --with-pkgversion='MinGW-W64 x86_64-ucrt-posix-seh, built by Brecht Sanders, r8' --with-tune=generic --enable-checking=release --enable-threads=posix --disable-sjlj-exceptions --disable-libunwind-exceptions --disable-serial-configure --disable-bootstrap --enable-host-shared --enable-plugin --disable-default-ssp --disable-rpath --disable-libstdcxx-debug --disable-version-specific-runtime-libs --with-stabs --disable-symvers --enable-languages=c,c++,fortran,lto,objc,obj-c++ --disable-gold --disable-nls --disable-stage1-checking --disable-win32-registry --disable-multilib --enable-ld --enable-libquadmath --enable-libada --enable-libssp --enable-libstdcxx --enable-lto --enable-fully-dynamic-string --enable-libgomp --enable-graphite --enable-mingw-wildcard --enable-libstdcxx-time --enable-libstdcxx-pch --with-mpc=/d/Prog/winlibs64ucrt_stage/custombuilt --with-mpfr=/d/Prog/winlibs64ucrt_stage/custombuilt --with-gmp=/d/Prog/winlibs64ucrt_stage/custombuilt --with-isl=/d/Prog/winlibs64ucrt_stage/custombuilt --disable-libstdcxx-backtrace --enable-install-libiberty --enable-__cxa_atexit --without-included-gettext --with-diagnostics-color=auto --enable-clocale=generic --with-libiconv --with-system-zlib --with-build-sysroot=/R/winlibs64ucrt_stage/gcc-13.2.0/build_mingw/mingw-w64 CFLAGS='-I/d/Prog/winlibs64ucrt_stage/custombuilt/include/libdl-win32 -march=nocona -msahf -mtune=generic -O2' CXXFLAGS='-Wno-int-conversion -march=nocona -msahf -mtune=generic -O2' LDFLAGS='-pthread -Wl,--no-insert-timestamp -Wl,--dynamicbase -Wl,--high-entropy-va -Wl,--nxcompat -Wl,--tsaware' LD=/d/Prog/winlibs64ucrt_stage/custombuilt/share/binutils/bin/ld.exe
Thread model: posix
Supported LTO compression algorithms: zlib zstd
gcc version 13.2.0 (MinGW-W64 x86_64-ucrt-posix-seh, built by Brecht Sanders, r8)
I have uninstalled both Strawberry perl and ImageMagick and installed the latest versions of each.
I installed the latest Visual C Runtime version and rebooted:
v14.44.35211.0
cpan continues to fail.
It seems like it doesn't know where to look for the compiler.
In other places I see a lot of people have trouble installing this module.
With the help of chatGPT, I wrote this little checker exe:
#include <iostream>
int main() {
#ifdef __cplusplus
std::cout << "__cplusplus defined: " << __cplusplus << std::endl;
#endif
#ifdef c_plusplus
std::cout << "c_plusplus defined" << std::endl;
#endif
}
which when executed in PS spits out:
PS C:\Users\USERNAME\Code\C> .\check.exe
__cplusplus defined: 201703
Some other checks:
PS C:\Users\USERNAME\Code\C> perl -V:cc
cc='gcc';
PS C:\Users\USERNAME\Code\C> perl -V:ccflags
ccflags='-std=c99 -DWIN32 -DWIN64 -DPERL_TEXTMODE_SCRIPTS -DMULTIPLICITY -DPERL_IMPLICIT_SYS -DUSE_PERLIO -D__USE_MINGW_ANSI_STDIO -fwrapv -fno-strict-aliasing -mms-bitfields';
PS C:\Users\USERNAME\Code\C> perl -V:cppflags
cppflags='-DWIN32';
PS C:\Users\USERNAME\Code\C> perl -V:ld
ld='g++';
PS C:\Users\USERNAME\Code\C>
Off the back of this (and again with help) I changed the Perl Config.pm line 91 from:
cc => 'gcc',
to
cc => 'g++',
so now I see:
PS C:\Users\USERNAME\Code\C> perl -V:cc
cc='g++';
but now I get the following install error:
>cpan install Image::Magick
CPAN: CPAN::SQLite loaded ok (v0.220)
Starting with version 2.29 of the cpan shell, a new download mechanism
is the default which exclusively uses cpan.org as the host to download
from. The configuration variable pushy_https can be used to (de)select
the new mechanism. Please read more about it and make your choice
between the old and the new mechanism by running
o conf init pushy_https
Once you have done that and stored the config variable this dialog
will disappear.
(the cpan shell warning above was repeated again after each of the following steps)
CPAN: HTTP::Tiny loaded ok (v0.090)
CPAN: Net::SSLeay loaded ok (v1.94)
CPAN: IO::Socket::SSL loaded ok (v2.089)
Fetching with HTTP::Tiny:
https://cpan.org/authors/01mailrc.txt.gz
Fetching with HTTP::Tiny:
https://cpan.org/modules/02packages.details.txt.gz
Fetching with HTTP::Tiny:
https://cpan.org/modules/03modlist.data.gz
Database was generated on Thu, 28 Aug 2025 09:26:10 GMT
Updating database file ...
Done!
Running install for module 'Image::Magick'
CPAN: Digest::SHA loaded ok (v6.04)
CPAN: Compress::Zlib loaded ok (v2.213)
Checksum for C:\STRAWB~1\cpan\sources\authors\id\J\JC\JCRISTY\Image-Magick-7.1.1-28.tar.gz ok
CPAN: Archive::Tar loaded ok (v3.04)
CPAN: YAML::XS loaded ok (vv0.904.0)
CPAN: CPAN::Meta::Requirements loaded ok (v2.143)
CPAN: Parse::CPAN::Meta loaded ok (v2.150010)
CPAN: CPAN::Meta loaded ok (v2.150010)
CPAN: Module::CoreList loaded ok (v5.20250421)
Configuring J/JC/JCRISTY/Image-Magick-7.1.1-28.tar.gz with Makefile.PL
Checking if your kit is complete...
Looks good
Warning (mostly harmless): No library found for -lMagickCore-7.Q16HDRI
Warning (mostly harmless): No library found for
Generating a gmake-style Makefile
Writing Makefile for Image::Magick
Writing MYMETA.yml and MYMETA.json
JCRISTY/Image-Magick-7.1.1-28.tar.gz
C:\Strawberry\perl\bin\perl.exe Makefile.PL -- OK
Running make for J/JC/JCRISTY/Image-Magick-7.1.1-28.tar.gz
cp Magick.pm blib\lib\Image\Magick.pm
AutoSplitting blib\lib\Image\Magick.pm (blib\lib\auto\Image\Magick)
Running Mkbootstrap for Magick ()
"C:\Strawberry\perl\bin\perl.exe" -MExtUtils::Command -e chmod -- 644 "Magick.bs"
"C:\Strawberry\perl\bin\perl.exe" -MExtUtils::Command::MM -e cp_nonempty -- Magick.bs blib\arch\auto\Image\Magick\Magick.bs 644
"C:\Strawberry\perl\bin\perl.exe" "C:\Strawberry\perl\lib\ExtUtils/xsubpp" -typemap C:\STRAWB~1\perl\lib\ExtUtils\typemap -typemap C:\STRAWB~1\cpan\build\Image-Magick-7.1.1-4\typemap Magick.xs > Magick.xsc
"C:\Strawberry\perl\bin\perl.exe" -MExtUtils::Command -e mv -- Magick.xsc Magick.c
g++ -c -I/usr/local/include/ImageMagick-7 -DMAGICKCORE_HDRI_ENABLE=1 -DMAGICKCORE_QUANTUM_DEPTH=16 -I/usr/include/libxml2 -I"C:\STRAWB~1\c\include/ImageMagick-7" -std=c99 -DWIN32 -DWIN64 -DPERL_TEXTMODE_SCRIPTS -DMULTIPLICITY -DPERL_IMPLICIT_SYS -DUSE_PERLIO -D__USE_MINGW_ANSI_STDIO -fwrapv -fno-strict-aliasing -mms-bitfields -I/usr/include/freetype2 -g -O2 -Wall -pthread -DMAGICKCORE_HDRI_ENABLE=1 -DMAGICKCORE_QUANTUM_DEPTH=16 -O2 -DVERSION=\"7.1.1\" -DXS_VERSION=\"7.1.1\" "-IC:\STRAWB~1\perl\lib\CORE" -D_LARGE_FILES=1 -DHAVE_CONFIG_H Magick.c
cc1plus.exe: warning: command-line option '-std=c99' is valid for C/ObjC but not for C++
Magick.xs:56:10: fatal error: MagickCore/MagickCore.h: No such file or directory
56 | #include <MagickCore/MagickCore.h>
| ^~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
gmake: *** [makefile:353: Magick.o] Error 1
JCRISTY/Image-Magick-7.1.1-28.tar.gz
C:\STRAWB~1\c\bin\gmake.exe -- NOT OK
Stopping: 'install' failed for 'Image::Magick'.
I then looked into how to specify which standard to use but it is getting beyond me. I am not sure where to change the arguments that are used by the cpan install process.
I am aware other people have had the same kind of issue and asked the same kind of question here, but they have not had the same kind of result and I have not been able to use those fixes. I needed to explicitly write what I have tried and the multiple different errors I have seen along the way.
I also note that the latest version of Image::Magick on MetaCPAN references ImageMagick 7.1.1-28, whereas the latest version of ImageMagick as of this date is 7.1.2-2. So the Perl module on MetaCPAN is somewhat behind version-wise.
Any ideas as to what other things I can try to get the perl Image::Magick module to successfully install? I have been at this for days now...
Published by /u/RichardInTexas on Thursday 28 August 2025 18:17
I was a longtime Perl "tinkerer" from about 2000-2010, and during that time I wrote a bunch of CGI web apps that I use in my (totally-non-IT-related) business. Mostly a bunch of CRUD database stuff that is very specific to my business. Hits a MySQL database. Uses PDFlib Lite to generate PDF reports. It all lives on a dedicated server at a hosting company (something I do not have root access to). I still tweak bits of the code now and again, but suffice to say that I have already forgotten more than I ever knew about how this all works. But work it does, and we have been using these web apps in my business ever since.
Every now and again, my hosting company changes something, and some part of these apps break. Usually it's something simple...they forgot to install a library that I need, or something now has a different path. I open a ticket with them, and they help me unravel the problem. I am in the middle of one of those times now, and for whatever reason they are not being as responsive as they once were. I am in hopes that someone here can at least give me a push in the right direction. I'm sure whatever is broken here is a simple fix, but it's beyond my capabilities at this point to troubleshoot this.
My particular pain point this time around is the PDF-generation aspect of my scripts. There is a library called "PDFlib Lite" installed (supposedly) on the server. And I am using the pdflib_pl module to interface with it. Here's an example Hello World PDF generator that worked before, but now does not:
#!/usr/bin/perl
use lib "/usr/home/{my username}/usr/local/lib/site_perl";
use strict;
use CGI;
use pdflib_pl;

my $q = new CGI;

# create the PDF
my $p = PDF_new();
PDF_open_file($p, "");

# put some text in it
my $fontb = PDF_findfont($p, "Helvetica-Bold", "host", 0);
PDF_setfont($p, $fontb, 12);
PDF_show_boxed($p, 'Hello World!', 200, 200, 300, 20, 'left', "");

# close the PDF
PDF_close($p);

# spew it out!
print $q->header('application/pdf');
while (my $output = PDF_get_buffer($p)) {
    print $output;
}
The script compiles (perl -c from the command line) just fine. But it craps out when it calls the PDF_open_file() subroutine. Web server says:
www.{mydomain}.com [Thu Aug 28 13:09:02 2025] [error] [pid 2151361] cgi_common.h(74): [client 99.99.99.99:58661] AH01215: stderr from /usr/wwws/users/blah/blah/blah/pdftest.cgi: Undefined subroutine &main::PDF_open_file called at /usr/wwws/users/blah/blah/blah/ pdftest.cgi line 9.
The module is in place, in the use lib directory. Other custom modules in that directory still work fine.
Any idea where to start? Anything I should try? Any help/ideas greatly appreciated.
Thanks
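One low-level check that might narrow things down (just a diagnostic sketch, not a known fix; the lib path is copied from the script above): see where pdflib_pl is being loaded from and whether it still defines PDF_open_file.
#!/usr/bin/perl
use strict;
use warnings;
use lib "/usr/home/{my username}/usr/local/lib/site_perl";
use pdflib_pl;

print "pdflib_pl loaded from: $INC{'pdflib_pl.pm'}\n";
my $found = defined &main::PDF_open_file || defined &pdflib_pl::PDF_open_file;
print $found ? "PDF_open_file is defined\n" : "PDF_open_file is NOT defined\n";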
Hello everyone, I hope you're doing well.
I was wondering if anyone happens to have a PDF copy of Perl For Dummies?
Thanks in advance!
Published by U. Windl on Thursday 28 August 2025 07:08
I wrote a CGI script that should redirect the browser to an error page if an error had been detected by the script.
The script runs under Apache 2.4.51 with these options inside a VirtualHost:
<Location /test>
SetHandler perl-script
PerlResponseHandler ModPerl::Registry
PerlOptions +ParseHeaders
Options +ExecCGI
</Location>
However, it does not work:
Edge silently saves the result into the download directory, and Firefox asks where to save the file.
When I try to use the network debugging tool in Firefox, I cannot see the redirect response; instead I see a failed request (file not found) for the page that triggers the redirect.
When I run the CGI script on command line, I get this output:
HTTP/1.0 302 Found
Server: cmdline
Status: 302 Found
Date: Wed, 27 Aug 2025 11:08:20 GMT
Location: /test/error?s=K;em=Fehlende%20oder%20ung%C3%BCltige%20Parameter;es=406%20;et=Parameterfehler
The code that created the redirect looks like this:
use CGI qw(-nosticky);
use constant URL_BASE => "/test"; # base URL
use constant PATH_ERROR => "/error"; # error path
use constant URL_ERROR => URL_BASE . PATH_ERROR; # error URL
# redirect to error message
sub error($$;$$$)
{
my ($query, $message, $title, $code, $reason) = @_;
#...set parameters
print $query->redirect(
{
'-nph' => 1,
'-uri' => URL_ERROR . '?' . $query->query_string(),
});
exit;
}
error($query, 'Fehlende oder ungültige Parameter', 'Parameterfehler', 406);
If you can't tell me what's wrong with the code, please tell me how to debug this.
If I leave out the nph, the test result is this:
Status: 302 Found
Location: /test/error?s=K;em=Fehlende%20oder%20ung%C3%BCltige%20Parameter;es=406%20;et=Parameterfehler
I found out that removing PerlOptions +ParseHeaders causes the output to be saved, but when I re-add it, I get the "file not found" error.
When the file is saved, it looks like a valid HTTP redirect header:
HTTP/1.1 302 Found
Server: Apache
Status: 302 Found
Date: Thu, 28 Aug 2025 06:59:16 GMT
Location: http://www-tel/test/error?em=Fehlende%20oder%20ung%C3%BCltige%20Parameter;es=406%20;et=Parameterfehler
Published by Brett Estrade on Wednesday 27 August 2025 20:28
We are moving full steam ahead. We are finding the Journals are not so easy to put out twice a year, but the editing process for Issue #2 is moving ahead nonetheless. We are now collecting papers for inclusion in Issue #3. But our hybrid conferences are proving to be very successful endeavors. We hope you will consider submitting a Science Track paper or regular Perl talk to this 2-day hybrid conference in sunny ole Austin, TX, USA.
See more:
Published by Mose on Wednesday 27 August 2025 20:25
I'm trying to build an array of hashes that contains only differences between 2 other arrays of hashes (Similar to array_minus via Array::Utils) but am struggling to do so.
The code I have thus far is as follows:
#!/bin/perl
use strict;
use warnings;
use Data::Dumper;
my $hash1 = {
name => '/',
authorized => {
'@group1' => 'rw',
'@group2' => 'r'
}
};
my $hash2 = {
name => '/test',
authorized => {
'@group2' => 'rw'
}
};
my (@array1,@array2);
push(@array1, $hash1);
push(@array1, $hash2);
push(@array2, $hash2);
push(@array2, $hash2);
sub diffresources {
my ($first,$second) = @_;
my $result;
for my $f (@{$first}) {
while (my ($index, $s) = each @{$second}) {
if ($f->{name} eq $s->{name}){
my $test = &diffelements($f->{authorized},$s->{authorized});
if (defined $test) {
my $elem = $f;
$elem->{authorized} = $test;
print Dumper $elem;
push(@{$result},$elem);
}
last;
}
}
}
return $result;
}
sub diffelements {
my ($first,$second) = @_;
my $result;
for (keys %{$first}) {
if (exists $second->{$_}) {
$result->{$_} = $first->{$_} unless ($first->{$_} eq $second->{$_});
} else {
$result->{$_} = $first->{$_};
}
}
print Dumper $first,$second,$result;
return $result;
}
I'm struggling to figure out how to build a function that returns nothing if they match, or only an array of hashes that contain the differences if they don't.
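One way to get the "return nothing if they match" behaviour (a sketch, not a drop-in rewrite of the code above): have the inner diff return an empty list when nothing differs, so callers can simply test for truth.
sub diffelements {
    my ($first, $second) = @_;
    my %diff;
    for my $key (keys %{$first}) {
        if (!exists $second->{$key} || $first->{$key} ne $second->{$key}) {
            $diff{$key} = $first->{$key};
        }
    }
    return %diff ? \%diff : ();   # empty list (false) means "no differences"
}

# caller side:
# if (my $diff = diffelements($f->{authorized}, $s->{authorized})) { ... }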
Published by /u/StrayFeral on Wednesday 27 August 2025 16:07
Yes, I know some of you totally dislike the topic, but still I need to know. I want to create a virtual environment for Perl with local versions of the libraries.
So far I have seen two things, one called "local::lib" and the other called Carton. I am now reading about them, but I am not sure whether these two are used together or whether they each do the same thing.
So far I don't need to keep different versions of Perl (yes, I saw I can do this too, but I don't need it); for now I just need local versions of the modules, so I don't mess up the modules installed by the operating system.
I am on Lubuntu, so consider anything working on Debian/Ubuntu.
(and yes, I know I can create a container and keep it totally separated which is a great option, but still I want to know if we have a "venv" analog in Perl)
Thanks!
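For what it's worth, the two tools usually cover different needs: local::lib gives you a per-user module directory (typically ~/perl5), while Carton pins per-project dependencies declared in a cpanfile and installs them into ./local. A minimal cpanfile sketch (the module names are placeholders):
# cpanfile
requires 'Mojolicious', '>= 9.0';
requires 'DBI';

# then, in the project directory:
#   carton install            # installs the pinned deps into ./local
#   carton exec -- perl app.pl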
Published by Mohammad Sajid Anwar on Tuesday 26 August 2025 17:38
Caching in Perl using memcached.
Please check out the link for more information:
https://theweeklychallenge.org/blog/caching-using-memcached
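The details are in the linked post; as a rough sketch of the idea, using the Cache::Memcached client (the post may use a different module, and the server address here is an assumption):
use strict;
use warnings;
use Cache::Memcached;

my $memd = Cache::Memcached->new({ servers => ['127.0.0.1:11211'] });

sub expensive_lookup {
    my ($key) = @_;
    my $value = $memd->get($key);           # try the cache first
    unless (defined $value) {
        $value = compute($key);             # cache miss: do the real work
        $memd->set($key, $value, 3600);     # keep it for an hour
    }
    return $value;
}

sub compute { my ($key) = @_; return "value for $key" }   # placeholder for the real work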
I consider myself successful.
I’m 45, with a sportscar, a house, a family, and a small business now 30 years old.
I made good decisions.
My car is 15 years old, my monitors are 20 years old, my chair is 25 years old, my desk is 25 years old.
My computers historically last 10 years; I just tossed a mouse that survived 8 years.
Perl.
My first venture into server-side web development was Lotus Notes…that lasted 5 days.
Perl has lasted me nearly 30 years, and we sure as hell ain’t done yet!
I didn’t meet the perl community until the Toronto conference in 2023.
That’s where I saw the faces.
That’s when I saw the humanity.
That’s why I felt the guilt.
I’d paid for my car, for my house, for my computers, my desk, and my chair.
Perl came for free.
I didn’t pay for it; I didn’t work for it.
But at the conference, before me stood the people who did.
I’ve worked hard for my success.
It’s clear to me now that I wasn’t the only one working hard for my success.
How much of your success is because of all of their hard work?
And how much have you contributed in return?
Me? I contributed absolutely nothing.
That’s my guilt.
But this is the very best kind of guilt.
Because it’s not too late!
In fact, now is the perfect time.
I’ve hired members of the perl community.
That’s a start.
I’m donating directly to the foundation.
And I intend to continue doing so.
If my success depends on perl, then perl depends on my success.
Perl was always a perfect fit for me.
As a syntax, it was concise, yet flexible.
My code’s form could mirror its function.
How perfectly splendid.
Perl is the basis of Holophrastic’s web development platform: Pipelines.
As new popular languages have come along, they’re touted as the best new amazing modern.
And then they vanish, supplanted by the very next amazing.
Had I invested in each, I’d have more archives of code than formats of music albums.
Instead, I rest easy in the knowledge that perl will always keep up.
xls, xlsx, pdf, mysql, mariadb, imagemagick, json, curl
There will always be a cpan module waiting for me when I need it.
My clients have no idea the power that I wield with my fingertips.
AI translations, image processing craziness, survey systems built for 100'000 concurrent test-takers…
And while all of that definitely took some expertise on my end,
Perl created exactly zero hurdles,
It never got in the way at all.
Longevity.
Perl’s got it.
The Perl Foundation’s got it.
The Perl Community’s got it.
I’ve got it.
The lines between? All blurry.
Custom Business Web Development
I work as what I’d call an inside-contractor.
I’m a speed-dial (is that still a thing?) phone call away.
Occasionally, a client will call me a dozen times in a day.
I’m closer than their colleague in the next office.
It usually starts with “a website”
the typical sales- or marketing-oriented something-pretty
Then business takes over.
A product-configuration wizard
A sales-commission calculator
Can you connect to our accounting software and provide the reports that it can’t?
Can you replace our warehouse-production backend?
What about a portal?
Yes; yes I can.
One business obstacle at a time.
Now, 30 years later, I often encounter code & comments, decades old.
The feeling I get from seeing a line that’s now older than the me who wrote it…
I used to feel alone.
I now feel that thanks to the perl community, I was never alone.
Holophrastic is proud to be the first sponsor of the 2026 Perl and Raku Conference.
Published by DragosTrif on Monday 25 August 2025 21:50
This post assumes that you know your way around the Mojolicious web framework. If not, you should start here:
Testing web applications can be tricky because it involves many different components like the HTML, JavaScript, CSS, databases and, last but not least, the Perl code.
Fortunately the Perl ecosystem is very rich in test libraries.
For the purpose of this article I will use the following libraries:
Test2::V0
Test::Mojo
DBD::Mock::Session::GenerateFixtures
Sub::Override
Playwright
Proc::Background
File::Path
File::Copy
For our first test, we'll use Test::Mojo, the official testing helper for Mojolicious. It provides a rich set of methods for simulating HTTP requests and making assertions about web applications. Best of all, it can run tests without requiring a Mojolicious server to be running.
Now let's break down what we need to do. First, we need to use some sort of test database. There are many options here. Personally, I like to create fixtures using the DBD::Mock::Session::GenerateFixtures module, a thin wrapper over DBD::Mock that generates fixtures by copying them into a JSON file when a real dbh is provided. After the mocked data is generated, I use Sub::Override to make sure the real dbh is overridden by the mocked one.
use strict;
use warnings;
use Mojo::Base -strict;
use Test2::V0;
use Test::Mojo;
my $override = Sub::Override->new();
my $db = MyApp::DB->new(domain => 'development', type => 'main');
# when the dbh attribute is set, it copies data into t/db_fixtures/02_login.t.json
# when the dbh attribute is not set, it will use the test name to resolve the fixture file
my $mock_dumper = DBD::Mock::Session::GenerateFixtures->new({dbh => $db->dbh()});
# override in the app the dbh
$override->replace('Rose::DB::dbh' => sub {return $mock_dumper->get_dbh});
With this approach, after the fixtures file is generated, we can start using Test::Mojo with the web server down.
Now let's add some tests to our code:
subtest 'test login with correct user and password' => sub {
$t->get_ok('/')->status_is(200);
my $login_url = $t->tx->res->dom->find('a')->grep(qr/login/)->map(attr => 'href')->first;
ok($login_url, "Found Login link pointing to $login_url");
my $response = $t->post_ok($login_url => form => { username => 'Diana', password => 'password' });
$response->status_is(302);
my $cookie = $response->tx->res->cookies->[0];
is($cookie->name, 'mojolicious', "The Mojolicious session cookie is set");
ok(length $cookie->value, "Cookie has a value");
my $redirect_url = $response->tx->res->headers->location();
my $followed = $t->get_ok($redirect_url)->status_is(200);
my $log_out_url = $response->tx->res->dom->find('a')->grep(qr/logout/)->map(attr => 'href')->first;
ok($log_out_url, "Found login out link pointing to $log_out_url");
};
Save everything in a file t/login.t and run:
carton exec -- prove -v t/login.t
This should generate a similar output:
[2025-08-24 14:35:14.51780] [643942] [trace] [tMiiGo7o61v9] GET "/"
[2025-08-24 14:35:14.51799] [643942] [trace] [tMiiGo7o61v9] Routing to controller "SkinCare::Controller::Welcome" and action "landing_page"
[2025-08-24 14:35:14.51848] [643942] [trace] [tMiiGo7o61v9] Rendering template "example/welcome.html.ep"
[2025-08-24 14:35:14.51933] [643942] [trace] [tMiiGo7o61v9] Rendering template "layouts/default.html.ep"
[2025-08-24 14:35:14.52067] [643942] [trace] [tMiiGo7o61v9] 200 OK (0.002865s, 349.040/s)
ok 1 - test login with correct user and password {
ok 1 - GET /
ok 2 - 200 OK
ok 3 - Found Login link pointing to /login
ok 4 - POST /login
ok 5 - 302 Found
ok 6 - GET /welcome/1
ok 7 - 200 OK
ok 8 - Found login out link pointing to /logout
ok 9 - The Mojolicious session cookie is set
ok 10 - Cookie has a value
1..10
}
So what just happened:
we loaded the landing page and tested the status:
$t->get_ok('/')->status_is(200)
Then we searched for the login URL inside the DOM using Mojo::DOM:
$t->tx->res->dom->find('a')->grep(qr/login/)->map(attr => 'href')->first
We submitted the login form and tested the response:
$response->status_is(302);
my $cookie = $response->tx->res->cookies->[0];
is($cookie->name, 'mojolicious', "The Mojolicious session cookie is set");
ok(length $cookie->value, "Cookie has a value");
my $redirect_url = $response->tx->res->headers->location();
my $followed = $t->get_ok($redirect_url)->status_is(200);
my $log_out_url = $response->tx->res->dom->find('a')->grep(qr/logout/)->map(attr => 'href')->first;
ok($log_out_url, "Found login out link pointing to $log_out_url");
These small tests hit all my code and illustrate the advantages of Test::Mojo:
If you need to test JavaScript, Test::Mojo quickly reaches its limits. A good alternative to it is the Perl client for Playwright.
Because Playwright requires the web server to be up and running, several tweaks are required:
Install nodejs and friends:
apt-get install -y nodejs
npm install -g playwright
npx playwright install --with-deps chromium
npm install -g express
if ($ENV{FIXTURES_PATH}) {
if ($ENV{GENERATE_FIXTURES}) {
my $override = Sub::Override->new();
my $mock_dumper = DBD::Mock::Session::GenerateFixtures->new({dbh => __PACKAGE__->new(domain => 'development', type => 'main')->dbh()});
$override->replace('Rose::DB::dbh' => sub {return $mock_dumper->get_dbh});
move "script/db_fixtures/skin_care.json", $ENV{FIXTURES_PATH};
} elsif($ENV{USE_FIXTURES}) {
my $override = Sub::Override->new();
my $mock_dumper = DBD::Mock::Session::GenerateFixtures->new(file => $ENV{GENERATE_FIXTURES_PATH});
$override->replace('Rose::DB::dbh' => sub {return $mock_dumper->get_dbh});
}
}
my $proc = Proc::Background->new("carton exec -- morbo ../skin_care/script/skin_care") or die "$!";
sleep 3;
...
if ($proc->alive) {
$proc->terminate;
$proc->wait;
}
use strict;
use warnings;
use Test2::V0;
use English qw ( -no_match_vars );
use Data::Dumper;
use Playwright;
use Proc::Background;
use File::Path qw( rmtree );
use File::Spec;
use feature 'say';
my $handle = Playwright->new();
my $browser = $handle->launch( headless => 0, type => 'chrome' );
my $page = $browser->newPage();
$ENV{GENERATE_FIXTURES} = 0;
my ($volume, $directory, $test_file) = File::Spec->splitpath($PROGRAM_NAME);
$ENV{FIXTURES_PATH} = "t/db_fixtures/$test_file.json";
$ENV{USE_FIXTURES} = 1;
my $proc = Proc::Background->new("carton exec -- morbo ../skin_care/script/skin_care") or die "$!";
sleep 3;
The above sets the env vars used by the application to determine whether it generates or uses a fixture file for our test, and sets up the browser.
my $res = $page->goto('http://127.0.0.1:3000/login', { waitUntil => 'networkidle' });
$page->screenshot({path => '01_load_form.png'});
is($res->status(), 200, 'login form is ok');
$page->fill('input[name="username"]', 'Diana');
$page->fill('input[name="password"]', 'password');
$page->screenshot({path => '02_populate_form_from.png'});
$page->click('input[type="submit"].btn-primary');
my $cookies = $page->context->cookies();
is($cookies->[0]->{name}, 'mojolicious', "The Mojolicious session cookie is set");
ok(length $cookies->[0]->{value}, "Cookie has a value");
$page->screenshot({path => '03_user_home.png'});
my $create_routine_locator = $page->locator('#create_routine');
my $create_routine_menu = $create_routine_locator->allInnerTexts()->[0];
is($create_routine_menu, 'Create a routine', 'Create routine menu is ok');
$create_routine_locator->click();
$page->waitForSelector('#btn-1', { state => 'visible', timeout => 5000 });
$page->click('#btn-1');
sleep 3;
$page->screenshot({path => '04_user_sub_menu.png'});
my $create_routine_content_locator = $page->locator('#content-1');
ok($create_routine_content_locator->count() > 0, 'Found #content-1 in the DOM');
So what just happened: we loaded the login form, filled it in and submitted it, checked the session cookie, and then used click or locator calls to interact with and test the page.
Published by Gabor Szabo on Monday 25 August 2025 09:37
Originally published at Perl Weekly 735
Hi there,
Over the last couple of weeks Mohammad and some others got really excited about the TIOBE index indicating that Perl has become a lot more popular recently. I doubt that. It is far more likely to be a measurement error. I mean, they say that Perl jumped from 25th place to 9th place in one year. Ada also jumped from 30th place to 13th place. Who is writing in Ada, and what?
Anyway, let's assume there is interest in Perl. Wouldn't it be a good idea to convert that into meetings and presentations? I went over the list of Perl Monger groups. I recall seeing more than 200 groups in that list. Now there are only 22 groups, and most of those don't seem to have any activity. The events I found I added to our events page. Their events should show up in our calendar. I also reached out to some of them asking them to update our calendar (which is generated from a JSON file on GitHub), and I asked some if they would be interested in organizing online events.
Online events: I don't have any Perl-related business any more (that is, no training requests and no contract work, not even for moving away from Perl), but I have a few books, and giving presentations related to those topics helps me update the books. So maybe we can organize a few of those. I also hope that some other people will be interested in giving online presentations. Nothing fancy. Think of an "explaining this stuff to my co-worker" level of presentation.
Till then, enjoy your week!
--
Your editor: Gabor Szabo.
Not only that, but it was a bug on Windows!
GTC = Graphics::Toolkit::Color - calculate color (sets), IO many spaces and formats
A lot of symbol-heavy code looks unclear, until you understand it.
The Weekly Challenge by Mohammad Sajid Anwar will help you step out of your comfort-zone. You can even win prize money of $50 by participating in the weekly challenge. We pick one champion at the end of the month from among all of the contributors during the month, thanks to the sponsor Lance Wicks.
Welcome to a new week with a couple of fun tasks "Equal Group" and "Final Score". If you are new to the weekly challenge then why not join us and have fun every week. For more information, please read the FAQ.
Enjoy a quick recap of last week's contributions by Team PWC dealing with the "Common Characters" and "Find Winner" tasks in Perl and Raku. You will find plenty of solutions to keep you busy.
The post delivers well-crafted, efficient solutions for both problems, demonstrating smart algorithmic thinking and sound Perl technique.
Solutions demonstrate expert-level Raku programming and serve as excellent educational resources. The solutions are not only correct but also showcase Raku's unique features and strengths effectively.
Elegant and concise, also the use of non-core module Set::Bag. The solutions demonstrate advanced Perl programming techniques.
Elegant use of Perl's standard toolkit, clear results, and deterministic output. Standout solution: mathematical, robust and demonstrating PDL's power.
Solutions demonstrate thoughtful problem analysis and clean implementation, particularly the innovative line-tracking approach for Task 2 that avoids unnecessary board state analysis.
Solutions represent a solid, practical approach to problem-solving using fundamental programming techniques. They also demonstrate that sometimes the most straightforward approach is the most effective, particularly for educational purposes and maintainable code.
Solutions demonstrate strong problem-solving skills with particular excellence in the binary representation approach for Task 2, which is both computationally optimal and elegantly implemented.
Solutions demonstrate strong programming fundamentals with particular emphasis on clarity and completeness, making them excellent educational resources while remaining practical for real-world use.
The solutions represent a thoughtful, well-engineered approach to both problems. The Task 2 solution in particular stands out for its elegant pattern-based approach to checking winning conditions which is both efficient and easy to understand.
The solutions serve as excellent examples of Pythonic problem-solving and would be valuable references for developers working in either language.
Great CPAN modules released last week;
MetaCPAN weekly report.
August 27, 2025
September 9, 2025
September 10, 2025
September 19, 2025
You joined the Perl Weekly to get weekly e-mails about the Perl programming language and related topics.
Want to see more? See the archives of all the issues.
Not yet subscribed to the newsletter? Join us free of charge!
(C) Copyright Gabor Szabo
The articles are copyright the respective authors.
There are SPVM standard modules on CPAN.
SPVM::EqualityChecker::Address
SPVM::Error::FieldNotSpecified
SPVM::Error::MethodCallNotPermitted
Published by prz on Sunday 24 August 2025 00:09
Published by prz on Sunday 24 August 2025 00:01
This is the weekly favourites list of CPAN distributions. Votes count: 50
Week's winners (+2): Time::Left & Path::Tiny
Build date: 2025/08/23 22:00:52 GMT
Clicked for first time:
Increasing its reputation:
Graphics::Toolkit::Color 1.9 brought several big new features, which I will write about when 2.0 comes out - just to sum up what changed since 1.0. This time I want to describe the internal changes, since this release completed an in-depth rewrite. So this will be about software engineering, architecture and coding style. TLDR: simple, clear, DDD, OO by composition and a color space DSL!
GTC is a compact but not too small project (~180kB of pure code) without dependencies - my chance to create something perfect, right? At least I could test my ideals, which are:
An architecture made of a few ideas and pieces that can contain further substructures.
Compact file form: small, single-purpose packages, methods and blocks; files structured from the general to the specific, starting with a one-line description.
Telling, consistent names and naming schemes across all types of identifiers and packages.
These three points support and reinforce each other. For instance:
And once you have settled on a name (and written it down in a project-wide glossary, which is documentation), you use this name for that thing everywhere, in all categories of names (packages, methods, vars, handles, comments, documentation). They instantly start to cross-reference each other and become much more understandable and way more helpful. (Have you ever wished the code talked to you in the same language as the documentation?) And if your code starts to look like a textbook for children - good. You show how much of a genius you are by having little trouble with your code. To make this clear, here are some examples from GTC (which is not perfect - yet):
For instance: a variable containing a color space name is always named "space_name" - in any method, in any package. And there is a class, G.::Toolkit::Color::Space, which has a method called "name" that returns the same thing a variable "space_name" contains. Makes sense?
Second example: since color is the main topic of the library, it's kind of implied as a context everywhere unless stated otherwise. That is why, everywhere, I call a tuple (ARRAY ref) holding the values of one color "values", because these are the color values. This might sound like a bad idea, since it's too broad a term. But I never use it anywhere for anything else. It is a stable marker and it refers to the package GTC::Values, the home of the values of one color.
Having short, well-structured files with short methods also turned out to be very helpful. I already posted about it elsewhere and will do so here as well. I found a lack of focus in a method to be a sure sign of code smell. The content has to reflect the name and vice versa. If you always keep a stern eye on that, you will catch bad code very early. Plus you make it easier to write libraries one layer above, when they combine functions with a clear purpose. Sounds obvious - but you're only good if you are actually doing it. The other nice side effect of small and clear methods/subs is: far fewer dirty corners where bugs can hide. We have all heard the spiel during conference talks: if you practice TDD it will cost time - but much less time than hunting bugs. Guess what - the same principles apply here - so no lame excuses like "my boss doesn't give me the time to make it pretty". And unlike TDD, looking at nice code is good for mental health; tests, in contrast, are like an attic, mostly seen from the outside.
GTC has three important APIs - the whole structure is just a byproduct of them.
The first is of course the package Graphics::Toolkit::Color itself. It is just the public-facing API, presenting all the functionality in a compact, perly way (GTC = Swiss army knife for color computing). I hide a lot of the power by having not that many methods and by reusing some argument names (when they mean the same thing) throughout the methods (small footprint in the mind). Just by combining arguments you get what would normally take different methods. There are entire modules on CPAN doing as much as one argument of one method in GTC. As Mark Overmeer says: small modules are incomplete. I go a step further and proclaim: if you are able to stack methods and to choose and combine their arguments, you can write one-liners that are immensely powerful (just like Perl). This works best if you have only one kind of color object that has all the methods - hence the unified public API. There we have a lot of documentation (in POD and return messages) and the argument cleaning, all in the GTC main package. That is all the knowledge about communicating with the outside world in one place - good for maintenance and already more than enough while trying to follow the policy: small files! So all the real functionality is in the iceberg below, which can be chopped into handy slices.
The most important way to slice complexity is to separate out all the color-space-specific code: each space into one file. GTC supports 15 different color spaces, and with version 1.93 it will be 18. Some module authors prefer to offer a plugin API for that, but I don't think this is the best idea. Writing code for a color space is fire and forget, since color space definitions almost never change. So why create the burden for 10 people to maintain 20 modules (because CPAN or Perl requirements do change over time)? They would have to maintain distributions without touching the productive code. I went the other route and made it as easy as possible to write the code for a color space, which then becomes part of the normal code base (all bases belong..). You can write a patch with 10 or 20 lines of Perl (if it gets complicated) and never have to think about it again.
The class Graphics::Toolkit::Color::Space basically provides a DSL that lets you define everything you need: the name, an alias, axis names, value ranges, value precision, axis types (linear or angular), converter code (between normalized values), additional space constraints, additional formats and value formats. You can even add a value converter. We needed that for the NCol color definitions, which are very friendly to the human eye - but you still want to calculate with the values, so you need a translation between the two. So you get every option you could dream of to tweak the space behaviour, while the common problems stay solved and handled inside the GTC::Space class.
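To make the idea of such a data-driven definition more concrete, here is a minimal Perl sketch. It is purely illustrative: the define_space function, its field names and the toy greyscale space are invented for this example and are not the actual Graphics::Toolkit::Color::Space API.

```
use strict;
use warnings;

# Illustrative only: a made-up, minimal stand-in for a color space definition DSL,
# not the real Graphics::Toolkit::Color::Space interface.
sub define_space {
    my (%def) = @_;
    die "a space needs a name"            unless defined $def{name};
    die "axis names must be an ARRAY ref" unless ref $def{axis} eq 'ARRAY';
    die "one range per axis, please"      unless @{ $def{range} } == @{ $def{axis} };
    die "converters must be CODE refs"
        if grep { ref $_ ne 'CODE' } values %{ $def{convert} // {} };
    return \%def;    # stand-in for a real space object
}

# a toy one-axis greyscale space, defined from plain data plus two code refs
my $grey = define_space(
    name    => 'Grey',
    alias   => 'GREY',
    axis    => ['lightness'],
    range   => [ [0, 100] ],
    type    => ['linear'],
    convert => {
        from_rgb => sub { my ($r, $g, $b) = @_; [ ($r + $g + $b) / 3 ] },
        to_rgb   => sub { my ($l) = @{ $_[0] }; ($l, $l, $l) },
    },
);
print "defined color space: $grey->{name}\n";
```

The point is simply that a space becomes data plus a couple of code refs, so adding one is a small patch rather than a new distribution.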
Because this class is so big, it clearly violates the small-files policy. But no problem - I cut it into 5 parts. One package holds the utilities I also need elsewhere. Space::Basis handles the axis names and axis short names, and checks whether a tuple has the right amount of values or a HASH color definition has the right value names to fit this space. Then we have Space::Shape, which cares about value ranges, normalisation, rounding, clamping of values and so on. And finally there is a package with the IO for all the supported formats. The latter three objects are just attributes of the color space object, which itself mostly contains only the delegation to these attributes and the conversion code. OO by composition for the win!
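As a tiny aside, here is a hypothetical Perl sketch of that composition-plus-delegation pattern. The package names (Demo::Space, Demo::Space::Basis, Demo::Space::Shape) and methods are invented for illustration and are not GTC's internal code.

```
use strict;
use warnings;

# Hypothetical sketch of "OO by composition": a space object owns a basis part
# and a shape part and mostly just delegates to them.
package Demo::Space::Basis;
sub new        { my ($class, @axes) = @_; bless { axes => [@axes] }, $class }
sub axis_count { scalar @{ $_[0]{axes} } }

package Demo::Space::Shape;
sub new   { my ($class, $max) = @_; bless { max => $max }, $class }
sub clamp { my ($self, @values) = @_;
            map { $_ < 0 ? 0 : $_ > $self->{max} ? $self->{max} : $_ } @values }

package Demo::Space;
sub new {
    my ($class, %args) = @_;
    bless {
        basis => Demo::Space::Basis->new(@{ $args{axes} }),
        shape => Demo::Space::Shape->new($args{max} // 1),
    }, $class;
}
# the composed object only delegates to its parts
sub axis_count { $_[0]{basis}->axis_count }
sub clamp      { my $self = shift; $self->{shape}->clamp(@_) }

package main;
my $rgb = Demo::Space->new(axes => [qw(red green blue)], max => 255);
print $rgb->axis_count, "\n";                       # 3
print join(',', $rgb->clamp(300, -5, 128)), "\n";   # 255,0,128
```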
The particular color spaces are then instances of Color::Space. Even the file/package name tells you that: GTC::Space::Instance::CMYK. Such a package has code that runs at load time, creating one GTC::Space object, giving it all the data and algorithms, and returning it as the return value of that package (instead of the notorious 1). These instances are held by the GTC::Space::Hub (thanks to Ovid for the inspiration for the name!). The Hub doesn't have much code either, because it is a glorified iterator over all space objects - well, and it tracks dependencies for multi-hop color conversions. There is another side pocket in the architecture: the name space GTC::Name, which translates values into color names and vice versa.
But generally speaking, GTC::Space + Space::Hub is the mid-level API. All the packages that actually calculate colors are not concerned with the gory details of a color space or how to convert something, which makes them more readable too. And in case you missed it, the low-level API (for those gory details) is the mentioned color space DSL, which is AFAIK complete now - hence we are approaching 2.0 and I am happy. I sincerely hope you are too.
Published by Ron Savage on Saturday 23 August 2025 06:06
Published by scify on Friday 22 August 2025 13:08
I'm editing a bunch of old text files for upload to a site, and, due to their age, they're all formatted with extraneous carriage returns in the middle of paragraphs, like so:
Lorem ipsum dolor sit amet, consectetur
adipiscing elit, sed do eiusmod tempor
incididunt ut labore et dolore magna
aliqua. Ut enim ad minim veniam, quis
nostrud exercitation ullamco laboris
nisi ut aliquip ex ea commodo consequat.
Duis aute irure dolor in reprehenderit
in voluptate velit esse cillum dolore
eu fugiat nulla pariatur. Excepteur
sint occaecat cupidatat non proident,
sunt in culpa qui officia deserunt
mollit anim id est laborum.
I would like to use the regex replace function to get rid of the carriage returns in the middle of the paragraphs while preserving the breaks between paragraphs, so I can simply run it and end up with:
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
My naive thought was to use find: \r[a-z] replace: [a-z], but while that does get rid of the correct carriage returns, it also (unsurprisingly, in retrospect) replaces the first letter of each line with [a-z]. Is there a simple way to do this, or should I stick to doing it manually like I have been?
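For what it's worth, the usual fix in an editor regex is either a capture group (find \R([a-z]) and replace with a space followed by $1, so the letter is put back) or a lookahead (find \R(?=[a-z]) and replace with a single space, so the letter is never consumed at all). Here is a small Perl sketch of the same idea, assuming the files really do follow the pattern above (continuation lines start with a lowercase letter, new paragraphs start with a capital); the script and file names are hypothetical.

```
#!/usr/bin/perl
# Sketch only: join hard-wrapped lines inside a paragraph, on the assumption
# that a wrapped continuation line starts with a lowercase letter while a new
# paragraph starts with a capital.
use strict;
use warnings;

local $/;            # slurp the whole file
my $text = <>;

# a line break followed by a lowercase letter is a wrap, not a paragraph break;
# the lookahead keeps the letter, so nothing has to be put back
$text =~ s/\R(?=[a-z])/ /g;

print $text;
```

Something like perl unwrap.pl old.txt > new.txt would then handle a whole file in one pass; check the output before overwriting anything.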
Published by Clean Compiler on Thursday 21 August 2025 13:37
Every few years, developers like to declare programming languages “dead.” Cobol is dead. Perl is dead. PHP is dead. Java is dead.
Published by Harishsingh on Monday 18 August 2025 02:12
Last week, I watched a senior developer spend fifteen minutes trying to decode a single line of Perl during our team’s code archaeology…
Published by prz on Friday 15 August 2025 18:22
Published by kqr on Thursday 14 August 2025 11:37
In my quest to get to know Nix and NixOS better, I'm going to try moving a small utility I have onto my NixOS server. There are two components to this utility:
A Perl script, which requires both (a) the Perl module IPC::Run and (b) the command openssl to be present on the system when it runs.
A shell script, which runs the aws CLI, but also runs the aforementioned Perl script (and thus also has its run-time dependencies).
My mental model says "Oh but I just need Nix to set up a shell where all the above things are present and then run the shell script inside that. Easy!" but I have a feeling this is the wrong mental model for how to deploy software to NixOS. I should probably instead think in terms of a build step that sets up all the run-time dependencies and then chucks the scripts in the nix store, so that when they run they bring with them their dependencies.
But as for actually accomplishing this? I'm at a loss.
I have tried the helper utilities writePerlBin and writeShellApplication, feeding the output of the former to the latter. When I invoked Perl from the shell script, it was not a perl with IPC::Run included.
I have tried manually creating a derivation containing both scripts with mkDerivation, but when I specify the dependencies as build inputs, naturally they aren't also considered run-time dependencies.
When I search the internet for this, I get suggestions to either (a) create a new script that wraps my existing script and modifies environment variables (PATH, PERL5LIB?) to refer to dependencies, or (b) perform a substituteInPlace on my script to change the string "perl" into ${perl.withPackages(p: [ p.IPCRun ])}. These solutions seem a little ... hacky! I'd like to confirm with someone knowledgeable that they are the right approach before going for it. It seems to me like there ought to be an easier way to declare which run-time dependencies a script has.
Published by alh on Thursday 14 August 2025 04:59
Dave writes:
I spent last month mainly continuing to work on rewriting and modernising perlxs.pod, Perl's reference manual for XS. The first draft is now about 80% complete. The bits that have been reworked so far have ended up having essentially none of the original text left, apart from section header titles (which are now in a different order). So it's turning into a complete rewrite from scratch.
It's still a work-in-progress, so nothing's been pushed yet.
Other than that, I successfully diagnosed an issue where DBI was emitting warnings on recent perls under FreeBSD.
Summary:
Total: 42:01 (HH::MM)
Published by alh on Monday 11 August 2025 05:21
Tony writes:
```
[Hours]    [Activity]
2025/07/01 Tuesday
 0.93  #23390 review behaviour, testing, review associated PR 23392 and approve
 0.65  #23326 review discussion, add fix to 5.38, 5.40 votes files, mark closable with comment
 0.47  #23384 review discussion, testing and comment
=====
 3.48

2025/07/02 Wednesday
 0.90  #23385 more review, comments
=====
 1.70

2025/07/03 Thursday
 1.32  #23150 review, review discussion, comments
 0.08  #23385 brief follow-up
 0.43  #23384 review discussion and decide not follow-up
 0.15  #22120 follow-up
 1.15  #23340 read through discussion, think about solutions
=====
 4.65

2025/07/07 Monday
 0.23  github notifications
 0.65  #23358 review, research
=====
 1.76

2025/07/09 Wednesday
 2.17  #23326 follow-up, work on a fix
 0.10  #1674 rebase and re-push PR 23219
 0.03  #1674 check CI and apply to blead
 0.62  #23326 fix non-threaded, testing and re-push
 1.02  #23375 review, testing and approve with comment
=====
 4.39

2025/07/10 Thursday
 2.17  #23226 testing and follow-up, work on a more extensive test, testing, push for CI/smoking
 0.43  #23416 review and comment
 0.88  #23419 review and comment
 0.57  #23326 look into CI failures (alas Windows), fixes and
=====
 4.05

2025/07/11 Friday
=====
 0.13

2025/07/14 Monday
 0.22  #23349 review updates and approve
 1.08  #23433 review and comment, work on PR for SLU to re-introduce apos in names upstream PR#141
 1.18  #23226 look into openbsd test failures. debugging
=====
 3.05

2025/07/15 Tuesday
 0.37  #23433 follow-up on SLU PR#141
 1.80  #23226 debugging
=====
 2.99

2025/07/16 Wednesday
 1.80  #23226 follow-up, testing, push with a workaround, work on minor clean up, comments
=====
 2.17

2025/07/17 Thursday
 1.28  #23429 review, comments
=====
 1.41

2025/07/21 Monday
 0.60  #23301 review updates and comments
 0.12  #23312 followup
 0.88  #23429 review, testing, research and comment
 1.27  #23202 review
=====
 4.89

2025/07/22 Tuesday
 0.78  check coverity scan report, reasonable errors though none apply in the circumstances reported
 0.40  #23301 testing, comment
 0.20  #23460 review and comment
=====
 2.46

2025/07/23 Wednesday
 0.17  #23301 review updates and approve
 0.08  #23460 comment, review and approve
 0.35  #23461 review upstream ticket and the change, comment
 0.40  #23447 manage to break it, comment
 0.28  jkeenan's pthread thread on p5p/#23306 testing
 0.40  #23462 review, comments
 0.08  #23392 re-check and apply to blead
 0.47  #23464 review issue, reproduce, review code, test a fix and make PR 23465
 0.45  #23178 re-check and apply to blead
 0.55  #23414 review, comment
 0.48  #23462 look into CI failure, review some more, comment
=====
 4.53

2025/07/24 Thursday
 1.03  #22125 rebase, testing freebsd case suggested by Dave, comment, more testing
 0.20  #23468 review, research and approve
 0.10  #23467 review, research and approve
 0.42  #23463 research, testing and comment
=====
 3.20

2025/07/28 Monday
 0.30  #23481 review and comment
 1.23  #23367 review, testing and approve
 0.23  #23462 review updates
 0.17  #23479 review and approve
 1.07  #23477 review, testing
 0.52  #23477 more testing, approve
 0.42  #23459 review, research and comment
=====
 4.52

2025/07/29 Tuesday
 0.60  #23459 testing and comment
 0.73  #23323 research and comment
 0.08  #23481 review updates and approve
=====
 1.73

2025/07/30 Wednesday
 0.20  #23433 link SLU ticket #141 follow-up
 0.15  #23499 follow-up
 0.27  #23489 review and comment
 0.18  #23491 review and approve
 0.15  #23494 review and approve
 0.08  #23495 review and approve
 0.08  #23496 review and approve
 0.08  #23498 review and approve
 0.15  #23501 review, research and comment
 0.38  #23503 review test results, testing without the builtin math on Linux and comment
 0.47  #23508 review, try to break it and approve
 0.32  #23506 review, comment
=====
 3.81

2025/07/31 Thursday
 0.08  #23501 review and approve
 0.22  #23500 review and approve
 0.22  #23499 review and apply to blead
 0.90  #23509 review and approve
 0.33  #23514 review and approve
 0.38  #23513 review and comment
 1.20  #22125 try to reproduce reported freebsd failure, manage
=====
 3.33

Which I calculate is 58.25 hours.

Approximately 60 tickets were reviewed or worked on, and 4 patches were applied.
```
Published on Thursday 07 August 2025 09:55
The Perl and Raku Foundation (TPRF) is thrilled to announce a substantial $11,500 donation from SUSE, one of the world’s leading enterprise Linux and cloud-native and AI solutions providers. This generous contribution bolsters the Perl 5 Core Maintenance Fund and demonstrates SUSE’s commitment to the open-source ecosystem.
This donation from SUSE is actually made up of two parts. $10,000 is being donated by SUSE LLC and an additional $1,500 is being provided by The SUSE Open Source Network, to support the development and sustainability of Perl. This aligns with the network’s mission to empower and support open source communities.
“At SUSE, Perl is a fundamental component and member of our ecosystem,” explains Miguel Pérez Colino, Director of Operations, Linux Product Management & Marketing. “We provide it as part of our Linux offerings by actively supporting Perl packages in SUSE Linux Enterprise and openSUSE. We use it extensively in our toolset, powering among others OpenQA and Open Build Service, this last one is used to build not just Linux packages but also Kubernetes.”
SUSE’s OpenQA project is an automated testing framework that ensures quality across countless hardware configurations and software combinations. At its heart is Perl, orchestrating complex test scenarios with the reliability that system administrators have come to expect.
Similarly, Open Build Service, which runs on many services written in Perl, represents the modern evolution of package management, creating not just traditional Linux packages but also container images and Kubernetes distributions.
SUSE’s donation is a demonstration of digital stewardship: the recognition that the tools we rely upon require active investment to remain secure, efficient, and relevant.
“We are proudly donating to The Perl and Raku Foundation (TPRF) to ensure Perl’s continued development and health, which is vital to the open-source world we are part of and champion,” Colino continues.
This investment addresses some critical aspects of language maintenance:
Security Vigilance: In an era of increasing cyber threats, timely security patches aren’t optional; they’re essential. TPRF’s maintenance fund ensures that vulnerabilities can be addressed promptly, protecting countless systems worldwide.
Performance Evolution: Modern computing demands continue to evolve. The fund supports ongoing optimisation efforts that keep Perl competitive in today’s performance-conscious environment.
Platform Diversity: As computing platforms proliferate, from traditional servers to edge devices to cloud containers, Perl must remain compatible and efficient across this expanding landscape.
Community Responsiveness: Bug reports and feature requests from the global Perl community require careful evaluation and implementation. This fund ensures these contributions don’t languish unaddressed.
SUSE’s contribution represents more than financial support; it’s a blueprint for sustainable open-source stewardship. When organisations that build upon open-source foundations reinvest in those foundations, they create a virtuous cycle that benefits everyone. It’s a recognition that the digital commons we all depend upon flourish only through collective stewardship.
Published by alh on Wednesday 06 August 2025 10:07
Paul writes:
I didn't get any P5P work done in June, instead working on some other projects while awaiting the 5.42 release.
In July I've managed to continue some work on sub signatures improvements
Total: 8 hours
Published by Marco Pessotto on Tuesday 05 August 2025 00:00
In my programming career centered around web applications I’ve always used dynamic, interpreted languages: Perl, JavaScript, Python, and Ruby. However, I’ve always been curious about compiled, strongly typed languages and if they can be useful to me and to my clients. Based on my recent findings, Rust would be my first choice. It’s a modern language, has excellent documentation and it’s quite popular. However, it’s very different from the languages I know.
I read most of the book a couple of years ago, but given that I didn’t do anything with it, my knowledge quickly evaporated. This time I read the book and immediately after that I started to work on a non-trivial project involving downloading XML data from different sources, database operations, indexing and searching documents, and finally serving JSON over HTTP. My goal was to replace at least part of a Django application which seemed to have performance problems. The Django application uses Xapian (which is written in C++) via its bindings to provide the core functionality. Indexing documents would be delegated to a Celery task queue.
Unfortunately Xapian does not have bindings for Rust so far.
My reasoning was: I could use the PostgreSQL full text search feature instead of Xapian, simplifying the setup (updating a row would trigger an index update, instead of delegating the operation to Celery).
After reading the Rust book I truly liked the language. Its main feature is that it (normally) gives you no room for nasty memory management bugs which plague languages like C. Being compiled to machine code, it’s faster than interpreted languages by an order of magnitude. However, having to state the type of variables, arguments, and return values was at first kind of a culture shock, but I got used to it.
When writing Perl, I’m used to constructs like these:
if (my $res = download_url($url)) {
...
}
which are not possible any more. Instead you have to use the match construct and extract values from the Option (Some/None) and Result (Ok, Err) enumerations. This is the standard way to handle errors and variables which may or may not have values. There is nothing like an undef, and this is one of the main Rust features. Instead, you need to cover all the cases with something like this:
match download_url(url.clone()) {
    Ok(res) => {
        ...
    },
    Err(e) => println!("Error {url}: {e}"),
}
Which can also be written as:
if let Ok(res) = download_url(url.clone()) {
    ...
}
You must be consistent with the values you are declaring and returning, and take care of the mutability and the borrowing of the values. In Rust you can’t have a piece of memory which can be modified in multiple places: for example, once you pass a string or a data structure to a function, you can’t use it any more. This is without a doubt a good thing. When you pass a hash reference to a function in Perl, you don’t know what happens to it. Things can be modified as a side effect, and you are going to realize only later, at debugging time, why that piece of data is not what you expect.
In Rust land, everything feels under control, and the compiler throws errors at you which most of the time make sense. It explains to you why you can’t use that variable at that point, and even suggests a fix. It’s amazing the amount of work behind the language and its ability to analyze the code.
The string management feels a bit weird because it’s normally anchored to the UTF-8 encoding, while e.g. Perl has an abstract way to handle it, so I’m used to thinking differently about it.
The async feature is nice, but present in most of the modern languages (Perl included!), so I don’t think that should be considered the main reason to use Rust.
Bottom line: I like the language. It’s very different to what I was used to, but I can see many advantages. The downside is that you can’t write all those “quick and dirty” scripts which are the daily bread of the sysadmin. It lacks that practical, informal approach I’m used to.
Once I got acquainted with the language, I went shopping for “crates” (which is what modules are called in Rust) here: https://www.arewewebyet.org/.
Lately I have a bit of a dislike for object-relational mappings (ORM), so I didn’t go with diesel or sqlx, but I went straight for tokio_postgres.
This saved me quite a bit of documentation reading and gave me direct access to the database. Nothing weird to report here. It feels like using any other DB driver in any other language, with a statement, the placeholders and the arguments. The difference, of course, is that you need to care about the data types which are coming out of the DB (again, the Option enum is your friend and the error messages are helpful).
To get data from the Internet, reqwest did the trick just fine without any surprise.
For XML deserialization, serde was paired with quick-xml. This is one of the interesting bits.
You start defining your data structures like this:
use serde::Deserialize;
#[derive(Debug, Deserialize)]
struct OaiPmhResponse {
    #[serde(rename = "responseDate")]
    response_date: String,
    request: String,
    error: Option<ResponseError>,
    #[serde(rename = "ListRecords")]
    list_records: Option<ListRecords>,
}
// more definitions follow, to match the structure we expect
Then you feed the XML string to the from_str function like this:
use quick_xml::de::from_str;
fn parse_response(xml: &str) -> OaiPmhResponse {
    match from_str(xml) {
        Ok(res) => res,
        // return a dummy one with no records in it in case of errors
        Err(e) => OaiPmhResponse {
            response_date: String::from("NOW"),
            request: String::from("Invalid"),
            error: Some(ResponseError {
                code: String::from("Invalid XML"),
                message: e.to_string(),
            }),
            list_records: None,
        },
    }
}
which takes care of the parsing and gives you back either an Ok with the data structure you defined inside and the tags properly mapped, or an error. The structs can have methods attached so they provide a nice OOP-like encapsulation.
Once the data collection was successful, I moved to the web application itself.
I chose the Axum framework, maintained by the Tokio project and glued all the pieces together.
The core of the application is something like this:
#[derive(Serialize, Debug)]
struct Entry {
    entry_id: i32,
    rank: f32,
    title: String,
}

async fn search(
    State(pool): State<ConnectionPool>,
    Query(params): Query<HashMap<String, String>>,
) -> (StatusCode, Json<Vec<Entry>>) {
    let conn = pool.get().await.expect("Failed to get a connection from the pool");
    let sql = r#"
        SELECT entry_id, title, ts_rank_cd(search_vector, query) AS rank
        FROM entry, websearch_to_tsquery($1) query
        WHERE search_vector @@ query
        ORDER BY rank DESC
        LIMIT 10;
    "#;
    // params.get() yields an Option<&String>; fall back to an empty search string
    let query = match params.get("query") {
        Some(value) => value.as_str(),
        None => "",
    };
    let out = conn.query(sql, &[&query]).await.expect("Query should be valid")
        .iter()
        .map(|row| Entry {
            entry_id: row.get(0),
            title: row.get(1),
            rank: row.get(2),
        })
        .collect();
    tracing::debug!("{:?}", &out);
    (StatusCode::OK, Json(out))
}
Which simply runs the query using the input provided by the user, runs the full text search, and returns the serialized data as JSON.
During development it felt fast. The disappointment came when I populated the database with about 30,000 documents of various sizes. The Django application, despite returning more data and the facets, was still way faster. With the two applications running on the same (slow) machine I got a response in 925 milliseconds from the Rust application, and in 123 milliseconds for the Django one!
Now, most of the time is spent in the SQL query, so the race here is not Python vs. Rust, but Xapian vs. PostgreSQL’s full text search, with Xapian (Python is just providing an interface to the fast C++ code) winning by a large measure. Even if the Axum application is as fast as it can get, because it’s stripped to the bare minimum (it has no sessions, no authorization, no templates), the time saved is not enough to compensate for the lack of a dedicated and optimized full text search engine like Xapian. Of course I shouldn’t be too surprised.
To actually compete with Django + Xapian, I should probably use Tantivy, instead of relying on the PostgreSQL full text search. But that would be another adventure…
The initial plan turned out to be a failure, but this was really a nice and constructive excursion, as I could learn a new language, using its libraries to do common and useful tasks like downloading data, building small web applications, and interfacing with the database. Rust appears to have plenty of quality crates.
Besides the fact that this was just an excuse to study a new language, it remains true that rewriting existing, working applications is extremely unrewarding and most likely ineffective. Reaching parity with the current features requires a lot of time (and budget), and at the end of the story the gain could be minimal and better achieved with optimization (here I think about all our clients running Interchange).
However, if there is a need for a microservice doing a small task where speed is critical and where the application overhead should be minimal, Rust would be a viable option.
Published on Tuesday 05 August 2025 00:00
Published by The CS Engineer on Saturday 02 August 2025 11:11
From Perl 5.42 to OpenSilver 3.2, July saw meaningful updates for developers. Here are 6 that are actually worth knowing.