Published by Mike on Sunday 08 September 2024 07:59
I am using Strawberry Perl and trying to install the Image::Magick package on Windows 11, but I keep getting errors:
C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools>cpanm --force Image::Magick
--> Working on Image::Magick
Fetching http://www.cpan.org/authors/id/J/JC/JCRISTY/Image-Magick-7.1.1-28.tar.gz ... OK
Configuring Image-Magick-v7.1.1 ... OK
Building and testing Image-Magick-v7.1.1 ... FAIL
! Installing Image::Magick failed. See C:\Users\miki\.cpanm\work\1725780811.5804\build.log for details. Retry with --force to force install it.
Contents of the log file "C:\Users\miki\.cpanm\work\1725780811.5804\build.log":
cpanm (App::cpanminus) 1.7047 on perl 5.040000 built for MSWin32-x64-multi-thread
Work directory is C:\Users\miki/.cpanm/work/1725780811.5804
You have make C:\Strawberry\c\bin\gmake.exe
You have LWP 6.77
Falling back to Archive::Tar 3.02_001
Searching Image::Magick () on cpanmetadb ...
--> Working on Image::Magick
Fetching http://www.cpan.org/authors/id/J/JC/JCRISTY/Image-Magick-7.1.1-28.tar.gz
-> OK
Unpacking Image-Magick-7.1.1-28.tar.gz
Entering Image-Magick-7.1.1
Checking configure dependencies from META.json
Checking if you have ExtUtils::MakeMaker 6.58 ... Yes (7.70)
Configuring Image-Magick-v7.1.1
Running Makefile.PL
Gonna create 'libMagickCore.a' from 'C:\Program Files\ImageMagick-7.1.1-Q16\CORE_RL_MagickCore_.dll'
Checking if your kit is complete...
Looks good
Generating a gmake-style Makefile
Writing Makefile for Image::Magick
Writing MYMETA.yml and MYMETA.json
-> OK
Checking dependencies from MYMETA.json ...
Checking if you have parent 0 ... Yes (0.241)
Checking if you have ExtUtils::MakeMaker 0 ... Yes (7.70)
Building and testing Image-Magick-v7.1.1
cp Magick.pm blib\lib\Image\Magick.pm
AutoSplitting blib\lib\Image\Magick.pm (blib\lib\auto\Image\Magick)
Running Mkbootstrap for Magick ()
"C:\Strawberry\perl\bin\perl.exe" -MExtUtils::Command -e chmod -- 644 "Magick.bs"
"C:\Strawberry\perl\bin\perl.exe" -MExtUtils::Command::MM -e cp_nonempty -- Magick.bs blib\arch\auto\Image\Magick\Magick.bs 644
"C:\Strawberry\perl\bin\perl.exe" "C:\Strawberry\perl\lib\ExtUtils/xsubpp" -typemap C:\STRAWB~1\perl\lib\ExtUtils\typemap -typemap C:\Users\miki\.cpanm\work\1725780811.5804\Image-Magick-7.1.1\typemap Magick.xs > Magick.xsc
"C:\Strawberry\perl\bin\perl.exe" -MExtUtils::Command -e mv -- Magick.xsc Magick.c
gcc -c -I"C:\Program Files\ImageMagick-7.1.1-Q16\include" -std=c99 -DWIN32 -DWIN64 -DPERL_TEXTMODE_SCRIPTS -DMULTIPLICITY -DPERL_IMPLICIT_SYS -DUSE_PERLIO -D__USE_MINGW_ANSI_STDIO -fwrapv -fno-strict-aliasing -mms-bitfields -Os -falign-functions -falign-jumps -falign-labels -falign-loops -freorder-blocks -freorder-blocks-algorithm=stc -freorder-blocks-and-partition -DVERSION=\"7.1.1\" -DXS_VERSION=\"7.1.1\" "-IC:\STRAWB~1\perl\lib\CORE" -D_LARGE_FILES=1 -DHAVE_CONFIG_H Magick.c
In file included from C:\Program Files\ImageMagick-7.1.1-Q16\include/MagickCore/magick-config.h:25,
from C:\Program Files\ImageMagick-7.1.1-Q16\include/MagickCore/MagickCore.h:29,
from Magick.xs:56:
C:\Program Files\ImageMagick-7.1.1-Q16\include/MagickCore/magick-baseconfig.h:274:6: error: #error ImageMagick was build with a 64 channel bit mask and that requires a C++ compiler
274 | # error ImageMagick was build with a 64 channel bit mask and that requires a C++ compiler
| ^~~~~
gmake: *** [makefile:350: Magick.o] Error 1
-> FAIL Installing Image::Magick failed. See C:\Users\miki\.cpanm\work\1725780811.5804\build.log for details. Retry with --force to force install it.
It seems that both the Perl and ImageMagick versions are 64-bit:
C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools>perl -v

This is perl 5, version 40, subversion 0 (v5.40.0) built for MSWin32-x64-multi-thread
I tried to install several versions of ImageMagick (e.g. ImageMagick-7.1.1-Q16) and checked all the options:
Added the installation root to PATH (no bin subfolder exists).
I also installed Visual Studio Build Tools 2022 and retried installing from CPAN via the Developer Command Prompt.
Running cl
seems to work fine:
C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools>cl
Microsoft (R) C/C++ Optimizing Compiler Version 19.41.34120 for x86
Copyright (C) Microsoft Corporation. All rights reserved.
usage: cl [ option... ] filename... [ /link linkoption... ]
Running magick --version
seems to work fine:
C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools>magick --version
Version: ImageMagick 7.1.1-38 Q16 x64 b0ab922:20240901 https://imagemagick.org
Copyright: (C) 1999 ImageMagick Studio LLC
License: https://imagemagick.org/script/license.php
Features: Channel-masks(64-bit) Cipher DPC Modules OpenCL OpenMP(2.0)
Delegates (built-in): bzlib cairo flif freetype gslib heic jng jp2 jpeg jxl lcms lqr lzma openexr pangocairo png ps raqm raw rsvg tiff webp xml zip zlib
Compiler: Visual Studio 2022 (194134120)
Any idea what else I can try here?
Thanks! Miki
Published by mauke on Sunday 08 September 2024 06:11
perlvar: remove indirect object syntax

- remove the `method HANDLE EXPR` example
- mention that an explicit `use IO::Handle` has not been required since perl v5.14 (that was 13 years ago)
- remove the advice to prefer built-in variables over IO::Handle methods because:
  - `STDOUT->autoflush(1);` is about 200x more readable than `$|++;` or similar nonsense
  - loading IO::Handle isn't actually that expensive (and if it is, we should figure out how to speed up `$fh->autoflush` in core, not discourage programmers from using it)
  - the performance advice is from 1999 and hasn't been updated since (commits 14218588221b, 19799a22062e)
  - as far as I can tell, this advice is mostly Tom Christiansen's personal opinion and not the general consensus of the Perl community
- move a comma from one paragraph to the next (technically unrelated, but it's in the general vicinity of the preceding changes)
Published by mauke on Sunday 08 September 2024 06:10
github PR template: add a horizontal line separator

I think it looks better if there is visual separation between the pull request description proper and the perldelta checkboxes at the bottom. (In Markdown, a row of hyphens renders as an HTML `<hr>` (horizontal rule) element, which looks like a nice section separator.)
Published by /u/OODLER577 on Sunday 08 September 2024 02:33
This project is moving along just fine. Below is the current leaderboard. It's not about personal module count, it's about creating more awareness of Perl. Also, some of these APIs are actually pretty neat! E.g., there's a "card deck" API for card-playing programs.
All are invited to participate. Please click here for the rules and to claim an API. This is a great way to get your first CPAN module published, which is a major milestone for any Perl programmer. It's also great for experienced devs to blow off some steam or hone their skills. If you're new to CPAN and need help, email me directly at [oodler@cpan.org](mailto:oodler@cpan.org).
The runner of FreePublicAPIs has been extremely supportive of this effort. He even created a site API for us, and I obliged by creating a real Perl client for it!
I'd like to specifically request that anyone using any of the new Perl stuff, like signatures or Corinna/class, submit some non-contrived examples of how they work, or as proof of why people should use them. Here is a good summary of the new features in Perl 5.40 - give it a shot! I may even try something other than my Dispatch::Fu and Util::H2O::More modules, even though they make writing command-line tools with subcommands and web API modules dead simple - TIMTOWTDI!
Claimed | PAUSE | API Info | Module Name | Status | Completed |
---|---|---|---|---|---|
2024-08-28 | OODLER | kanyerest | Acme::Free::API::Ye | Completed | 2024-08-28 |
2024-08-29 | OODLER | chuck-norris-jokes-api | Acme::Free::API::ChuckNorris | Completed | 2024-08-29 |
2024-08-30 | OODLER | reddit-stocks | Acme::Free::API::Stonks | Completed | 2024-08-30 |
2024-08-31 | SANKO | advice-slip-api | Acme::Free::Advice::Slip | Completed? | 2024-09-03 |
2024-08-31 | SANKO | unsolicited-advice-api | Acme::Free::Advice::Unsolicited | Completed? | 2024-09-03 |
2024-09-01 | CAVAC | ip-geolocation-api | Acme::Free::API::Geodata::GeoIP | Completed | 2024-09-01 |
2024-09-03 | OODLER | dog-api | Acme::Free::Dog::API | Completed | 2024-09-04 |
2024-09-03 | SANKO | insult-api | Acme::Insult::Glax | Completed? | 2024-09-03 |
2024-09-03 | SANKO | evil-insult-generator | Acme::Insult::Evil | Completed? | 2024-09-03 |
2024-09-03 | OODLER | api | Acme::Free::Public::APIs | Completed | 2024-09-06 |
2024-09-06 | OODLER | keyval-api | WebService::KeyVal | Pending | |
2024-09-07 | HAX | ipify | Webservice::Ipify | Pending | |
Published by capser on Saturday 07 September 2024 22:12
If you go to YouTube and hit the transcribe button (for some videos YouTube has a transcription service), you can see the whole video text on the right side. If you download it to a text file, it comes in this format. I want to separate the timestamps from the text.
0:12
well good morning everybody thank you for joining us here at the National Shrine of the
Divine Mercy it is
0:18
Vietnamese day and uh we're glad that you could join us uh I have a strong tie
0:24
to the Vietnamese people my father obviously serving in Southeast Asia being in Vietnam and uh my uh Seminary
0:31
time I went to Seminary with a lot of the Vietnamese sisters so praise be to God uh we're glad you're with us and
0:38
today's topic really is so important and I'm coming from a aspect of a personal
I want it formatted like this so I can put it into an Excel sheet and separate the timestamps if needed.
0:12, well good morning everybody thank you for joining us here at the National Shrine of the Divine Mercy it is
0:18, Vietnamese day and uh we're glad that you could join us uh I have a strong tie
0:24, to the Vietnamese people my father obviously serving in Southeast Asia being in Vietnam and uh my uh Seminary
0:31, time I went to Seminary with a lot of the Vietnamese sisters so praise be to God uh we're glad you're with us and
0:38, today's topic really is so important and I'm coming from a aspect of a personal
The timestamps go out as far as 1:35:35.
#!/usr/bin/perl
use strict;
use warnings;
if (@ARGV != 2) {
    die "Usage: $0 input_file output_file\n";
}
my ($input_file, $output_file) = @ARGV;
open(my $in, '<', $input_file) or die "Cannot open input file '$input_file': $!";
open(my $out, '>', $output_file) or die "Cannot open output file '$output_file': $!";
my $timestamp = '';
while (my $line = <$in>) {
    chomp $line;
    if ($line =~ /^[0-9:]+$/) {
        # Line is a timestamp
        $timestamp = $line;
    } elsif ($line =~ /\S/) {
        # Line is text and is not empty
        print $out "$timestamp, $line\n";
    }
}
close($in);
close($out);
print "Formatting complete. Output written to $output_file.\n";
I wrote the script above; however, the file is coming out like this, and it should not be:
, 0:12
, well good morning everybody thank you for joining us here at the National Shrine of the Divine Mercy it is
, 0:18
, Vietnamese day and uh we're glad that you could join us uh I have a strong tie
, 0:24
, to the Vietnamese people my father obviously serving in Southeast Asia being in Vietnam and uh my uh Seminary
, 0:31
, time I went to Seminary with a lot of the Vietnamese sisters so praise be to God uh we're glad you're with us and
, 0:38
, today's topic really is so important and I'm coming from a aspect of a personal
I tried this as well:
sed 's/^\([0-9:]*\) \(.*\)$/\1\:\2/'
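For what it's worth, the symptom shown above (every line getting an empty timestamp and a leading ", ") is what you would see if the downloaded transcript has Windows (CRLF) line endings: chomp removes only the newline, the trailing carriage return makes the timestamp regex fail, and $timestamp stays empty. A hedged variant of the same loop (my guess at the cause, not part of the original question) that strips either kind of line ending would be:

#!/usr/bin/perl
use strict;
use warnings;

# Reads the transcript from STDIN (or files named on the command line)
# and prints "timestamp, text" lines to STDOUT.
my $timestamp = '';
while (my $line = <>) {
    $line =~ s/\r?\n\z//;             # strip LF or CRLF line endings
    if ($line =~ /^[0-9:]+$/) {
        $timestamp = $line;           # remember the most recent timestamp
    } elsif ($line =~ /\S/) {
        print "$timestamp, $line\n";  # attach it to the following text line
    }
}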
Published by mauke on Saturday 07 September 2024 03:36
regen_perly.pl: remove mostly dead gather_tokens code

The idea behind this code was to find the big enum block in perlytmp.h that looks like:

enum yytokentype {
    YYEMPTY = -2,
    YYEOF = 0,        /* "end of file" */
    YYerror = 256,    /* error */
    YYUNDEF = 257,    /* "invalid token" */
    GRAMPROG = 258,   /* GRAMPROG */
    GRAMEXPR = 259,   /* GRAMEXPR */
    GRAMBLOCK = 260,  /* GRAMBLOCK */
    ...
};

... and append to it (in perly.h) a series of equivalent macro definitions, one for each (non-negative) enum symbol:

#define YYEOF 0
#define YYerror 256
#define YYUNDEF 257
#define GRAMPROG 258
#define GRAMEXPR 259
#define GRAMBLOCK 260
...

However, due to slight formatting changes in the code generated by bison 3+, this code has been essentially dead because the starting regex didn't match anymore, so $gather_tokens was never set to a true value. The last time we had token macros in perly.h was before commit a9f5ab8de628 (in 2016), in which I generated perly.h using bison 3 for the first time (without noticing the disappearance of the token macros). We could try to fix the regex logic in regen_perly.pl, but given that no one has complained about the missing token macros in 8 years (as far as I know), let's just remove the code. (Yay, accidental scream tests?)
Published by Perl Steering Council on Friday 06 September 2024 23:55
All present, and this time the meeting actually ended on time.
Published by leonerd on Friday 06 September 2024 22:33
Neater intent of aux vector handling when dumping fields of OP_METHSTART
Published by leonerd on Friday 06 September 2024 22:33
Use %UVuf rather than %d for argcheck aux
Published by /u/ivan_linux on Friday 06 September 2024 18:28
Hey friends, a few weeks back we introduced SlapbirdAPM (an open-source Perl application performance monitor), and received some great feedback from the community! Today we'd like to announce that you are now able to track DBI queries in your applications (only available for Dancer2 and Mojolicious for now), regardless of your database, ORM, etc. Here's what it looks like! You can see the Dancer2 code that generated these queries here. This is just one of the many monitoring features provided by SlapbirdAPM; hopefully you find them as useful as we do! And a reminder: we have a *forever* free tier available for everyone!
Published by prabu on Friday 06 September 2024 15:04
There are multiple text files (basically Informatica parameter files) inside a directory. I would like to parse all the files in the directory and print a single output. I want to print the file name followed by the first occurrence of $$WKF_TABLE_NM, $$WKF_SRC_TAB_NM, $$WKF_SRC2_TAB_NM and $$WKF_TGT_TAB_NM found after the text "Production", because we have the same variables for different environments. Not all of the variables are present in all the files, so whichever are available should be printed.
I have tried something like below but didn't get through this.
awk '/$$WKF_TABLE_NM/ {print}' wf_param_file1.txt
awk 'FNR == 1{ print FILENAME } ' wf_param_file1.txt
Can you please help me with the Shell command to perform this? Thank you in advance!
Sample Input file: [wf_param_file1.txt]
------------------------------------ Development --------------------------------------------------
[Infa_folder_name.WF:Workflow1]
$DBConnection_TGT=TGT_CON1
-------------------------------------------------------------
- Environment Information for Command Task
-------------------------------------------------------------
$$WKF_ODS_USER_ID=USER1
$$WKF_PKG_NAME=ETL_PKG_1
$$WKF_PROC_NAME=ETL_PROC_1
$$WKF_DB_ENV=
$$WKF_TABLE_NM=TBL1
$$WKF_SRC_TAB_NM=SCHEMA1.SRC_TABLE1
$$WKF_SRC2_TAB_NM=SCHEMA1.SRC_TABLE2
$$WKF_TGT_TAB_NM=SCHEMA1.TGT_TABLE
$$WKF_ERR_MSG=FAILED
$$WKF_ERR_NM=FAILURE
$$WKF_WRK_ENV=DEV
------------------------------------ Production --------------------------------------------------
[Infa_folder_name.WF:Workflow1]
$DBConnection_TGT=TGT_CON1
-------------------------------------------------------------
- Environment Information for Command Task
-------------------------------------------------------------
$$WKF_ODS_USER_ID=USER1
$$WKF_PKG_NAME=ETL_PKG_1
$$WKF_PROC_NAME=ETL_PROC_1
$$WKF_DB_ENV=
$$WKF_TABLE_NM=TBL1
$$WKF_SRC_TAB_NM=SCHEMA1.SRC_TABLE1
$$WKF_SRC2_TAB_NM=SCHEMA1.SRC_TABLE2
$$WKF_TGT_TAB_NM=SCHEMA1.TGT_TABLE
$$WKF_ERR_MSG=FAILED
$$WKF_ERR_NM=FAILURE
$$WKF_WRK_ENV=PROD
------------------------------------ Acceptance --------------------------------------------------
[Infa_folder_name.WF:Workflow1]
$DBConnection_TGT=TGT_CON1
-------------------------------------------------------------
- Environment Information for Command Task
-------------------------------------------------------------
$$WKF_ODS_USER_ID=USER1
$$WKF_PKG_NAME=ETL_PKG_1
$$WKF_PROC_NAME=ETL_PROC_1
$$WKF_DB_ENV=
$$WKF_TABLE_NM=TBL1
$$WKF_SRC_TAB_NM=SCHEMA1.SRC_TABLE1
$$WKF_SRC2_TAB_NM=SCHEMA1.SRC_TABLE2
$$WKF_TGT_TAB_NM=SCHEMA1.TGT_TABLE
$$WKF_ERR_MSG=FAILED
$$WKF_ERR_NM=FAILURE
$$WKF_WRK_ENV=ACP
Expected output:
wf_param_file1.txt
$$WKF_TABLE_NM=TBL1
$$WKF_SRC_TAB_NM=SCHEMA1.SRC_TABLE1
$$WKF_SRC2_TAB_NM=SCHEMA1.SRC_TABLE2
$$WKF_TGT_TAB_NM=SCHEMA1.TGT_TABLE
wf_param_file2.txt
$$WKF_TABLE_NM=TBL2
$$WKF_SRC_TAB_NM=SCHEMA1.SRC_TABLE2
$$WKF_TGT_TAB_NM=SCHEMA1.TGT_TABLE1
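Since this is a Perl-focused venue, here is one possible Perl sketch (my own, not part of the original question; the file glob pattern is an assumption based on the sample names) of the described extraction: for each parameter file, skip everything before the "Production" marker, then print the first occurrence of each wanted variable.

#!/usr/bin/perl
use strict;
use warnings;

# Variables to report.
my @wanted = qw(WKF_TABLE_NM WKF_SRC_TAB_NM WKF_SRC2_TAB_NM WKF_TGT_TAB_NM);

for my $file (glob 'wf_param_file*.txt') {
    open my $fh, '<', $file or die "Cannot open '$file': $!";
    my $in_production = 0;
    my %seen;
    my @found;
    while (my $line = <$fh>) {
        chomp $line;
        $in_production = 1 if $line =~ /Production/;
        next unless $in_production;
        for my $var (@wanted) {
            if ( !$seen{$var} && $line =~ /^\$\$\Q$var\E=/ ) {
                $seen{$var} = 1;      # keep only the first match after "Production"
                push @found, $line;
            }
        }
    }
    close $fh;
    print "$file\n", map { "$_\n" } @found if @found;
}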
Published by laurent_r on Friday 06 September 2024 02:56
These are some answers to the Week 285, Task 2, of the Perl Weekly Challenge organized by Mohammad S. Anwar.
Spoiler Alert: This weekly challenge deadline is due in a few days from now (on September 8, 2024, at 23:59). This blog post provides some solutions to this challenge. Please don’t read on if you intend to complete the challenge on your own.
Compute the number of ways to make change for given amount in cents. By using the coins e.g. Penny, Nickel, Dime, Quarter and Half-dollar, in how many distinct ways can the total value equal to the given amount? Order of coin selection does not matter.
A penny (P) is equal to 1 cent.
A nickel (N) is equal to 5 cents.
A dime (D) is equal to 10 cents.
A quarter (Q) is equal to 25 cents.
A half-dollar (HD) is equal to 50 cents.
Example 1
Input: $amount = 9
Ouput: 2
1: 9P
2: N + 4P
Example 2
Input: $amount = 15
Ouput: 6
1: D + 5P
2: D + N
3: 3N
4: 2N + 5P
5: N + 10P
6: 15P
Example 3
Input: $amount = 100
Ouput: 292
I first thought of populating a hash which, for each coin value, would provide the number of ways such value could be made of smaller change. But this turned out to be too complicated by hand. So, I thought we could use a recursive subroutine to build that hash, but, at this point, we can use a similar recursive subroutine to compute directly the number of ways to construct the input amount.
Here, make-change is the recursive subroutine. It loops over the coin values (in descending order) and subtracts the value from the input amount. If the amount left ($rest) is equal to zero, then we've found a new combination of coins and increment the count. Otherwise, we call the make-change subroutine recursively with the value of $rest.
The initial program did not work as expected because it found duplicate combinations (in different orders). For example, for an input value of 11, it might find the following combinations:
1 10
1 5 5
5 1 5
5 5 1
etc.
To prevent this, a second parameter, $limit, was added to the make-change subroutine, by which we forbid the program from using coins with a value larger than the current one.
my @coins = 50, 25, 10, 5, 1;
my $count;
sub make-change ($amount, $limit) {
    for @coins -> $coin {
        next if $coin > $amount;
        # Prevent duplicate combinations in different orders
        next if $coin > $limit;
        my $rest = $amount - $coin;
        if $rest == 0 {
            $count++;
        } else {
            make-change($rest, $coin);
        }
    }
    return $count;
}

my @tests = 9, 15, 100;
for @tests -> $test {
    $count = 0;
    printf "%-5d => ", $test;
    say make-change $test, 50;
}
This program displays the following output:
$ raku ./make-change.raku
9 => 2
15 => 6
100 => 292
Note that I initially had some concerns about performance with all these recursive calls, but it turned out that the program ran quite fast:
$ time raku ./make-change.raku
9 => 2
15 => 6
100 => 292
real 0m0.830s
user 0m0.000s
sys 0m0.015s
This is a port to Perl of the above Raku program. Please refer to the above section if you need explanations.
Note that I had to add the following pragma:
no warnings 'recursion';
to disable a warning about deep recursion, presumably when computing the combinations of pennies with an input value of 100. BTW, with today's computer hardware, the built-in recursion depth limit could be raised to a significantly higher level.
use strict;
use warnings;
no warnings 'recursion';
use feature 'say';
my @coins = (50, 25, 10, 5, 1);
my $count;
sub make_change {
    my ($amount, $limit) = @_;
    for my $coin (@coins) {
        next if $coin > $amount;
        # Prevent duplicate combinations in different orders
        next if $coin > $limit;
        my $rest = $amount - $coin;
        if ($rest == 0) {
            $count++;
        } else {
            make_change($rest, $coin);
        }
    }
    return $count;
}

my @tests = (9, 15, 50, 100);
for my $test (@tests) {
    $count = 0;
    printf "%-5d => ", $test;
    say make_change $test, 50;
}
This program displays the following output:
$ perl ./make-change.pl
9 => 2
15 => 6
100 => 292
I also benchmarked this Perl implementation, and it shows that, at least for such highly recursive programs, Perl is still much faster than Raku:
$ time perl ./make-change.pl
9 => 2
15 => 6
100 => 292
real 0m0.035s
user 0m0.000s
sys 0m0.030s
The next week Perl Weekly Challenge will start soon. If you want to participate in this challenge, please check https://perlweeklychallenge.org/ and make sure you answer the challenge before 23:59 BST (British summer time) on September 15, 2024. And, please, also spread the word about the Perl Weekly Challenge if you can.
(Picture from Erda Estremera)
I'm sometimes doing front-end dev. Or sometimes the best tool for the job is only installable via npm. It can be scripts to "uglify" or "beautify" CSS/JS, optimize SVG files (svgo), or clients to SaaS platforms (wrangler). Actually, it's not that important whether it's part of the JavaScript ecosystem; what I want is just to execute them!
The usual process: I start testing it locally, quickly and in a "trash-able" way, then install it in a continuous integration pipeline, then forget 😄
So far, does this resemble some periodic process of yours?
For this purpose, I'm frequently using npx (now part of npm). Do you see now where I'm going? 🤔
An npx-like for CPAN
I just uploaded App::cpx for this purpose. Give cpx the name of a binary and it will find it on CPAN, install it for you, then execute it.
$ cpx hr -s 40
🎯 Found [bin/hr]
📦 Release to install [https://cpan.metacpan.org/authors/id/W/WO/WOLDRICH/App-term-hr-0.11.tar.gz]
🔧 Will install into /home/tib/cpx-test/.cpx
DONE install Term-ExtendedColor-0.504
DONE install App-term-hr-0.11
2 distributions installed.
=======================================
(the purpose of hr is to draw horizontal lines)
Or another example with mlocate:
$ cpx mlocate Redis Moo
🎯 Found [bin/mlocate]
📦 Release to install [https://cpan.metacpan.org/authors/id/C/CE/CELOGEEK/App-Module-Locate-0.7.tar.gz]
🔧 Will install into /home/tib/cpx-test/.cpx
DONE install Module-Locate-1.80
DONE install Module-Build-0.4234
DONE install App-Module-Locate-0.7
3 distributions installed.
/usr/local/share/perl/5.34.0/Redis.pm
/usr/local/share/perl/5.34.0/Moo.pm
mlocate is a script that lives in App::Module::Locate. It's a utility to "find a module by its name".
cpx saves you from some frustrating tries:
- cpm install App::mlocate ?
- cpm install App::Mlocate , I know some authors do that
- locate , like this: cpm install App::locate
- cpm install App::Locate
- cpm install App::Module::Locate , but now it's installed locally; how do I set local::lib? Maybe I should install globally
And at the start, were you 100% sure mlocate was something that exists on CPAN? 🤔
cpx saves you from this pain and hides the internals of installing the module, if missing.
When running cpx again, it won't reinstall but will reuse the already installed binary:
$ cpx mlocate Redis Moo
⚓ Found executable already installed
/usr/local/share/perl/5.34.0/Redis.pm
/usr/local/share/perl/5.34.0/Moo.pm
$ curl -sL https://git.io/cpm | sudo perl - install -g App::cpx
$ cpx hr -s 40
In GitHub Actions, it would give something like this:
name: Test cpx
on: push
jobs:
  cpx:
    runs-on: ubuntu-latest
    steps:
      - name: Install cpx
        run: curl -sL https://git.io/cpm | sudo perl - install -g App::cpx
      - name: cpx hr
        run: cpx hr -s 40
As of now, the code for this small utility is ridiculously simple (source), but sometimes good ideas (yes, all glory to myself 😀) are simple to implement.
App::cpm::CLI
Published by user3183111 on Thursday 05 September 2024 14:08
Hi, I have a parent class called DedicatedToServers with a child class called Blog. Blog has four children: User, Post, Category and Comment.
In my code, if I do this:
package Blog;
use DBI;
use Data::Dumper;
use Moose;
extends 'DedicatedToServers';
my $dbh = DedicatedToServers->DbConnect();
All is good; however, if I then create an object of User like this:
package User;
use DBI;
use Data::Dumper;
use Moose;
extends 'Blog';
I get a recursive inheritance error.
So how do I avoid that? Do I have to extend the base class each time, or is there a way to extend a class that is extended from another class?
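For reference, here is a minimal illustration of my own (not taken from the question) showing that a three-level extends chain is fine in Moose; the "recursive inheritance" error means a class has somehow ended up in its own ancestry, for example two classes extending each other or a circular load:

package DedicatedToServers;
use Moose;
sub DbConnect { return 'dbh-placeholder' }   # stand-in for the real connection code

package Blog;
use Moose;
extends 'DedicatedToServers';

package User;
use Moose;
extends 'Blog';

package main;
my $user = User->new;
print ref($user), ' isa DedicatedToServers? ',
      ( $user->isa('DedicatedToServers') ? "yes\n" : "no\n" );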
Published by mabalenk on Thursday 05 September 2024 12:31
I'm struggling to pass command-line options to a systemx command from Perl's IPC::System::Simple module. Please find my minimal example below:
#!/usr/local/bin/perl
use v5.40;
use strict;
use utf8;
use warnings;
use IPC::System::Simple qw(systemx);
# set executable name
my $exe = 'hostname';
# set GNU time command
my $time = "/opt/local/bin/gtime";
# set log file name
my $ofile = $exe . '.log';
# set GNU time command options
my $opt = ' -a -f %e\t%U\t%S\t%P\t%M\t%c\t%w\t%F\t%R -o ' . $ofile;
systemx($time, $opt, $exe);
When I call systemx without options to gtime, it works well:
# systemx($time, $exe);
lobachevsky
0.00user 0.00system 0:00.01elapsed 0%CPU (0avgtext+0avgdata 1152maxresident)k
0inputs+0outputs (1major+189minor)pagefaults 0swaps
When I call systemx with options to gtime, my script fails:
# systemx($time, $opt, $exe);
gtime: cannot run -a -f %e\t%U\t%S\t%P\t%M\t%c\t%w\t%F\t%R -o hostname.log: No such file or directory
Command exited with non-zero status 127
0.00user 0.00system 0:00.00elapsed 50%CPU (0avgtext+0avgdata 1040maxresident)k
0inputs+0outputs (0major+83minor)pagefaults 0swaps
"/opt/local/bin/gtime" unexpectedly returned exit value 127 at ./profiler_mini.pl line 21.
I must be missing something obvious here.
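For what it's worth (my reading, not part of the original question): systemx bypasses the shell, so the whole of $opt above is handed to gtime as one single argument, which is why it complains it "cannot run -a -f ...". Passing every option and value as its own list element would look roughly like this:

use IPC::System::Simple qw(systemx);

my $exe   = 'hostname';
my $time  = '/opt/local/bin/gtime';
my $ofile = $exe . '.log';

# Each option and its value is a separate list element; systemx does no
# shell word-splitting, so a single space-joined string won't work.
systemx(
    $time,
    '-a',
    '-f', '%e\t%U\t%S\t%P\t%M\t%c\t%w\t%F\t%R',
    '-o', $ofile,
    $exe,
);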
Published by Ron Savage on Thursday 05 September 2024 12:04
Hi All
Why aren't my links appearing....
Note: The first 3 links point to my personal page: http://savage.net.au
The Perl wiki has been renamed from Perl.html - which was too generic - to Perl.Wiki.html:
https://savage.net.au/misc/Perl.Wiki.html
The Mojolicious wiki is at:
https://savage.net.au/misc/Mojolicious.Wiki.html
The Debian wiki is at:
https://savage.net.au/misc/Debian.Wiki.html
Note: The next 2 links point to my new website: https://symboliciq.au
This will accompany my upcoming Youtube channel
The Symbolic Language wiki is at:
https://symboliciq.au/misc/Symbolic.Language.Wiki.html
The Personal Security wiki is at:
https://symboliciq.au/misc/Personal.Security.Wiki.html
Hi All
Why aren't my links appearing... I'm trying Extended rather than Body this time...
Note: The first 3 links point to my personal page
The Perl wiki has been renamed from Perl.html - which was too generic - to Perl.Wiki.html:
Note: The next 2 links point to my new website which accompanies my upcoming Youtube channel
The Symbolic Language wiki is at:
The Personal Security wiki is at:
https://symboliciq.au/misc/Personal.Security.Wiki.html
Published by Ron Savage on Thursday 05 September 2024 11:58
Hi All
Note: The first 3 links point to my personal page
The Perl wiki has been renamed from Perl.html - which was too generic - to Perl.Wiki.html:
Note: The next 2 links point to my new website which accompanies my upcoming Youtube channel
The Symbolic Language wiki is at:
The Personal Security wiki is at:
https://symboliciq.au/misc/Personal.Security.Wiki.html
Published by /u/J_Stach on Thursday 05 September 2024 02:12
Since the language formerly known as Perl 6 has officially gone off on its own, has there been any effort to implement a true Perl 5 successor?
In my opinion, Raku tried to do too much with the syntax itself, scaled Perl's flexibility to infinity, and made itself unusable.
Perl 5 does not need much for it to become a "modern" language. Instead of extending the flexibility of the syntax, the direction for Perl 6 should emphasize standardizing core utilities to facilitate integration with modern workflows.
- Package/module management and import/export could benefit from streamlining
- Stronger LSP and debug/error tooling (Rust has spoiled me)
- "Prettier" auto-formatting for source code (For those 30yo system scripts, you know the ones I mean)
What would be on your wishlist?
Published by /u/scottchiefbaker on Thursday 05 September 2024 02:10
Check out the latest version of String::Util and let me know if you have any suggestions for other string-based functions I can add.
Published by Jonas Brømsø on Wednesday 04 September 2024 18:22
Today I was inspecting some Git version tags. The repositories in question are using semantic versioning and they did not come in the order I expected.
This is an example from an open source repository of mine:
╰─ git tag
0.1.0
0.2.0
0.3.0
0.3.1
0.4.0
0.5.0
0.6.0
0.6.1
v0.10.0
v0.11.0
v0.12.0
v0.13.0
v0.13.1
v0.7.0
v0.8.0
v0.9.0
I attempted to pipe it to sort, expecting it to fail.
╰─ git tag | sort
0.1.0
0.2.0
0.3.0
0.3.1
0.4.0
0.5.0
0.6.0
0.6.1
v0.10.0
v0.11.0
v0.12.0
v0.13.0
v0.13.1
v0.7.0
v0.8.0
v0.9.0
So I fired up my editor and wrote a small Perl script, since my experience with sorting in Perl is that it is pretty powerful.
#!perl
use warnings;
use strict;
use feature 'fc';    # needed for fc() in the sort block below
use Data::Dumper;
use Getopt::Long;
my $reverse;
GetOptions ('reverse' => \$reverse) # flag
or die("Error in command line arguments\n");
# read all from standard input
my @tags = <STDIN>;
# remove newlines
chomp @tags;
my @sorted_tags = sort {
    ($b =~ /v?(\d+)\.\d+\.\d+/)[0] <=> ($a =~ /v?(\d+)\.\d+\.\d+/)[0]
    ||
    ($b =~ /v?\d+\.(\d+)\.\d+/)[0] <=> ($a =~ /v?\d+\.(\d+)\.\d+/)[0]
    ||
    ($b =~ /v?\d+\.\d+\.(\d+)/)[0] <=> ($a =~ /v?\d+\.\d+\.(\d+)/)[0]
    ||
    fc($a) cmp fc($b)
} @tags;

# print sorted tags to standard output on separate lines
if ($reverse) {
    print join("\n", reverse @sorted_tags), "\n";
} else {
    print join("\n", @sorted_tags), "\n";
}
exit 0;
So now I can do:
╰─ git tag | sort_semantic_version_numbers.pl
v0.13.1
v0.13.0
v0.12.0
v0.11.0
v0.10.0
v0.9.0
v0.8.0
v0.7.0
0.6.1
0.6.0
0.5.0
0.4.0
0.3.1
0.3.0
0.2.0
0.1.0
And I even threw in a --reverse (-r is the short form):
╰─ git tag | sort_semantic_version_numbers.pl --reverse
0.1.0
0.2.0
0.3.0
0.3.1
0.4.0
0.5.0
0.6.0
0.6.1
v0.7.0
v0.8.0
v0.9.0
v0.10.0
v0.11.0
v0.12.0
v0.13.0
v0.13.1
There might be a better approach, but this was done pretty fast. Yes, the script possibly lacks support for all sorts of tag formats, but it works for semantic version numbers and that works for my use case.
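For instance, a shorter sort could lean on the core version module, which understands both "0.6.1" and "v0.13.1"; this is just a sketch (not from the original script, and untested against exotic tag formats):

use strict;
use warnings;
use version ();

my @tags = <STDIN>;
chomp @tags;

# Newest first, matching the script's default output order.
my @sorted_tags = sort { version->parse($b) <=> version->parse($a) } @tags;

print join("\n", @sorted_tags), "\n";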
Script available as a Gist.
Feedback, suggestions etc. most welcome.
Published by London Perl Workshop on Wednesday 04 September 2024 11:40
The schedule for this year's London Perl & Raku Workshop is now visible: https://act.yapc.eu/lpw2024/schedule. Please please please (please please) make a point to mark those talks that you plan to attend as this will allow us to tweak the schedule a bit if some talks are more heavily favourited than others.
The venue opens for attendees at 9am and we plan to start talks at 9:30am. Drinks will be available throughout the day, hence no need to have dedicated breaks there. Lunch will be at midday for an hour, there are plenty of options close to the venue to grab something to eat. If we receive a couple more sponsors we can have lunch at the venue, but currently that won't be possible.
There are a couple of spots left for talks, otherwise the schedule is full. Please don't let this put you off submitting a talk as it's possible some speakers may have to change their plan.
You'll note that there are three "Open Sessions" in The Study. This room can hold 15 people seated, with room for a presenter. We can host a breakout, BOF, hackspace, whatever you suggest. There will be no recording facility in this room, but if no suggestions are made we will likely use it to schedule some of the remaining talks.
The London Perl and Raku Workshop will take place on 26th Oct 2024. Thanks to this year's sponsors, without whom LPW would not happen:
Published by Zapwai on Monday 02 September 2024 16:38
Task One: No Connection
Given a list of routes, find the destination with no outgoing connection.
For example, given [B, C] [D, B] [C, A] we have routes
B -> C -> A
D -> B -> C -> A
C -> A
Output: A
Task Two: Making Change
Compute the number of distinct ways to make change for a given amount. (Using Pennies, Nickels, Dimes, Quarters, and Half-Dollars.)
There are two ways to make change for 9 cents (N + 4P or 9P).
There are three ways to make change for 10 cents (2N, N + 5P, or 10P).
Solution to task one
I make two lists, in-routes and out-routes, and then I see if there is an out that is not also an in.
use v5.38;
my @routes = (["B","C"], ["D","B"], ["C","A"]);
proc(@routes);
@routes = (["A","Z"]);
proc(@routes);
sub proc(@routes) {
    print "Input: ";
    print join(",", @{$routes[$_]})," " for (0 .. $#routes);
    print "\n";
    my @in;
    my @out;
    foreach (@routes) {
        push @in, ${$_}[0];
        push @out, ${$_}[1];
    }
    my $ans = "a";
    for my $needle (@out) {
        my $found = 0;
        for my $hay (@in) {
            if ($needle eq $hay) {
                $found = 1;
                last;
            }
        }
        if ($found == 0) {
            $ans = $needle;
        }
    }
    say "Output: $ans";
}
Solution to task two
This challenge was inspired by an analysis book of Pólya. (Problems and Theorems in Analysis) The first question is to count the number of ways to make change for a dollar, which is arduous. It's a trivial task for a computer, though.
use v5.38;
my $amt = $ARGV[0] // 100;
my $cnt = 0;
for my $h (0 .. $amt/50) {
    for my $q (0 .. $amt/25) {
        for my $d (0 .. $amt/10) {
            for my $n (0 .. $amt/5) {
                for my $p (0 .. $amt) {
                    if (tally($p, $n, $d, $q, $h) == $amt) {
                        $cnt++;
                    }
                }
            }
        }
    }
}
say "There are $cnt ways to make change for $amt cents";
sub tally($p, $n, $d, $q, $h) { $p + 5*$n + 10*$d + 25*$q + 50*$h }
I have used a for loop for each coin, in a stunning display of brute force.
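For comparison, the classic dynamic-programming formulation avoids the nested loops entirely (this is a sketch of my own, not part of the original solution): $ways[$n] counts the combinations that sum to $n cents, and each coin is folded in one at a time so that order never matters.

use v5.38;

my $amt  = $ARGV[0] // 100;
my @ways = (1, (0) x $amt);          # one way to make 0 cents

for my $coin (1, 5, 10, 25, 50) {
    for my $n ($coin .. $amt) {
        $ways[$n] += $ways[ $n - $coin ];
    }
}
say "There are $ways[$amt] ways to make change for $amt cents";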
Published on Sunday 01 September 2024 23:12
Each week Mohammad S. Anwar sends out The Weekly Challenge, a chance for all of us to come up with solutions to two weekly tasks. My solutions are written in Python first, and then converted to Perl. It's a great way for us all to practice some coding.
You are given an array of integers, @ints.
Write a script to find the lucky integer; if none is found, return -1. If there is more than one, then return the largest.
A lucky integer is an integer that has a frequency in the array equal to its value.
This task is relatively straightforward, so it doesn't require much explanation. I create a dict (hash in Perl) of the frequency of each integer, called freq. I then iterate through the keys of freq (highest value first). If the frequency of the integer is the same as the value, I return that number. If the iterator is exhausted, I return -1.
from collections import Counter

def lucky_integer(ints: list) -> int:
    freq = Counter(ints)
    for i in sorted(freq, reverse=True):
        if i == freq[i]:
            return i
    return -1
$ ./ch-1.py 2 2 3 4
2
$ ./ch-1.py 1 2 2 3 3 3
3
$ ./ch-1.py 1 1 1 3
-1
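The Perl version for this task isn't shown in the excerpt; a rough equivalent (my own sketch, not the author's code) might look like:

use strict;
use warnings;
use feature 'say';

sub lucky_integer {
    my @ints = @_;
    my %freq;
    $freq{$_}++ for @ints;                    # frequency of each integer

    # Walk the values from largest to smallest so the largest match wins.
    for my $i ( sort { $b <=> $a } keys %freq ) {
        return $i if $i == $freq{$i};
    }
    return -1;
}

say lucky_integer(2, 2, 3, 4);         # 2
say lucky_integer(1, 2, 2, 3, 3, 3);   # 3
say lucky_integer(1, 1, 1, 3);         # -1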
You are given two lists of integers, @list1 and @list2. The elements in @list2 are distinct and also appear in @list1.
Write a script to sort the elements in @list1 such that the relative order of items in @list1 is the same as in @list2. Elements that are missing in @list2 should be placed at the end of @list1 in ascending order.
While Python does have the index function for lists, it will raise a ValueError exception if the item is not in the list. Therefore I have created a function called find_index that will return the position of val in lst if found, or the length of the list if not.

def find_index(lst: list, val: int):
    return lst.index(val) if val in lst else len(lst)
The Perl solution also has the same function, and uses List::MoreUtils' first_index to find the position.
sub find_index( $lst, $val ) {
    my $idx = first_index { $_ == $val } @$lst;
    return $idx == -1 ? scalar(@$lst) : $idx;
}
I use the sorted function to sort the first list. Each item is sorted by a tuple of the index position and the integer. This ensures that the first part of the resulting list are sorted in accordance with the position in the second list, while the remaining values are sorted numerically.
def relative_sort(list1: list, list2: list) -> list:
    return sorted(list1, key=lambda i: (find_index(list2, i), i))
The Perl solution uses the same logic, but completely different syntax.
sub main ($lists) {
    my $list1 = $lists->[0];
    my $list2 = $lists->[1];
    my @solution = sort {
        find_index( $list2, $a ) <=> find_index( $list2, $b )
            or $a <=> $b
    } @$list1;
    say '(', join( ', ', @solution ), ')';
}
For input via the command line, I take a JSON string that should be a list of lists of integers.
$ ./ch-2.py "[[2, 3, 9, 3, 1, 4, 6, 7, 2, 8, 5],[2, 1, 4, 3, 5, 6]]"
(2, 2, 1, 4, 3, 3, 5, 6, 7, 8, 9)
$ ./ch-2.py "[[3, 3, 4, 6, 2, 4, 2, 1, 3],[1, 3, 2]]"
(1, 3, 3, 3, 2, 2, 4, 4, 6)
$ ./ch-2.py "[[3, 0, 5, 0, 2, 1, 4, 1, 1],[1, 0, 3, 2]]"
(1, 1, 1, 0, 0, 3, 2, 4, 5)
Published by Yuki Kimoto - SPVM Author on Thursday 29 August 2024 23:30
GitPrep has some new features and bug fixes.
GitPrep is portable Github-like system, which can be installed and hosted on your own Unix/Linux server.
We need programmers who like to play on the bleading edge. By trying out new features, they are able to report on problems that they find – and, in doing so, improve the experience for the many people who follow them.
I’m not usually much of a bleading edge programmer. But I’ve been enjoying Perl’s new object-oriented programming features, so I’ve been using them a lot. And, in the process, I’ve found a few issues that I’ve reported (or, in a couple of cases, will report) to the relevant people.
Often, the problems that the bleading edgers come across are problems with the feature itself. That's not the case with me. I've been finding problems with how Perl's infrastructure deals with the new feature.
And please note, it would be easy to interpret this blog post as me complaining about these tools being "broken" because they aren't keeping up with the development of the language. That's not the case at all. I realise that these infrastructure projects are all run by volunteers and I'm grateful for all that these people do – working for free, keeping these systems (systems that we often tend to take for granted) running. In cases where I think I would be at all useful I have, of course, offered my help in implementing these fixes.
So what are the problems?
The first CPAN module I wrote that used the new class syntax was Amazon::Sites. As soon as I uploaded it, I knew something was awry. I got an email from the PAUSE indexer saying that it couldn’t understand my distribution tarball. I wasn’t sure what the problem was, but within an hour I got a follow-up email from Neil Bowers pointing out that PAUSE couldn’t find a package statement in my module. That’s not surprising, as the new class syntax uses class as a replacement for package. And PAUSE hadn’t been updated to recognise that syntax. Before emailing me, Neil had taken the time to raise an issue in the PAUSE repo and he suggested that the upcoming Perl Toolchain Summit would be a good opportunity to fix the problem. He also suggested that adding a (strictly speaking, spurious) package line to the code would be a good workaround. I did that and uploaded a new version – which worked fine. And PAUSE was updated at the PTS. In the intervening time, I released a couple more modules that used the new syntax – so they also have the extra package line.
The next problem is one that probably only affects me. Back in January, I wrote about some reusable GitHub Actions that I had developed for Perl code. Although it’s not mentioned in the blog post, I had added an action that uses Perl::Metrics::Simple to report on the complexity of my Perl code. I noticed that it was showing strange results for my modules that used the new syntax. Specifically, it wasn’t correctly reporting the complexity of code in methods. The reason is obvious, when you think about it. It’s just that Perl::Metrics::Simple doesn’t recognise the method keyword that is used in place of sub in the new OO syntax. I raised an issue in the repo for the module – optimistically promising a pull request in a few days. That didn’t happen as the problem is actually in PPI – which Perl::Metrics::Simple uses to parse the code. And there’s already a ticket to add all of the new keywords to PPI. Sadly, I don’t think my Perl is up to taking on this fix for the PPI team.
Given that the PAUSE issue I mentioned before has now been fixed, when I came to release App::LastStats recently I didn’t add the extra package line that had become my habit. It turns out that was a mistake. While my new module sailed past PAUSE, it seems that the lack of a package definition confuses MetaCPAN too. While my new module was being indexed by PAUSE and ending up in the 02Packages file correctly (so it was installable using tools like cpanm), it wasn’t appearing in MetaCPAN search or on my author page. Chatting with Olaf Alders on the #metacpan IRC channel, he spotted that the status of the release wasn’t being set to “latest” by the MetaCPAN ingestion code. Adding the same package line to the code soon fixed that problem too. Hopefully I’ll be able to work out where to fix the MetaCPAN code so it recognises class as a synonym for package. But, until that happens, anyone uploading a module to CPAN that uses the new syntax (is that really only me?) will need to add the package line.
There’s one more class of problem that I’m still trying to work out. And that’s down to my use of Feature::Compat::Class to make these modules compatible with versions of Perl that don’t support the new syntax. Part of the problem here is that we now have two versions of Perl that support the new syntax – 5.38 and 5.40. But they support slightly different versions of the syntax – that’s to be expected, of course; it’s how the new feature is being written.
The way that Feature::Compat::Class works is that it checks the version of Perl and if it is running on a version less than 5.38, then it loads another module called Object::Pad – which is a test bed for the new class syntax. Object::Pad supports more of the planned new syntax than has actually been released yet. So when Feature::Compat::Class loads Object::Pad, it uses a flag which tells Object::Pad to only allow the syntax that has been released in a Perl release. But which syntax? From which release? I guess it depends on which version of Object::Pad I’m using. Presumably, a version that was released after Perl 5.40 will support all of 5.40’s new syntax. And if I write code that uses the newest syntax, what happens when someone tries to run it on Perl 5.38? Currently, I’m only using 5.38’s syntax, so I’m not sure yet. And this is a problem that will get worse as future versions of Perl add more features to the class syntax.
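For anyone who hasn't seen the syntax yet, here is a minimal illustration (mine, not code from the post) of what Feature::Compat::Class gives you; on Perl 5.38+ it simply enables the built-in class feature, while on older Perls it loads Object::Pad behind the scenes:

use strict;
use warnings;
use feature 'say';
use Feature::Compat::Class;

class Counter {
    field $count = 0;

    method inc   { $count++ }
    method count { $count }
}

my $c = Counter->new;
$c->inc for 1 .. 3;
say $c->count;    # prints 3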
I don’t think my new modules have many users – they’re very niche, so this is probably only a problem that I need to solve for myself. And I’m solving it by running the code in Docker containers that have the latest version of Perl installed. But it’s something I’ll need to think about more deeply if any of these modules become more widely used. Maybe I just encourage people to use them via the Docker images.
Oh, one final thing. The new class syntax is experimental. Some people would, I suppose, say that’s a good reason not to use it in a CPAN module – but, hey, bleading edge. But that means it produces loads of “experimental” warnings if you don’t explicitly add code to suppress them. That code is no warnings 'experimental::class'. But that doesn’t compile on a Perl earlier than 5.38 (because it’s not a recognised warning category on a version of Perl where the feature is unimplemented). So I need to look at using if to only turn off those warnings on the correct versions of Perl.
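The usual idiom for that (again, an illustration rather than code from the post) is the if pragma, which only disables the warning category on Perls new enough to know about it:

use strict;
use warnings;

# On Perl 5.38+ this silences the experimental::class warnings;
# on older Perls the condition is false and the line is a no-op.
no if $] >= 5.038, warnings => 'experimental::class';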
I don’t want to put anyone off using the new class syntax. I think it’s a great new tool and I’m looking forward to seeing it become more powerful as each new version of Perl is released. I just want people to realise that you will hit certain speedbumps by being an early adopter of features like this.
Have you tried the new syntax? What do you think of it?
The post On the [b]leading edge first appeared on Perl Hacks.
Welcome to “What’s new on CPAN”, a curated look at last month’s new CPAN uploads for your reading and programming pleasure. Enjoy!
Published by Stig Palmquist on Monday 26 August 2024 00:00
App::cpanminus (cpanm) is a popular and lightweight alternative to the official CPAN client for downloading and installing Perl modules from CPAN.
In its default configuration cpanminus uses insecure HTTP to download and install code from CPAN.
The lack of a secure HTTPS default results in a CWE-494: Download of Code Without Integrity Check weakness, enabling code execution for network attackers.
There is currently no patch available upstream. Users can mitigate with one of the following options.
The easiest way is to configure cpanminus to use an HTTPS mirror using the --from command-line argument. This can be set as a CLI option, replacing DISTNAME in the command below with the name of the distribution you want to install:
$ cpanm --from https://www.cpan.org DISTNAME
Alternatively, you can set the --from option via the PERL_CPANM_OPT environment variable:
$ export PERL_CPANM_OPT="--from https://www.cpan.org"
And use cpanm as you normally would.
Please note that setting a --from option will disable support for downloading old releases from BackPan and development (TRIAL) releases.
Another option is to patch the http:// endpoints in the executable. This retains support for BackPan and TRIAL releases.
App::cpanminus is distributed as a fatpacked executable with dependencies minified and inlined, so a .patch file is not convenient. To patch the executable, you can run the following oneliner:
$ perl -pi -E 's{http://(www\.cpan\.org|backpan\.perl\.org|cpan\.metacpan\.org|fastapi\.metacpan\.org|cpanmetadb\.plackperl\.org)}{https://$1}g' /path/to/cpanm
- cpan 2.35 or later will use HTTPS with certificate verification if TLS support is available
- cpm uses HTTPS sources by default
- --from cpanm option explanation
Published by Unknown on Saturday 24 August 2024 12:12
This is the weekly favourites list of CPAN distributions. Votes count: 104
Week's winner (+4): Data::Checks
Build date: 2024/08/24 10:11:53 GMT
Clicked for first time:
Increasing its reputation:
These are the five most rated questions at Stack Overflow last week.
Between brackets: [question score / answers count]
Build date: 2024-08-24 10:08:38 GMT
Published by alh on Monday 19 August 2024 10:56
Dave writes:
This is my monthly report on work done during July 2024 covered by my TPF perl core maintenance grant.
I spent most of last month continuing to work on understanding XS and improving its documentation, as a precursor to adding reference-counted stack (PERL_RC_STACK) abilities to XS.
I finished working through the ExtUtils::ParseXS module's code line by line, trying to understand it.
SUMMARY:
Total:
Published by alh on Monday 19 August 2024 10:53
Paul writes:
July ended up being a rush to finish some commercial work I had going elsewhere, but I did manage a few core commits. Nothing directly user-visible but they add some safety for -DDEBUGGING builds that often helps track down mismatched pointers when making API calls and can help find a lot of subtle bugs.
Hours:
4 = Pointer cast safety improvements
Total: 4 hours
Published on Thursday 15 August 2024 00:00
Sometimes the unexpected happens and must be shared with the world … this one is such a case.
Recently, I've started experimenting with Perl for workflow management and high-level supervision of low-level code for data science applications. A role I'd reserve for Perl in this context is that of lifecycle management of memory buffers, using the Perl application to "allocate" memory buffers and shuttle them between computing components written in C, Assembly, Fortran and the best hidden gem of the Perl world, the Perl Data Language. There are at least three ways that Perl can be used to allocate memory buffers.
The following Perl code implements these three methods (pack, string and malloc in C) and allows one to experiment with different buffer sizes, initial values and the precision of the results (by averaging over many iterations of the allocation routines).
#!/home/chrisarg/perl5/perlbrew/perls/current/bin/perl
use v5.38;
use Inline (
C => 'DATA',
cc => 'g++',
ld => 'g++',
inc => q{}, # replace q{} with anything else you need
ccflagsex => q{}, # replace q{} with anything else you need
lddlflags => join(
q{ },
$Config::Config{lddlflags},
q{ }, # replace q{ } with anything else you need
),
libs => join(
q{ },
$Config::Config{libs},
q{ }, # replace q{ } with anything else you need
),
myextlib => ''
);
use Benchmark qw(cmpthese);
use Getopt::Long;
my ($buffer_size, $init_value, $iterations);
GetOptions(
'buffer_size=i' => \$buffer_size,
'init_value=s' => \$init_value,
'iterations=i' => \$iterations,
) or die "Usage: $0 --buffer_size <size> --init_value <value> --iterations <count>\n";
my $init_value_byte = ord($init_value);
my %code_snippets = (
'string' => sub {
$init_value x ( $buffer_size - 1 );
},
'pack' => sub {
pack "C*", ( ($init_value_byte) x $buffer_size );
},
'C' => sub {
allocate_and_initialize_array( $buffer_size, $init_value_byte );
},
);
cmpthese( $iterations, \%code_snippets );
__DATA__
__C__
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
SV* allocate_and_initialize_array(size_t length, short initial_value) {
// Allocate memory for the array
char* array = (char*)malloc(length * sizeof(char));
char initial_value_byte = (char)initial_value;
if (array == NULL) {
fprintf(stderr, "Memory allocation failed\n");
exit(1);
}
// Initialize each element with the initial_value
memset(array, initial_value_byte, length);
return newSVuv(PTR2UV(array));
}
Calling the script as:
./time_mem_alloc.pl -buffer_size=1000000 -init_value=A -iterations=20000
yielded the surprising result:
Rate pack C string
pack 322/s -- -92% -99%
C 4008/s 1144% -- -92%
string 50000/s 15417% 1147% --
with the Perl string method outperforming C more than ten-fold.
Not believing the massive performance gain, and thinking I was dealing with a bug in Inline::C, I recoded the allocation in pure C (adding the usual embellishments for command-line processing, timing, etc.):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
char* allocate_and_initialize_array(size_t length, char initial_value) {
// Allocate memory for the array
char* array = (char*)malloc(length * sizeof(char));
if (array == NULL) {
fprintf(stderr, "Memory allocation failed\n");
exit(1);
}
// Initialize each element with the initial_value
memset(array, initial_value, length);
return array;
}
double time_allocation_and_initialization(size_t length, char initial_value) {
clock_t start, end;
double cpu_time_used;
start = clock();
char* array = allocate_and_initialize_array(length, initial_value);
end = clock();
cpu_time_used = ((double) (end - start)) / CLOCKS_PER_SEC;
/* This rudimentary loop prevents the compiler from optimizing out the
* allocation/initialization with the de-allocation
*/
for(size_t i = 1; i < length; i++) {
array[i]++;
if(i % 100000 == 0) {
printf("array[%zu] = %c\n", i, array[i]);
}
}
free(array); // Free the allocated memory
return cpu_time_used;
}
int main(int argc, char *argv[]) {
if (argc != 3) {
fprintf(stderr, "Usage: %s <length> <initial_value>\n", argv[0]);
return 1;
}
size_t length = strtoull(argv[1], NULL, 10);
char initial_value = argv[2][0];
double time_taken = time_allocation_and_initialization(length, initial_value);
printf("Time taken to allocate and initialize array: %f seconds\n", time_taken);
printf("Initializes per second: %f\n", 1/time_taken);
return 0;
}
/*
Compilation command:
gcc -O2 -o time_array_allocation time_array_allocation.c -std=c99
Example invocation:
./time_array_allocation 10000000 A
*/
Invoking the C program as the comment in the C code says to do, I obtained the following result:
Time taken to allocate and initialize array: 0.000203 seconds
Initializes per second: 4926.108374
which executes practically in the same order of magnitude as the equivalent allocation in the Inline::C malloc approach. After researching the issue further, I discovered that the malloc I grew to admire trades speed in memory allocation for generality, and there is a plethora of faster memory allocators out there. It seems that Perl is using one such allocator for its strings, and kicks C's butt in this task of allocating buffers.
This article was originally published at Perl Hacks.
Back in May, I wrote a blog post about how I had moved a number of Dancer2 applications to a new server and had, in the process, created a standardised procedure for deploying Dancer2 apps. It’s been about six weeks since I did that and I thought it would be useful to give a little update on how it all went and talk about a few little changes I’ve made.
I mentioned that I was moving the apps to a new server. What I didn’t say was that I was convinced my old server was overpowered (and overpriced!) for what I needed, so the new server has less RAM and, I think, a slower CPU than the old one. And that turned out to be a bit of a problem. It turned out there was a time early each morning when there were too many requests coming into the server and it ran out of memory. I was waking up most days to a dead server. My previous work meant that fixing it wasn’t hard, but it really wasn’t something that I wanted to do most mornings.
So I wanted to look into reducing the amount of memory used by the apps. And that turned out to be a two-stage approach.
You might recall that the apps were all controlled using a standardised driver program called “app_service”. It looked like this:
#!/usr/bin/env perl
use warnings;
use strict;
use Daemon::Control;
use ENV::Util -load_dotenv;
use Cwd qw(abs_path);
use File::Basename;
Daemon::Control->new({
    name         => ucfirst lc $ENV{KLORTHO_APP_NAME},
    lsb_start    => '$syslog $remote_fs',
    lsb_stop     => '$syslog',
    lsb_sdesc    => 'Advice from Klortho',
    lsb_desc     => 'Klortho knows programming. Listen to Klortho',
    path         => abs_path($0),
    program      => '/usr/bin/starman',
    program_args => [ '--workers', 10, '-l', ":$ENV{KLORTHO_APP_PORT}",
                      dirname(abs_path($0)) . '/app.psgi' ],
    user         => $ENV{KLORTHO_OWNER},
    group        => $ENV{KLORTHO_GROUP},
    pid_file     => "/var/run/$ENV{KLORTHO_APP_NAME}.pid",
    stderr_file  => "$ENV{KLORTHO_LOG_DIR}/error.log",
    stdout_file  => "$ENV{KLORTHO_LOG_DIR}/output.log",
    fork         => 2,
})->run;
We’re deferring most of the clever stuff to Daemon::Control. But we’re building the parameters to pass to the constructor. And two of the parameters (“program” and “program_args”) control how the service is run. You’ll see I’m using Starman. The first fix was obvious when you look at my code. Starman is a pre-forking server and we always start with 10 copies of the app. Now, I’m very proud of some of my apps, but I think it’s optimistic to think my Klortho server will ever need to respond to 10 simultaneous requests. Honestly, I’m pleasantly surprised if it gets 10 requests in a month. So the first change was to make it easy to change the number of workers.
In the previous article, I talked about using ENV::Util to load environment variables from a “.env” file. And we can continue to use that approach here. I rewrote the “program_args” code to be this:
program_args => [ '--workers', ($ENV{KLORTHO_APP_WORKERS} // 10),
'-l', ":$ENV{KLORTHO_APP_PORT}",
dirname(abs_path($0)) . '/app.psgi' ],
I made similar changes to all the “app_service” files, added appropriate environment variables to all the “.env” files and restarted all the apps. Immediately, I could see an improvement as I was now running maybe a third of the app processes on the server. But I felt I could do better. So I had a close look at the Starman documentation to see if there was anything else I could tweak. That’s when I found the “–preload-app” command-line option.
Starman works by loading a main driver process which then fires up as many worker processes as you ask it for. Without the “--preload-app” option, each of those worker processes loads its own copy of the application. With “--preload-app”, the master process loads the application before forking, so the workers share that copy of the memory (copy-on-write) and only take their own copy of a page when they write to it. This can be a big memory saving – although it’s important to note that the documentation warns:
Enabling this option can cause bad things happen when resources like sockets or database connections are opened at load time by the master process and shared by multiple children.
I’m pretty sure that most of my apps are not in any danger here, but I’m keeping a close eye on the situation and if I see any problems, it’s easy enough to turn preloading off again.
When adding the preloading option to “app_service”, I realised I should probably completely rewrite the code that builds the program arguments. It now looks like this:
my @program_args;

if ($ENV{KLORTHO_WORKER_COUNT}) {
  push @program_args, '--workers', $ENV{KLORTHO_WORKER_COUNT};
}
if ($ENV{KLORTHO_APP_PORT}) {
  push @program_args, '-l', ":$ENV{KLORTHO_APP_PORT}";
}
if ($ENV{KLORTHO_APP_PRELOAD}) {
  push @program_args, '--preload-app';
}

push @program_args, dirname(abs_path($0)) . '/bin/app.psgi';
The observant among you will notice that I’ve subtly changed the behaviour of the worker count environment variable. Previously, a missing variable would fall back to a default of 10 workers. Now, it simply omits the argument, so Starman’s own default of five workers is used.
I’ve made similar changes in all my “app_service” programs and set environment variables to turn preloading on. And now my apps use substantially less memory. The server hasn’t died since I implemented this stuff at the start of this week. So that makes me very happy.
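For illustration, the relevant part of a “.env” file might now contain entries like these (the values here are made-up examples, not my real configuration):

KLORTHO_APP_PORT=8080
KLORTHO_WORKER_COUNT=2
KLORTHO_APP_PRELOAD=1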
But programming is the pursuit of minimisation. I’ve already seen two places where I can make these programs smaller and simpler.
That last code snippet looks too repetitive. It should be a loop iterating over a hash. The keys are the names of the environment variables and the values are references to arrays containing the values that are added to the program arguments if that environment variable is set.
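A minimal sketch of that idea might look like this (untested, and covering just the three options above; I’ve reached for code refs rather than the plain array refs described, so that unset variables don’t trigger warnings when the table is built):

my %args_for = (
  KLORTHO_WORKER_COUNT => sub { '--workers', $_[0] },
  KLORTHO_APP_PORT     => sub { '-l', ":$_[0]" },
  KLORTHO_APP_PRELOAD  => sub { '--preload-app' },
);

my @program_args;
for my $var (sort keys %args_for) {
  # add this option's arguments only if its environment variable is set
  push @program_args, $args_for{$var}->($ENV{$var}) if $ENV{$var};
}
push @program_args, dirname(abs_path($0)) . '/bin/app.psgi';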
I now have five or six “app_service” programs that look very similar. I must be able to turn them into one standard program. Do those environment variables really need to include the application name?
The Klortho service driver program is on GitHub. Can you suggest any more improvements?
Published by alh on Wednesday 14 August 2024 08:07
Tony writes:
```
[Hours]  [Activity]

2024/07/01 Monday
 0.42 #22333 review and approve
 0.08 #22349 review update and approve
 0.08 #22323 review update and approve
 0.08 #22341 review update and approve
 0.10 #22324 review update and approve
 0.07 #22344 review update and approve
 0.12 #22355 review and approve
 0.37 #22356 review, research and approve
 0.13 #22357 review, research, comment and approve
 0.10 #22358 review and approve
 0.08 #22359 review and approve
 1.42 #22362 review, testing, debugging, research and comment
 0.83 smartmatch removal: more testing, push for CI/smoke
5.15
2024/07/02 Tuesday
 0.23 github notifications
 0.22 smartmatch removal: check CI/smoke results
 0.18 #22362 review updates and approve
 0.08 #22363 review and approve
 0.37 #22369 review and approve
 0.38 #22364 review, testing and approve
 0.28 #22366 review and comment
 1.32 #22255 work up a fix, a test, testing, push for CI
3.69
2024/07/03 Wednesday
 0.15 #22255 re-check, minor adjustments, testing, make PR 22371
 2.63 #22329 re-work gv_fetchmeth_() docs as suggested, look into GV_NOUNIVERSAL history
 1.37 #22169 fix mis-undefinition of "new", testing, polish, push for CI
 0.12 #22255 bump $XS::APItest::VERSION
4.44
2024/07/04 Thursday
 0.08 github notifications
 0.57 #22255 apply to blead, perldelta, porting fix
 0.32 #22076 apply to blead, don't think it needs perldelta
 0.28 #22366 review and approve
 0.32 #22367 review, research history of the code, approve
 0.20 #22368 review and approve
 0.15 #22373 review discussion, comment
4.59
2024/07/08 Monday
 0.98 #22169 remove DOES TODO, rearrange the code, testing
 0.87 #799 fixes for issues from mauke, other fixes
 0.52 #799 more polish, testing, force push and comment
 0.25 #22329 follow-up comment
 0.18 #1200 review and mark closable
 0.08 #1420 track back to PR #19125
 0.67 #1496 review discussion, research and close
 0.22 #1500 review discussion
 0.12 #1509 review discussion
 0.07 #1674 review discussion
4.09
2024/07/09 Tuesday
 1.42 #799 re-work, testing, comment
 0.87 #22380 review test failure, review UUID::FFI code, comment
 0.45 #22380 research, comment
 0.40 #22329 review update and approve
 2.02 re-work dist modules CI, make test-dist-modules safer by
5.16
2024/07/10 Wednesday
 0.47 re-work dist modules CI: look into test failures, re-work the CI labels, push for another CI run
 0.27 #799 minor cleanup, re-push
 2.57 re-work dist modules CI: fixes, fixes
 1.52 re-work dist modules CI: fixes, Storable, try to get some
4.83
2024/07/11 Thursday
 0.45 #22331 build to review docs, and approve
 0.77 re-work dist modules CI: remove windows support for now, commit cleanup/rebase/squash, polish, testing, push for CI
 1.13 Win32.pm PR #39 import, testing, push for CI
 0.08 re-work dist modules CI: check CI results and make PR #22392
 0.08 Win32.pm PR #39 import, check CI results and make PR #22393
 0.10 #22384 review and comment
 0.18 #22242 review, review discussion and approve
 0.35 #22243 review, approve and comment
 0.15 #22387 review, apply to blead, perldelta
 0.10 #22388 review and apply to blead (no perldelta needed)
 0.12 #22391 review and comment
 0.53 #22380 consider change, comments, try the change and tests fail, review perlpacktut, I don't think making it always SvPV_force() is reasonable.
5.04
2024/07/12 Friday
0.23
2024/07/15 Monday
 0.57 #22385 research, follow-up comment
 1.65 #22380 research, comment, work on a fix, push for CI
 0.20 #22380 fix an issue found by CI, force push
 1.55 #22393 work on CI update to test on 32-bit win32, testing
 0.33 #22405 review, try a test, comments
 0.92 #22393 local testing, research: can we do github CI
5.22
2024/07/16 Tuesday
 0.92 #22392 updates, add some documentation, testing, rebase, force push
 0.37 #799 look at b postpone subname
 2.43 #799 more look at b postpone subname (which appears to have been broken for a long, long time), fix it, testing,
3.72
2024/07/17 Wednesday
 0.18 #22392 review discussion, apply to blead, doesn't seem to need a perldelta (has no effect on end users)
 0.18 #22403 review and approve
 0.18 #22273 review and approve
 0.37 #22393 apply to blead and perldelta
 1.30 #22232 work on fixes, testing
 0.15 #22232 push fixes, comments
 0.30 #22386 review, research and approve with comment
4.06
2024/07/18 Thursday
 0.08 github notifications
 0.08 ppc #50 review
 0.58 #22373 review, research, comment
 0.17 #22411 review, comment on CI failure
 1.18 #22370 more split up commit
3.11
2024/07/22 Monday
 0.45 #22415 review discussion, research and comment
 1.90 #22370 fix issues from PR reviews
 0.42 #22415 research and follow-up comment
 1.53 #22377 partly review up to 885566
5.22
2024/07/23 Tuesday
 0.10 #22394 check CI results, check messages, minor test nit fix, testing, push and make PR 22420
 0.17 #799 apply to blead, perldelta
 0.23 #22419 review and approve
 0.55 #22418 review, research and briefly comment
 0.17 #22421 review, comment on a typo
 0.38 ppc #51 review
 0.48 #21877 look into my remaining concern on this (possible SV leak?)
 0.12 #21877 too much noise with DEBUG_LEAKING_SCALARS :/
 0.20 #22394 fix per review, testing, force push
 0.18 #22370 check rebase
 0.25 #22232 apply to blead, perldelta
4.30
2024/07/24 Wednesday
 0.28 github notifications
 0.75 #22421 review and approve
 0.08 #22303 comment
 1.53 #22423 debugging, briefly comment
 0.53 #22422 review and approve
 0.55 #22414 review, review discussion and comment
 0.70 #22400 review, testing, comment
4.97
2024/07/25 Thursday
 2.60 #22377 more review and finally approve
5.13
2024/07/29 Monday
 0.25 #22427 review and approve
 0.30 #22428 review, comment and approve
 0.17 #22432 review and approve
 0.13 #22433 review, review discussion and close
 0.23 #22434 review and approve
 0.15 #22435 review and approve
 0.08 #22437 review and approve
 0.28 #22420 apply to blead, perldelta
 0.55 #22303 fix some issues
 0.93 #22303 more fixing, testing
4.10
2024/07/30 Tuesday
 0.70 #22415 review discussion and comment
 0.70 #22442 testing and comment
 0.35 #22443 review and comment
 0.37 #22425 testing, research and comment
 0.43 #1420 more fixing, testing, debugging
4.85
2024/07/31 Wednesday
 0.25 #22444 review, review discussion and comments
 0.35 #22443 review updates and approve
 0.33 #22442 review discussion, testing
 1.35 #22450 review and comments
 0.45 #22445 review, comments
4.36
Which I calculate is 86.26 hours.
Approximately 79 tickets were reviewed or worked on, and 9 patches were applied.
```
Published on Monday 12 August 2024 14:51
The examples used here are from the weekly challenge problem statement and demonstrate the working solution.
You are given a positive integer, $int, having 3 or more digits. Write a script to return the Good Integer in the given integer or -1 if none found.
The complete solution is contained in one file that has a simple structure.
For this problem we do not need to include very much. We’re just specifying to use the current version of Perl, for all the latest features in the language. This fragment is also used in Part 2.
A good integer is exactly three consecutive matching digits.
sub good_integer{
    my($x) = @_;

    return qq/$1$2/
      if $x =~ m/([0-9])(\1{2,})/ && length qq/$1$2/ == 3;
    return -1;
}
◇
Fragment referenced in 1.
Now all we need are a few lines of code for running some tests.
MAIN:{
    say good_integer q/12344456/;
    say good_integer q/1233334/;
    say good_integer q/10020003/;
}
◇
Fragment referenced in 1.
$ perl perl/ch-1.pl
444
-1
000
You are given an alphabetic string, $str, as typed by user. Write a script to find the number of times user had to change the key to type the given string. Changing key is defined as using a key different from the last used key. The shift and caps lock keys won’t be counted.
sub count_key_changes{
    my($s) = @_;

    my $count = 0;
    my @s = split //, lc $s;
    {
        my $x = shift @s;
        my $y = shift @s;
        $count++ if $x && $y && $x ne $y;
        unshift @s, $y if $y;
        redo if @s;
    }
    return $count;
}
◇
Fragment referenced in 5.
Finally, here’s a few tests to confirm everything is working right.
MAIN:{
    say count_key_changes(q/pPeERrLl/);
    say count_key_changes(q/rRr/);
    say count_key_changes(q/GoO/);
}
◇
Fragment referenced in 5.
$ perl ch-2.pl
3
0
1
Today we kick off a new series profiling the organizations that financially support MetaCPAN. Our goal is to showcase the diversity of teams supporting MetaCPAN and learn how they are using Perl.
We start things off with a look at OpenCage, which operates a widely-used geocoding API.
We’re a small company with a big goal - geocoding the world with open data.
Geocoding is the process of converting between geographic coordinates (latitude, longitude) and location information (an address, but also other things). Geocoding is complex for two reasons: first of all, the world is continually changing. Secondly, the way people have subdivided and labeled the world and think of location is highly variable, complicated, and (sadly) often illogical.
There are many different twists on geocoding in terms of how people search for locations, but also in the types of information customers want in response to their queries; many different use cases. We offer our service via an API, and one of the biggest challenges is trying to keep it as simple as possible while also addressing (pun intended!) all the different use cases. Some basic examples: one customer wants hyper-precise, current address information, while the next customer “just” wants to map coordinates to a time zone.
We spend a lot of time listening to customers and thinking about how to make the service easier to use. We have tutorials and libraries for about 30 different programming languages, including Perl via Geo::Coder::OpenCage, of course.
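For example, a minimal Geo::Coder::OpenCage session looks roughly like this (the API key is a placeholder, and the exact fields you pull out of the response will depend on your use case; treat it as a sketch rather than copy-and-paste code):

use strict;
use warnings;
use feature 'say';
use Geo::Coder::OpenCage;

# placeholder key; sign up with OpenCage to get a real one
my $geocoder = Geo::Coder::OpenCage->new(api_key => 'YOUR_API_KEY');

# forward geocoding: free-text location to coordinates (and much more)
my $result = $geocoder->geocode(location => 'Trafalgar Square, London');
say join ', ', $result->{results}[0]{geometry}{lat},
               $result->{results}[0]{geometry}{lng};

# reverse geocoding: example coordinates back to a formatted address
my $reverse = $geocoder->reverse_geocode(lat => 51.508, lng => -0.128);
say $reverse->{results}[0]{formatted};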
There are different commercial geocoding providers, but probably the main differentiator of our service (besides the stunning good looks of the founders, obviously) is that we rely on open data via sources like OpenStreetMap, a community we’re very active in. Just like open source software, open data offers all sorts of advantages. You can store the data as long as you need, for example, which many of the big commercial providers don’t allow. Another major advantage is the cost. Because the data is free, our service is highly affordable, especially at higher volumes.
We use a whole bunch of different technologies, but the core of our service is Perl.
Three main reasons:
Essentially what we are doing is manipulating and cleaning textual data, and Perl is absolutely excellent at that.
We provide a bunch of different types of data in the API response - referred to in our API as “annotations”. Many of these are essentially just different encoding schemes for latitude and longitude, and we’re able to rely on the wealth of great modules from CPAN.
Perl is rock solid and reliable, especially over time. We also offer a location auto-suggest widget written in Javascript, and, to be honest, the sheer pace at which the js universe evolves means maintenance becomes a real nightmare, especially for a small team. Perl moves forward, but predictably and without breaking the past.
Well, quite simply, as a way to give back to the technology and community we rely upon. We’re a small company, so our means are limited, but it’s very important for us to contribute back to the projects we depend on - be it financially, via software, or in other ways. Hopefully our efforts can be an example for others.
Many thanks to everyone in the Perl community - in whatever role - who contributes to keep the project thriving. Keep up the good work! A special salute to the other companies who are contributing financially. Hopefully more will join us.
If you have any geocoding needs, please check out our geocoding API.
Anyone who would like to learn more about what we’re up to can check out our blog and/or follow us on mastodon. We often post there about interesting bits of #geoweirdness.
Geospatial is an endlessly fascinating technical topic. If you’re interested, we organize Geomob, a series of geo meetups (very similar in spirit to Perl Mongers - interesting talks followed by socializing over drinks) in various European cities. There is also a weekly Geomob podcast.
Published on Saturday 10 August 2024 16:52
The examples used here are from the weekly challenge problem statement and demonstrate the working solution.
You are given coordinates, a string that represents the coordinates of a square of the chessboard. Write a script to return true if the square is light, and false if the square is dark.
The complete solution is contained in one file that has a simple structure.
For this problem we do not need to include very much. We’re just specifying to use the current version of Perl, for all the latest features in the language. This fragment is also used in Part 2.
The color of a given square is determined by calculating its color number. If the color number is positive the square is dark. If the color number is negative then it is light.
We determine the color number in the following code fragment. Here we compute $n as -1 raised to the power of the letter’s index. In this way we get alternating -1/1 starting with ’a’. We do the same with the second part of the coordinate to get an alternating -1/1 for the chessboard row. These are multiplied together to get the color number.
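The check_color fragment can be sketched directly from that description (this is a reconstruction that matches the tests below, not necessarily the original fragment):

sub check_color{
    my($square) = @_;
    my($column, $row) = split //, $square;

    # alternating -1/1 for the column letter, starting with -1 at 'a'
    my $n = (-1) ** (ord($column) - ord(q/a/) + 1);
    # alternating -1/1 for the row
    $n *= (-1) ** $row;

    # a positive color number means dark, a negative one means light
    return $n > 0 ? q/false/ : q/true/;
}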
Now all we need are a few lines of code for running some tests.
MAIN:{
    say check_color q/d3/;
    say check_color q/g5/;
    say check_color q/e6/;
    say check_color q/b1/;
    say check_color q/b8/;
    say check_color q/h1/;
    say check_color q/h8/;
}
◇
Fragment referenced in 1.
$ perl perl/ch-1.pl
true
false
true
true
false
true
false
A Knight in chess can move from its current position to any square that is two rows and one column, or two columns and one row, away. Write a script which takes a starting position and an ending position and calculates the least number of moves required.
The bulk of the code is just setting up the main data structure, a graph. For each square of the chessboard we add an edge to all the squares that are reachable by a knight.
sub build_graph{
    my $graph = Graph->new();
    do {
        my $c = $_;
        do {
            my $r = $_;
            my($s, $t);
            ##
            # up
            ##
            $s = $r + 2;
            $t = chr(ord(qq/$c/) - 1);
            ⟨add edge if legal move 8 ⟩
            $t = chr(ord(qq/$c/) + 1);
            ⟨add edge if legal move 8 ⟩
            ##
            # down
            ##
            $s = $r - 2;
            $t = chr(ord(qq/$c/) - 1);
            ⟨add edge if legal move 8 ⟩
            $t = chr(ord(qq/$c/) + 1);
            ⟨add edge if legal move 8 ⟩
            ##
            # left
            ##
            $s = $r - 1;
            $t = chr(ord(qq/$c/) - 2);
            ⟨add edge if legal move 8 ⟩
            $s = $r + 1;
            ⟨add edge if legal move 8 ⟩
            ##
            # right
            ##
            $s = $r - 1;
            $t = chr(ord(qq/$c/) + 2);
            ⟨add edge if legal move 8 ⟩
            $s = $r + 1;
            ⟨add edge if legal move 8 ⟩
        } for 1 .. 8;
    } for q/a/ .. q/h/;
    return $graph;
}
◇
For convenience I use a little bit of nuweb hackery, rather than a new subroutine, to separate out this code, which is repeated in the final generated code file.
After we go through the work of setting up the graph, the result can be easily obtained using Dijkstra’s shortest path algorithm.
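As a sketch of that step (assuming the graph’s vertices are named by column letter and row, e.g. q/g2/, which matches the output below), the Graph module’s SP_Dijkstra method returns the vertices along a shortest path:

my $graph = build_graph();
for my $pair ([qw/g2 a8/], [qw/g2 h2/]) {
    my($start, $finish) = @{$pair};
    # Dijkstra's shortest path over the unweighted knight-move graph
    my @path = $graph->SP_Dijkstra($start, $finish);
    say qq/$start ---> $finish/;
    say @path - 1, q/: /, join q/ -> /, @path;
}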
Finally, here’s a few tests to confirm everything is working right.
$ perl ch-2.pl
g2 ---> a8
4: g2 -> e3 -> c4 -> b6 -> a8
g2 ---> h2
3: g2 -> e1 -> f3 -> h2
The Weekly Challenge 281
Generated Code
Graph.pm
Knight’s Tours