I have some code that feels like a bug in Perl, but am I missing something? Basically, when I localize a hash element in a shared hash, the element still exists (with an undef value) after it leaves the scope.

For example:

use threads;
use threads::shared;

my $hash1 = {};
{ local $hash1->{'desc'} = "blah" }
print exists($hash1->{'desc'}) ? "Hash: exists\n" : "Hash: does not exist\n";

my $hash2 = shared_clone({});
{ local $hash2->{'desc'} = "blah" }
print exists($hash2->{'desc'}) ? "Shared hash: exists\n" : "Shared hash: does not exist\n";
print "Shared hash: is undef\n" if !defined($hash2->{'desc'});

Which prints the following for perl v5.34.0:

Hash: does not exist
Shared hash: exists
Shared hash: is undef

I found a very similar bug that was apparently fixed in perl v5.8.0 for tied hashes. I'm wondering if shared hashes are something different from "tied" hashes and therefore still have the bug?

The bug identified in perldoc perl58delta as being fixed:

  • Localised hash elements (and %ENV) are correctly unlocalised to not exist, if they didn't before they were localised.

    use Tie::Hash;
    tie my %tied_hash => 'Tie::StdHash';
    
    ...
    
    # Nothing has set the FOO element so far
    
    { local $tied_hash{FOO} = 'Bar' }
    
    # This used to print, but not now.
    print "exists!\n" if exists $tied_hash{FOO};
    

    As a side effect of this fix, tied hash interfaces must define the EXISTS and DELETE methods.
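As a workaround for the shared-hash behaviour described in the question (a sketch, not from the question or perldelta), the effect of `local` can be emulated by saving and restoring the element by hand:

```perl
use strict;
use warnings;
use threads;
use threads::shared;

my $hash = shared_clone({});

# Emulate "local $hash->{desc}" on a shared hash: remember whether
# the key existed, set the temporary value, then restore the old
# state (deleting the key if it wasn't there) when the scope ends.
{
    my $existed = exists $hash->{desc};
    my $saved   = $hash->{desc};
    $hash->{desc} = "blah";

    # ... code that should see the temporary value ...

    if ($existed) { $hash->{desc} = $saved }
    else          { delete $hash->{desc} }
}

print exists $hash->{desc} ? "exists\n" : "does not exist\n";  # does not exist
```

Unlike `local`, this restore does not run if the block dies; a guard object (e.g. Scope::Guard) would make it exception-safe.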

Perl Weekly #670 - Conference Season ...

dev.to #perl

Published by Gabor Szabo on Monday 27 May 2024 05:29

Originally published at Perl Weekly 670

Hi there,

Are you a regular at Perl conferences?

If yes, then you have two upcoming conferences: The Perl and Raku Conference in Las Vegas and the London Perl and Raku Conference. Depending on your availability and convenience, I would highly recommend you register your interest in your choice(s) of conference. And if you are attending, do take the plunge and give your first talk if you have not done so before. It doesn't have to be a long talk; you can try a quick 5-minute lightning talk to begin with.

How about becoming a sponsor of the conferences?

Believe it or not, it is vital that we provide financial support in the form of sponsorship. So if you know someone who is in a position to support these events, please share the TPRC 2024 Sponsors and LPW 2024 Sponsors pages with them. It would be a big help in organising such events.

Keynote speakers for TPRC 2024...

I came across this post by Curtis Poe announcing that he is going to be a keynote speaker at the event. There is a bonus for everyone attending: Damian Conway will be giving a keynote remotely. I am sure it is going to be a memorable moment to celebrate the 25th anniversary. Similarly, the London Perl Workshop is celebrating its 20th anniversary this year. I wanted to attend TPRC 2024 in Las Vegas, but for personal reasons I am unable to. What a shame, but at least I am definitely going to be part of LPW 2024, as it is local to me. No need to book travel tickets or reserve a hotel room.

How many of you know about the Pull Request Club?

The Pull Request Club is run by Kivanc Yazan. It started in January 2019, and I had the pleasure of being associated with it from the beginning. I never missed an assignment until my last contribution in January 2022; unfortunately I faced too many distractions and have missed the fun ever since. I found this annual report by the creator himself. If you like contributing to open-source projects, you should join the club and have fun.

For all the cricket fans in India, did you watch the final of IPL 2024? I did, and I was happy to see my favourite team, Kolkata Knight Riders, lifting the trophy. Although SRH, the losing team, was my favourite too, it didn't play to its capability. I am now looking forward to the T20 World Cup next. How about you?

Today is a Bank Holiday in England, so a relaxing day for me. Enjoy the rest of the newsletter. Last but not least, please do look after yourself and your loved ones.

--
Your editor: Mohammad Sajid Anwar.

Sponsors

Getting started with Docker for Perl developers (Free Virtual Workshop)

In this virtual workshop you will learn why and how to use Docker for development and deployment of applications written in Perl. The workshop is free of charge thanks to my supporters via Patreon and GitHub.

Announcements

Being a Keynote Speaker

TPRC 2024 keynote speaker is announced. I am jealous of those able to attend the event.

Articles

Pull Request Club 2021-2023 Report

Finally we have the long-awaited annual report of the Pull Request Club. Happy to see it growing so fast. Congratulations to all contributors.

Deploying Dancer Apps

Being a fan of the Dancer2 framework, I found this blog post very informative, with plenty of handy tricks.

Perl Toolchain Summit 2024 in Lisbon

It is always a pleasure to read a success story from PTS 2024. Here we have another, this one from Kenichi. Thanks for sharing the report with us. It proves the point that Perl is in safe hands.

The Weekly Challenge

The Weekly Challenge by Mohammad Sajid Anwar will help you step out of your comfort zone. You can even win a $50 prize by participating in the weekly challenge. We pick one champion at the end of the month from among all of that month's contributors, thanks to the sponsor Lance Wicks.

The Weekly Challenge - 271

Welcome to a new week with a couple of fun tasks: "Maximum Ones" and "Sort by 1 bits". If you are new to the weekly challenge, why not join us and have fun every week? For more information, please read the FAQ.

RECAP - The Weekly Challenge - 270

Enjoy a quick recap of last week's contributions by Team PWC dealing with the "Special Positions" and "Equalize Array" tasks in Perl and Raku. You will find plenty of solutions to keep you busy.

Distribute Positions

Don't you love a pictorial representation of an algorithm? It makes the discussion so much fun to follow. Highly recommended.

When A Decision Must Be Made

Labelled loops are not very popular among Perl fans, but in certain situations they can be very handy. Check out the reasoning in the post.

Special Levels

A classic use case for PDL, very impressive. Thanks for sharing the knowledge.

Perl Weekly Challenge 270: Special Positions

As always, we get to see Raku's junctions re-created in Perl. That is the beauty of this post every week; you don't want to skip it.

no passion this week!

Compact solutions using the power of Raku are on show. Keep up the great work.

Perl Weekly Challenge 270

I am not sure I have seen Luis use PDL before; I may be wrong. For me, it is encouraging to see the wide use of PDL. Keep up the great work.

Hidden loops. Or no loops at all.

This is truly incredible work: no loops at all. I suggest you take a closer look. Thanks for sharing.

Lonely ones and equalities

Well-documented and well-crafted solutions in Perl, and on top of that you get to play with them. Well done, and keep up the great work.

The Weekly Challenge - 270: Special Positions

Clever use of the CPAN module Math::Matrix. I always encourage the use of CPAN. Well done.

The Weekly Challenge - 270: Equalize Array

An interesting handling of the use cases. It is fun getting into the finer details. Thanks for sharing.

The Weekly Challenge #270

Just one solution this week, with the typical one-line analysis. Keep up the great work.

Special Distributions Position the Elements

The discussion of the solution in Crystal is the highlight for me. It looks easy and readable even though I know nothing about the Crystal language. Highly recommended.

Equalizing positions

For Python fans: the post is always dedicated to Python, but we do receive Perl solutions too. I really enjoyed the compact solution in Python, especially the return list type, which I never knew about before. Thanks for sharing.

Rakudo

2024.21 Curry Primed

Weekly collections

NICEPERL's lists

Great CPAN modules released last week;
StackOverflow Perl report.

You joined the Perl Weekly to get weekly e-mails about the Perl programming language and related topics.

Want to see more? See the archives of all the issues.

Not yet subscribed to the newsletter? Join us free of charge!

(C) Copyright Gabor Szabo
The articles are copyright the respective authors.

Perl

The Weekly Challenge

Published on Monday 27 May 2024 02:11

TABLE OF CONTENTS 01. HEADLINES 02. STAR CONTRIBUTORS 03. CONTRIBUTION STATS 04. GUESTS 05. LANGUAGES 06. CENTURION CLUB 07. DAMIAN CONWAY’s CORNER 08. ANDREW SHITOV’s CORNER 09. PERL SOLUTIONS 10. RAKU SOLUTIONS 11. PERL & RAKU SOLUTIONS HEADLINES Thank you Team PWC for your continuous support and encouragement. STAR CONTRIBUTORS Following members shared solutions to both tasks in Perl and Raku as well as blogged about it.

Prolog

The Weekly Challenge

Published on Monday 27 May 2024 02:11

As you know, The Weekly Challenge, primarily focus on Perl and Raku. During the Week #018, we received solutions to The Weekly Challenge - 018 by Orestis Zekai in Python. It was pleasant surprise to receive solutions in something other than Perl and Raku. Ever since regular team members also started contributing in other languages like Ada, APL, Awk, BASIC, Bash, Bc, Befunge-93, Bourne Shell, BQN, Brainfuck, C3, C, CESIL, Chef, COBOL, Coconut, C Shell, C++, Clojure, Crystal, D, Dart, Dc, Elixir, Elm, Emacs Lisp, Erlang, Excel VBA, F#, Factor, Fennel, Fish, Forth, Fortran, Gembase, GNAT, Go, GP, Groovy, Haskell, Haxe, HTML, Hy, Idris, IO, J, Janet, Java, JavaScript, Julia, Korn Shell, Kotlin, Lisp, Logo, Lua, M4, Maxima, Miranda, Modula 3, MMIX, Mumps, Myrddin, Nelua, Nim, Nix, Node.

Empty `vec()` is not False

Perl questions on StackOverflow

Published by Greg Kennedy on Monday 27 May 2024 00:31

Consider this example:

#!/usr/bin/env perl

my $vec0;

# set the first bit of vec0 to false
vec($vec0, 0, 1) = 0;

print "vec0 is " . length($vec0) . " bytes long\n";

if ($vec0) {
  # what
  print "vec0 is True!\n";
}

A vec used in evaluation seems to (almost) always be True - this is because a vec is really a string, and so $vec0 is a string containing "\0" which is not False according to Perl: only strings "" and "0" are False.

(aside: this is False, despite being non-zero:

vec($vec0, 5, 1) = 1;
vec($vec0, 4, 1) = 1;

because 0x30 is the ASCII code for 0)

Fine: what is the "correct" way to check for an "empty" vector (i.e. all bits set to 0)? Count bits with unpack, regex test m/^\0*$/, "normalize" all vectors by chopping empty bytes off the end? This seems like it should be a solved problem... and why does Perl not treat vec magically for true/false?
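The three candidate checks the question mentions can each be sketched as follows (illustrative only; none of them is blessed as the "correct" way):

```perl
use strict;
use warnings;

my $vec = '';
vec($vec, 0, 1) = 0;    # one all-zero byte: "\0"

# Candidate 1: count set bits with unpack's %32b* checksum.
my $bits_set = unpack '%32b*', $vec;

# Candidate 2: regex test for any non-NUL byte.
my $any_nonzero = $vec =~ /[^\0]/;

# Candidate 3: "normalize" by stripping trailing NUL bytes,
# then test against the empty string.
(my $trimmed = $vec) =~ s/\0+\z//;

print "empty\n" if $bits_set == 0;    # empty
print "empty\n" if !$any_nonzero;     # empty
print "empty\n" if $trimmed eq '';    # empty
```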

Perl Toolchain Summit 2024 in Lisbon

blogs.perl.org

Published by Kenichi Ishigaki on Sunday 26 May 2024 18:14

Last year at the Perl Toolchain Summit (PTS) in Lyon, I left three draft pull requests: one about the class declaration introduced in Perl 5.37, one about the PAUSE on docker, and one about multifactor authentication. I wanted to brush them up and ask Andreas König to merge some, but which should I prioritize this year?

I focused on the web UI in the past because other people tended to deal with the PAUSE backend, especially its indexer. But this year, by the time I was able to start thinking about my plan, Ricardo Signes and Matthew Horsfall had already expressed their plan to migrate the PAUSE to a new server. I was unsure if they would use my docker stuff, but I could safely guess I didn't need to touch it. I also thought we wouldn't have time to finish the multifactor authentication, because it would require changing both the PAUSE itself and the uploader clients, and Ricardo maintains the most popular uploader module. The change for the new class detection was simple, but that didn't mean the result would also be predictable. I decided to investigate how the 02packages index would change first.

I needed to find a way to rebuild the index from scratch to see the differences. I wrote a script to gather author information from a CPAN mirror and filled the PAUSE's user-related tables with dummy data. I wrote another script to register my distributions in the mirror to my local PAUSE. The PAUSE would complain if I registered an older distribution after a newer one, so I had to gather all the information about my distributions and sort them by creation time. It seemed fine now, but it soon started hanging up when I increased the number of the distributions to register. The PAUSE daemon spawned too many child indexer processes and ate up all the memory I allocated to a virtual machine. After several trials and errors, I limited the number of child processes with Parallel::Runner, which I used for the CPANTS for years. Even if it weren't acceptable to Andreas for some reason, it should be easy to ask for the author's help because he (Chad Granum) would be at the PTS. I also had to fix a deadlock in the database due to the lack of proper indices. Matthew had already made a pull request last year, but I misread it and fixed the issue in a different (and inefficient) way.

Now that the script ran without hanging, I compared the generated 02packages index with the one in the mirror. I found more than four thousand lines of difference. I modified my local PAUSE clone to see why that happened. It looked like most of them were removed due to historical changes in the indexing policy, but instead of digging into it further, I decided to use what I got as a reference point and started changing the indexer. After several comparisons, I modified my local indexer to take care of the byte order mark and let it look for class declarations only when a few "use" statements were found. I applied the same changes to my Parse::PMFile module and made two releases before the PTS.

Day 1 of the PTS in Lisbon started with a discussion of the PAUSE migration. While the migration team was preparing, I asked Andreas to merge some of the existing small pull requests. The first one was to replace Travis CI with GitHub Actions by Ricardo. Unfortunately, it turned out that Test::mysqld and App::yath didn't work well in the GitHub Actions environment. I asked Chad for advice, but we couldn't make it work, so I tweaked the workflow file to use the good old "prove" command. The second was to improve password generation using Crypt::URandom by Leon Timmermans. I made another pull request to add it to the cpanfile for GitHub Actions. It might be better to modify our Makefile.PL to use ExtUtils::MakeMaker::CPANfile so that we wouldn't need to maintain both cpanfile and Makefile.PL. Maybe next time.

After dealing with a few more issues and pull requests, we moved on to class detection. As a starter, I asked Andreas to merge a years-old pull request by Ricardo to make the package detection stricter and then a pull request about the BOM I made. We discussed whether we could ignore class declarations by older modules such as MooseX::Declare. With Andreas' nod, I made another pull request and asked Ricardo and Matthew to review it.

I started day two by adding tests about the class detection with Module::Faker. I made another pull request to create a new 08pumpking index per Graham Knop's request, which MetaCPAN would eventually use. After merging them and a few more pull requests, I recreated a draft pull request on the multifactor authentication with pieces I couldn't implement last year (such as recovery codes). We also discussed the deadlock issue. In the end Andreas chose my pull request plus a commit from the one by Matthew. I was sorry we encountered a disk shortage while adding indices. Robert Spier helped us and optimized the database. By the end of the day, we had a few more pull requests merged, including the one for Parallel::Runner, with the help of Chad.

Day 3 was Deployment day. The migration team was busy, and there was no room for other stuff. I walked through the open issues, replied to some, and made a few small pull requests, hoping to revisit them in the future.

On day 4, I spent some time trying to figure out why uploading a large file to the new server didn't work, but in vain. I also attended a discussion about future PAUSE development. It would be nice to see the development continue after the offline event.

Many thanks to Breno Oliveira, Philippe Bruhat, and Laurent Boivin for organizing this event again and to our generous sponsors.

Monetary sponsors: Booking.com, The Perl and Raku Foundation, Deriv, cPanel, Inc Japan Perl Association, Perl-Services, Simplelists Ltd, Ctrl O Ltd, Findus Internet-OPAC, Harald Joerg, Steven Schubiger.

In-kind sponsors: Fastmail, Grant Street Group, Deft, Procura, Healex GmbH, SUSE, Zoopla.

Let's say you have specified the second argument of the open function like this:

open my $fh, ">:via(File::BOM):encoding(UTF-8)", $file or die "Cannot open $file: $!";

Here you specify :via(File::BOM) first and :encoding(UTF-8) second. Then the output string will be cut off in the middle.

The following script attempts to output a UTF-8 text file with a BOM. The contents should be a concatenation of strings from "xxx yyyy1" to "xxx yyyy100", incrementing the trailing number, delimited with " / ".

#!/usr/bin/perl

# bomTest.pl

use strict;
use warnings;
use File::BOM;
use feature 'say';

my $file = '/home/cf/Desktop/foo.txt';

open my $fh, ">:via(File::BOM):encoding(UTF-8)", $file or die "Cannot open $file: $!";

say $fh 'xxx yyyy1 / xxx yyyy2 / xxx yyyy3 / xxx yyyy4 / xxx yyyy5 / xxx yyyy6 / xxx yyyy7 / xxx yyyy8 / xxx yyyy9 / xxx yyyy10 / ' .
'xxx yyyy11 / xxx yyyy12 / xxx yyyy13 / xxx yyyy14 / xxx yyyy15 / xxx yyyy16 / xxx yyyy17 / xxx yyyy18 / xxx yyyy19 / xxx yyyy20 / ' .
... snip ...
'xxx yyyy71 / xxx yyyy72 / xxx yyyy73 / xxx yyyy74 / xxx yyyy75 / xxx yyyy76 / xxx yyyy77 / xxx yyyy78 / xxx yyyy79 / xxx yyyy80 / ' .
'xxx yyyy81 / xxx yyyy82 / xxx yyyy83 / xxx yyyy84 / xxx yyyy85 / xxx yyyy86 / xxx yyyy87 / xxx yyyy88 / xxx yyyy89 / xxx yyyy90 / ' .
'xxx yyyy91 / xxx yyyy92 / xxx yyyy93 / xxx yyyy94 / xxx yyyy95 / xxx yyyy96 / xxx yyyy97 / xxx yyyy98 / xxx yyyy99 / xxx yyyy100';

close $fh;

The output file foo.txt will be a UTF-8 file with the BOM (0xEF 0xBB 0xBF) at the top of the file, but the string is truncated in the middle, as below:

xxx yyyy1 / xxx yyyy2 / xxx yyyy3 / xxx yyyy4 / xxx yyyy5 / xxx yyyy6 / xxx yyyy7 / xxx yyyy8 / xxx yyyy9 / xxx yyyy10 / xxx yyyy11 / xxx yyyy12 / xxx yyyy13 / xxx yyyy14 / xxx yyyy15 / xxx yyyy16 / xxx yyyy17 / xxx yyyy18 / xxx yyyy19 / xxx yyyy20 / ...snip... xxx yyyy71 / xxx yyyy72 / xxx yyyy73 / xxx yyyy74 / xxx yyyy75 / xxx yyyy76 / xxx yyyy77 / xxx yyyy78 / xxx yyyy79 / xxx yy

The output stops in the middle of "xxx yyyy80".

Now if you change the script like this:

open my $fh, ">:encoding(UTF-8):via(File::BOM)", $file or die "Cannot open $file: $!";

The change is the order of the I/O layers: you specify :encoding(UTF-8) first and :via(File::BOM) last.

Then the script runs completely, through to "xxx yyyy100".

What is this phenomenon? Is it a bug in Perl's encoding layer, or in the File::BOM module? Or is it reasonable, specified behaviour of both?

I have the following code in an ANSI-encoded Perl script:

use strict;
use warnings;
use Encode;

sub UnicodeDecHashNumsToText($)
{
    use open qw/:std :encoding(UTF-8)/; # Tell Perl that standard output etc. use UTF-8

    my( $lText ) = @_;
    print "Orig Text = '$lText'\n";
    $lText =~ s/^#(\d+)/chr($1)/e; # e treats the replacement as code, not text
    print "Returning: (".length($lText).")'$lText'\n";
    return $lText;
}

# Comparing
# The 'got' value is obtained reading a normal file.
my $got = UnicodeDecHashNumsToText( "#8364" );

# The 'expected' value is read through a select statement 
# from a database in locale: en_US.819
my $expected = "€"; 

print "'$got' vs  '$expected' \n"; 
if( $got  ne $expected ) { print "FAILED\n"; } else { print "SUCCESS\n"; }

Output:

Orig Text = '#8364'
Returning: (1)'€'
'€' vs  ''
FAILED

The comparison fails. What changes or additions need to be made for the comparison to succeed?
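For illustration only: a frequent cause of this kind of failure is comparing a decoded character string against undecoded bytes. The byte value below is an assumption for demonstration, not the asker's actual data:

```perl
use strict;
use warnings;
use Encode qw(decode);

# A decoded character string: U+20AC EURO SIGN.
my $got = chr(8364);

# Suppose the "expected" value arrived as raw UTF-8 bytes
# (e.g. read from a file or database without being decoded).
my $expected_bytes = "\xE2\x82\xAC";

# Comparing a character string to undecoded bytes fails:
print $got eq $expected_bytes ? "same\n" : "different\n";   # different

# Decode the bytes first, so both sides are character strings:
my $expected = decode('UTF-8', $expected_bytes);
print $got eq $expected ? "same\n" : "different\n";         # same
```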

Equalizing positions

dev.to #perl

Published by Simon Green on Sunday 26 May 2024 12:29

Weekly Challenge 270

Each week Mohammad S. Anwar sends out The Weekly Challenge, a chance for all of us to come up with solutions to two weekly tasks. My solutions are written in Python first, and then converted to Perl. It's a great way for us all to practice some coding.

Challenge, My solutions

Task 1: Special Positions

Task

You are given a m x n binary matrix.

Write a script to return the number of special positions in the given binary matrix.

A position (i, j) is called special if $matrix[i][j] == 1 and all other elements in the row i and column j are 0.

My solution

For the input from the command line, I take a JSON string and convert that into a list of lists of integers.

This is a break down of the steps I take to complete the task.

  1. Set the special_position value to 0.
  2. Set rows and cols to the number of rows and columns in the matrix
  3. Create two lists (arrays in Perl) called row_count and col_count with zeros for the number of rows and columns respectively.
  4. Loop through each row and each column in the matrix. If the value is 1, increment the row_count for the row and col_count for the column by one. I also check that the number of items in this row is the same as the number of items in the first row.
  5. Loop through each row and each column in the matrix. If the value at that position is 1 and the row_count for the row is 1 (this would indicate that the other elements in the row are 0) and the col_count is 1, add one to the special_position variable.
  6. Return the special_position value.
def special_positions(matrix: list) -> int:
    rows = len(matrix)
    cols = len(matrix[0])
    special_position = 0

    row_count = [0] * rows
    col_count = [0] * cols

    for row in range(rows):
        if len(matrix[row]) != cols:
            raise ValueError(f"Row {row} has the wrong number of columns")

        for col in range(cols):
            if matrix[row][col]:
                row_count[row] += 1
                col_count[col] += 1

    for row in range(rows):
        for col in range(cols):
            if matrix[row][col] and row_count[row] == 1 and col_count[col] == 1:
                special_position += 1

    return special_position

Examples

$ ./ch-1.py "[[1, 0, 0],[0, 0, 1],[1, 0, 0]]"
1

$ ./ch-1.py "[[1, 0, 0],[0, 1, 0],[0, 0, 1]]"
3
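The author notes the solutions are written in Python first and then converted to Perl. As an illustration only (a sketch of the same two-pass counting approach, not the author's actual Perl), it could look like:

```perl
use strict;
use warnings;

sub special_positions {
    my @matrix = @_;
    my $rows = @matrix;
    my $cols = @{ $matrix[0] };

    my @row_count = (0) x $rows;
    my @col_count = (0) x $cols;

    # First pass: count the ones in every row and column.
    for my $r (0 .. $rows - 1) {
        for my $c (0 .. $cols - 1) {
            next unless $matrix[$r][$c];
            $row_count[$r]++;
            $col_count[$c]++;
        }
    }

    # Second pass: a cell is special if it is 1 and is the only
    # 1 in both its row and its column.
    my $special = 0;
    for my $r (0 .. $rows - 1) {
        for my $c (0 .. $cols - 1) {
            $special++
                if $matrix[$r][$c]
                && $row_count[$r] == 1
                && $col_count[$c] == 1;
        }
    }
    return $special;
}

print special_positions([1,0,0], [0,0,1], [1,0,0]), "\n";   # 1
print special_positions([1,0,0], [0,1,0], [0,0,1]), "\n";   # 3
```

This matches the Python output for the two examples shown above.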

Task 2: Equalize Array

Task

You are given an array of integers, @ints, and two integers, $x and $y.

Write a script to execute one of the two options:

  • Level 1: Pick an index i of the given array and do $ints[i] += 1.
  • Level 2: Pick two different indices i,j and do $ints[i] +=1 and $ints[j] += 1.

You are allowed to perform as many operations of either level as you want, to make every element in the given array equal. There is a cost attached to each level: for Level 1 the cost is $x, and for Level 2 it is $y.

In the end return the minimum cost to get the work done.

Known issue

Before I write about my solution: it returns the expected results for the two examples, but it will not always give the minimum score.

For the array (4, 4, 2) with $x of 10 and $y of 1, it will return 20 (perform level 1 on the third value twice). However if you perform level 2 on the first and third value (5, 4, 3), and then on the second and third value (5, 5, 4), and finally level 1 on the last value (5, 5, 5), you'd get a score of 12.

File a bug in Bugzilla, Jira or Github, and we'll fix it later :P

My solution

For input from the command line, I take the last two values to be x and y, and the rest of the input to be ints.

The first step I take is to flip the array to be the number needed to reach the target value (maximum of the values).

def equalize_array(ints: list, x: int, y: int) -> int:
    score = 0
    # Calculate the needed values
    max_value = max(ints)
    needed = [max_value - i for i in ints]

I then perform level two only if y is less than twice the value of x. If it isn't, then I will always get the same or a lower score by performing level one on each value.

For level two, I sort the indexes (not values) of the needed list by their value, with the highest value first. If the second-highest value is 0, there are no more level two operations to perform, and I exit the loop. Otherwise I take one off the top two values in the needed array and continue until the second-highest value is 0. For each iteration, I add y to the score.

    if len(ints) > 1 and y < x * 2:
        while True:
            sorted_index = sorted(
                range(len(ints)),
                key=lambda index: needed[index],
                reverse=True
            )

            if needed[sorted_index[1]] == 0:
                break

            needed[sorted_index[0]] -= 1
            needed[sorted_index[1]] -= 1
            score += y

Finally, my code performs the Level 1 operation. As level one takes one off each needed number, I simply multiply the sum of the remaining needed values by x and add it to the score. I then return the value of the score variable.

    score += sum(needed) * x
    return score

Examples

$ ./ch-2.py 4 1 3 2
9

$ ./ch-2.py 2 3 3 3 5 2 1
6
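For illustration, the same greedy approach (with the same known limitation described above) could be sketched in Perl. This is an assumption-laden port, not the author's actual code; the argument order here is ($x, $y, @ints):

```perl
use strict;
use warnings;
use List::Util qw(max sum);

sub equalize_array {
    my ($x, $y, @ints) = @_;
    my $score = 0;

    # How far each element is from the target (the maximum).
    my $max    = max(@ints);
    my @needed = map { $max - $_ } @ints;

    # Level 2 pays off only when $y is cheaper than two Level 1 steps.
    if (@ints > 1 && $y < $x * 2) {
        while (1) {
            # Indexes of @needed, largest remaining value first.
            my @idx = sort { $needed[$b] <=> $needed[$a] } 0 .. $#needed;
            last if $needed[ $idx[1] ] == 0;
            $needed[ $idx[0] ]--;
            $needed[ $idx[1] ]--;
            $score += $y;
        }
    }

    # Level 1 for whatever remains.
    $score += sum(@needed) * $x;
    return $score;
}

print equalize_array(3, 2, 4, 1), "\n";             # 9
print equalize_array(2, 1, 2, 3, 3, 3, 5), "\n";    # 6
```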

We generate large amounts of Perl code and include it into a driver program. On a newer system version it takes a very long time to process the include. Any ideas where to start looking?

(cdxcvii) 8 great CPAN modules released last week

r/perl

Published by /u/niceperl on Saturday 25 May 2024 20:05

(cdxcvii) 8 great CPAN modules released last week

Niceperl

Published by Unknown on Saturday 25 May 2024 22:05

Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater than or equal to 12.

  1. App::DBBrowser - Browse SQLite/MySQL/PostgreSQL databases and their tables interactively.
    • Version: 2.413 on 2024-05-23, with 14 votes
    • Previous CPAN version: 2.410 was 19 days before
    • Author: KUERBIS
  2. App::Netdisco - An open source web-based network management tool.
    • Version: 2.076005 on 2024-05-20, with 16 votes
    • Previous CPAN version: 2.076004 was 17 days before
    • Author: OLIVER
  3. Devel::CheckOS - a script to package Devel::AssertOS modules with your code.
    • Version: 2.04 on 2024-05-22, with 17 votes
    • Previous CPAN version: 2.02 was 7 days before
    • Author: DCANTRELL
  4. Dist::Zilla - distribution builder; installer not included!
    • Version: 6.032 on 2024-05-25, with 184 votes
    • Previous CPAN version: 6.031 was 6 months, 4 days before
    • Author: RJBS
  5. MCE - Many-Core Engine for Perl providing parallel processing capabilities
    • Version: 1.890 on 2024-05-24, with 103 votes
    • Previous CPAN version: 1.889 was 8 months, 11 days before
    • Author: MARIOROY
  6. MCE::Shared - MCE extension for sharing data supporting threads and processes
    • Version: 1.887 on 2024-05-24, with 15 votes
    • Previous CPAN version: 1.886 was 8 months, 11 days before
    • Author: MARIOROY
  7. Minion::Backend::mysql - MySQL backend
    • Version: 1.006 on 2024-05-22, with 13 votes
    • Previous CPAN version: 1.005 was 16 days before
    • Author: PREACTION
  8. Object::Remote - Call methods on objects in other processes or on other hosts
    • Version: 0.004004 on 2024-05-23, with 20 votes
    • Previous CPAN version: 0.004001 was 4 years, 5 months, 26 days before
    • Author: HAARG

I wrote some code to use the 1Password CLI

rjbs forgot what he was saying

Published by Ricardo Signes on Saturday 25 May 2024 12:00

Every time I store an API token in a plaintext file or an environment variable, it creates a lingering annoyance that follows me around wherever I go. Every year or two, another one of these lands on the pile. I am finally working on purging them all. I’m doing it with the 1Password CLI, and so far so good.

op

1Password’s CLI is op, which lets you do many, many different things. I was only concerned with two: It lets you read single fields from the vault and it lets you read entire items. For example, take this login:

a screenshot of my Pobox login

You can see there are a bunch of fields, like username and password and website. You can fetch all of them or just one. It’s a little weird, but it’s much easier to get a locator for one field than for the whole item. If you click the “Copy Secret Reference” option, you’ll get something like this on your clipboard:

"op://rjbs/Pobox/password"

You can pass that URL to op read and it will print out the value of the field. Here, that’s the password. Getting one field at a time can be useful if you only need to retrieve a password or TOTP secret or API token. Often, though, you’ll want to get the whole login at once. It would mean you could just store the item’s id rather than a cleartext username and a reference to the password field. Or worse, a reference to the password field and another one to the TOTP field. Also, since each field needs to be retrieved separately with op read, it means more external processes and more chances for weird errors.

The op item get command can fetch an entire item with all its fields. It can spit the whole item out as JSON. Here’s a limited subset of such a document:

{
  "fields": [
    {
      "id": "password",
      "type": "CONCEALED",
      "purpose": "PASSWORD",
      "label": "password",
      "value": "eatmorescrapple",
      "reference": "op://rjbs/Pobox/password",
      "password_details": {
        "strength": "DELICIOUS"
      }
    }
  ]
}
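For illustration, JSON like this is easy to consume from Perl. The snippet below parses the sample document rather than shelling out to op, and the field-selection logic is an assumption, not from the post:

```perl
use strict;
use warnings;
use JSON::PP qw(decode_json);

# The sample document from above, inlined for demonstration.
my $json = <<'JSON';
{
  "fields": [
    {
      "id": "password",
      "type": "CONCEALED",
      "purpose": "PASSWORD",
      "label": "password",
      "value": "eatmorescrapple",
      "reference": "op://rjbs/Pobox/password"
    }
  ]
}
JSON

my $item = decode_json($json);

# Find the field whose purpose is PASSWORD and take its value.
my ($password) =
    map  { $_->{value} }
    grep { ($_->{purpose} // '') eq 'PASSWORD' }
    @{ $item->{fields} // [] };

print "$password\n";   # eatmorescrapple
```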

Unfortunately, 1Password doesn’t make it trivial to get the argument you need to pass op item get, but it’s not really hard. You can use “Copy Private Link”, which will get you a URL something like this (line breaks introduced by me):

https://start.1password.com/open/i?a=XB4AE5Q2ESODUTKETZB3BQGCM4
    &v=flk3x357inyiw22qpoiubhsgin
    &i=7wdr3xyzzym2xgorp4zx22zq3h
    &h=example.1password.com

The i= parameter is the item’s id. You can use that as the argument to op item get. Alternatively, given the URL like op://rjbs/Pobox/password you can extract the vault name (“rjbs”) and the item name (“Pobox”) and pass those as separate parameters that will be used to search for the item.
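For illustration, pulling the i= parameter out of a private link can be done with a simple pattern match (a sketch, not from the post):

```perl
use strict;
use warnings;

# The sample "Copy Private Link" URL from above.
my $link = 'https://start.1password.com/open/i'
         . '?a=XB4AE5Q2ESODUTKETZB3BQGCM4'
         . '&v=flk3x357inyiw22qpoiubhsgin'
         . '&i=7wdr3xyzzym2xgorp4zx22zq3h'
         . '&h=example.1password.com';

# Grab the value of the i= query parameter: the item's id.
my ($item_id) = $link =~ /[?&]i=([^&]+)/;

print "$item_id\n";   # 7wdr3xyzzym2xgorp4zx22zq3h
```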

But why do either? You can just use Password::OnePassword::OPCLI!

Password::OnePassword::OPCLI

Here are two tiny examples of its use:

my $one_pw = Password::OnePassword::OPCLI->new;

# Get the string found in one field in your 1Password storage:
my $string = $one_pw->get_field("op://rjbs/Pobox/password");

# Get the complete document for an item, as a hashref:
my $pw_item = $one_pw->get_item("7wdr3xyzzym2xgorp4zx22zq3h");

Hopefully by now you can imagine what this is all doing. get_item returns the data structure that you’d get from op item get. You can look at its fields entry and find what you need. It does have one other trick worth mentioning. Because it’s a bit annoying to get an item’s unique identifier, you can pass one of those op:// URLs, dropping the field name, like this:

# Get the complete document for an item, as a hashref:
my $pw_item = $one_pw->get_item("op://rjbs/Pobox");
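As a sketch of what digging into the fields entry looks like, here is one way to fish the password out of a get_item-style hashref. The data mirrors the JSON subset shown earlier; the helper is mine, not part of the module’s API.

```perl
use strict;
use warnings;
use List::Util qw(first);

# Hashref shaped like the JSON subset shown earlier.
my $pw_item = {
  fields => [
    {
      id        => 'password',
      type      => 'CONCEALED',
      purpose   => 'PASSWORD',
      label     => 'password',
      value     => 'eatmorescrapple',
      reference => 'op://rjbs/Pobox/password',
    },
  ],
};

# Return the value of the field whose purpose is PASSWORD.
sub password_from_item {
    my ($item) = @_;
    my $field = first { ($_->{purpose} // q{}) eq 'PASSWORD' }
                @{ $item->{fields} };
    return $field ? $field->{value} : undef;
}

print password_from_item($pw_item), "\n";  # eatmorescrapple
```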

I’m currently imagining a world where I stick those URLs in place of API tokens and make my software smart enough to know that if it’s given an API token string that starts with op://, it should treat it as a 1Password reference. I haven’t implemented everything I need for that, but I did write something to use this with Dist::Zilla.

Dist::Zilla and 1Password

The first thing I wanted to use all this for was my PAUSE password. Unfortunately for me, this was sort of complicated. Or, if not complicated, just tedious. I made a few false starts, but I’m just going to describe the one that I’m running with.

Dist::Zilla is the tool I use (and wrote) for making releases of my CPAN distributions. It’s usually configured with an INI file, like this one:

name    = Test-BinaryData
author  = Ricardo Signes <cpan@semiotic.systems>
license = Perl_5
copyright_holder = Ricardo Signes
copyright_year   = 2010

[@RJBS]
perl-window = long-term

Each section (the things in [...]) is a plugin of some sort. If the name starts with an @ it’s a bundle of plugins instead. But there’s another less commonly seen sigil for plugins: %. A percent sign means that the thing being loaded isn’t a plugin but a stash, which holds data for other plugins to use. These will more often be in ~/.dzil/config.ini than in each project.

The UploadToCPAN plugin, which actually uploads tarballs to the CPAN, looks in a few places for your credentials:

  • the %PAUSE stash (or another stash of your choosing)
  • ~/.pause, where CPAN::Uploader usually puts these credentials
  • user input when prompted

The %PAUSE stash was slightly overspecified in the code. It had to be a bit of configuration with the username and password given as text. What I did was relax that so that any stash implementing the (long-existing) Login role could be used. Then I wrote a new implementation of that role, Dist::Zilla::Stash::OnePasswordLogin. In that version of the stash, you only need to provide an item locator, and it will look up the username and password just in time. So, I have something like this in my global config now:

[%OnePasswordLogin / %PAUSE]
item = op://rjbs/PAUSE

Who cares if somebody steals this URL? They can’t read the credential unless I authenticate with 1Password at the time of reading. Putting other login credentials into your configuration for other plugins is similarly safe. Now, when I run dzil release, at the end I’m prompted to touch the fingerprint scanner to finish releasing. Not only is it more secure, but it feels very slightly like I’m in some kind of futuristic hacker movie.

What more could I want from my life as a computer programmer?

# Perl Weekly Challenge 270: Special Positions

blogs.perl.org

Published by laurent_r on Saturday 25 May 2024 02:52

These are some answers to the Week 270, Task 1, of the Perl Weekly Challenge organized by Mohammad S. Anwar.

Spoiler Alert: This weekly challenge deadline is due in a few days from now (on May 26, 2024 at 23:59). This blog post provides some solutions to this challenge. Please don’t read on if you intend to complete the challenge on your own.

Task 1: Special Positions

You are given an m x n binary matrix.

Write a script to return the number of special positions in the given binary matrix.

A position (i, j) is called special if $matrix[i][j] == 1 and all other elements in the row i and column j are 0.

Example 1

Input: $matrix = [ [1, 0, 0],
                   [0, 0, 1],
                   [1, 0, 0],
                 ]
Output: 1

There is only one special position (1, 2) as $matrix[1][2] == 1
and all other elements in row 1 and column 2 are 0.

Example 2

Input: $matrix = [ [1, 0, 0],
                   [0, 1, 0],
                   [0, 0, 1],
                 ]
Output: 3

Special positions are (0,0), (1, 1) and (2,2).
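Before the solutions, one observation worth noting: a position is special exactly when its cell is 1 and both its row sum and its column sum are 1. That yields a compact Perl sketch based on precomputed sums (an alternative approach, not the one used in the solutions that follow):

```perl
use strict;
use warnings;
use feature 'say';
use List::Util qw(sum0);

# A cell is special iff it is 1 and its whole row and whole
# column each sum to exactly 1.
sub special_positions_by_sums {
    my ($mat) = @_;
    my @row_sum = map { sum0 @$_ } @$mat;
    my @col_sum = (0) x @{ $mat->[0] };
    for my $row (@$mat) {
        $col_sum[$_] += $row->[$_] for 0 .. $#{$row};
    }
    my $count = 0;
    for my $i (0 .. $#{$mat}) {
        for my $j (0 .. $#{ $mat->[$i] }) {
            $count++ if $mat->[$i][$j] == 1
                     && $row_sum[$i] == 1
                     && $col_sum[$j] == 1;
        }
    }
    return $count;
}

say special_positions_by_sums([[1,0,0], [0,0,1], [1,0,0]]);  # 1
say special_positions_by_sums([[1,0,0], [0,1,0], [0,0,1]]);  # 3
```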

Special Positions in Raku

We use an array slice (with the any junction) to check rows and a standard for loop to check columns.

sub special-positions (@mat) {
    my $row-max = @mat[0].end;
    my $count = 0;
    IND_I: for 0..$row-max -> $i {
        for 0..@mat.end -> $j {
            next if @mat[$i][$j] != 1;
            next unless 
                (@mat[$i][0..^$j, $j^..$row-max]).any != 0;
            for 0..@mat.end -> $k {
                next if $k == $i;
                next IND_I unless @mat[$i][$k] == 0;
            }
            # say "$i, $j"; # uncomment to see the positions
            $count++;
        }
    }
    return $count;
}

my @tests = 
        [ [1, 0, 0],
          [0, 0, 1],
          [1, 0, 0],
        ],
        [ [1, 0, 0],
          [0, 1, 0],
          [0, 0, 1],
        ];
for @tests -> @test {
    printf "%-8s %-8s ... => ", "@test[0]", "@test[1]";
    say special-positions @test;
}

This program displays the following output:

$ raku ./special-positions.raku
1 0 0    0 0 1    ... => 1
1 0 0    0 1 0    ... => 3

Special Positions in Perl

This is a port to Perl of the above Raku program. Since Perl doesn't have junctions, we had to replace the any junction with a for loop.

use strict;
use warnings;
use feature 'say';

sub special_positions {
    my $mat = shift;
    my $row_max = $#{$mat->[0]};
    my $col_max =  $#{$mat};
    my $count = 0;
    for my $i (0..$row_max) {
    IND_J: for my $j (0..$col_max) {
            next if $mat->[$i][$j] != 1;
            # check row
            for my $m (0..$row_max) {
                next if $m == $i;
                next IND_J unless $mat->[$m][$j] == 0;
            }
            # check column
            for my $k (0..$col_max) {
                next if $k == $j;
                next IND_J unless $mat->[$i][$k] == 0;
            }
            # say "$i, $j"; # uncomment to see the positions
            $count++;
        }
    }
    return $count;
}

my @tests = (
        [ [1, 0, 0],
          [0, 0, 1],
          [1, 0, 0],
        ],
        [ [1, 0, 0],
          [0, 1, 0],
          [0, 0, 1],
        ]
        );
for my $test (@tests) {
    printf "[%-8s %-8s ...] => ", "@{$test->[0]}", "@{$test->[1]}";
    say special_positions $test;
}

This program displays the following output:

$ perl ./special-positions.pl
[1 0 0    0 0 1    ...] => 1
[1 0 0    0 1 0    ...] => 3
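As an aside, core Perl does ship List::Util, whose any function can stand in for the Raku any junction when checking a row or column. A small illustration (a style suggestion, not part of the solution above):

```perl
use strict;
use warnings;
use List::Util qw(any);

my @row = (0, 1, 0);

# True if any element of the row is non-zero, playing the role
# of Raku's @row.any != 0 junction test.
if (any { $_ != 0 } @row) {
    print "row has a non-zero element\n";
}
```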

Wrapping up

The next week of the Perl Weekly Challenge will start soon. If you want to participate in this challenge, please check https://perlweeklychallenge.org/ and make sure you answer the challenge before 23:59 BST (British summer time) on June 2, 2024. And, please, also spread the word about the Perl Weekly Challenge if you can.

final copyedits of perldelta

Perl commits on GitHub

Published by haarg on Friday 24 May 2024 20:35

final copyedits of perldelta

From Huh to Hero: Demystifying Perl in Two Easy Lessons (Part 2)

Perl on Medium

Published by Chaitanya Agrawal on Friday 24 May 2024 19:32

You’ve conquered the Perl basics in Part 1, but the adventure continues! In Part 2, we’ll delve deeper into the world of Perl, equipping…

Job offer for company that uses Perl, is this a good move?

r/perl

Published by /u/Roodiestue on Friday 24 May 2024 19:26

Hi, I recently got an offer for Senior SWE (current title at my company now) for a company that heavily utilizes Perl. I was wondering if folks from this community could offer some insight on what it's like working with Perl and also what, if any, potential long-term career implications are of becoming a Perl developer? Particularly I'm worried of pigeon-holing myself since Perl is not as heavily used in todays age and this company does not make use of modern cloud tools and deployments.

I am a Java developer (5 YOE) at a enterprise software company that is deployed in GCP. We are pretty regularly adopting new technologies so I'm gaining some valuable and relevant industry experience here but I am looking for a change and more opportunity to lead projects and mentor junior engineers.

The company seems good, great WLB, I liked the manager, and with the bonus (base is roughly the same) it would be about a ~8% TC increase plus a lot more stock (monopoly money, private RSUs).

Does anyone have experience transitioning from a Perl based company to a cloud based company with a more modern tech stack? Is this a backwards direction for me, should I continue with my Java development and instead look for opportunities that will offer more marketable skills?

Any input is appreciated, thank you for reading.

submitted by /u/Roodiestue
[link] [comments]

Deploying Dancer Apps

perl.com

Published on Friday 24 May 2024 18:25

This article was originally published at Perl Hacks.


Over the last week or so, as a background task, I’ve been moving domains from an old server to a newer and rather cheaper server. As part of this work, I’ve been standardising the way I deploy web apps on the new server and I thought it might be interesting to share the approach I’m using and talking about a couple of CPAN modules that are making my life easier.

As an example, let’s take my Klortho app. It dispenses useful (but random) programming advice. It’s a Dancer2 app that I wrote many years ago and have been lightly poking at occasionally since then. The code is on GitHub and it’s currently running at klortho.perlhacks.com. It’s a simple app that doesn’t need a database, a cache or anything other than the Perl code.

Dancer apps are all built on PSGI, so they have all of the deployment flexibility you get with any PSGI app. You can take exactly the same code and run it as a CGI program, a mod_perl handler, a FastCGI program or as a stand-alone service running behind a proxy server. That last option is my favourite, so that’s what I’ll be talking about here.
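That flexibility comes from how small the PSGI interface is: an application is just a code reference that takes an environment hashref and returns a three-element array of status, headers and body. A minimal sketch, with no Dancer involved:

```perl
use strict;
use warnings;

# A PSGI application: takes an environment hashref, returns
# [ $status, \@headers, \@body ].
my $app = sub {
    my ($env) = @_;
    return [
        200,
        [ 'Content-Type' => 'text/plain' ],
        [ "Hello from $env->{PATH_INFO}\n" ],
    ];
};

# Any PSGI server (plackup, Starman, ...) can run this coderef;
# here we call it directly just to show the contract.
my $res = $app->({ PATH_INFO => '/advice' });
print $res->[0], "\n";    # 200
print @{ $res->[2] };     # Hello from /advice
```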

Starting a service daemon for a PSGI app is simple enough – just running “plackup app.psgi” is all you really need. But you probably won’t get a particularly useful service daemon out of that. For example, you’ll probably get a non-forking server that will only respond to a single request at a time. It’ll be good enough for testing, but you’ll want something more robust for production. So you’ll want to tell “plackup” to use Starman or something like that.  And you’ll want other options to tell the service which port to run on. You’ll end up with a quite complex start-up command line to start the server. So, if you’re anything like me, you’ll put that all in a script which gets added to the code repo.

But it’s still all a bit amateur. Linux has a flexible and sophisticated framework for starting and stopping service daemons. We should probably look into using that instead. And that’s where my first module recommendation comes into play – Daemon::Control. Daemon::Control makes it easy to create service daemon control scripts that fit in with the standard Linux way of doing things. For example, my Klortho repo contains a file called klortho_service which looks like this:

#!/usr/bin/env perl

use warnings;
use strict;
use Daemon::Control;

use ENV::Util -load_dotenv;

use Cwd qw(abs_path);
use File::Basename;

Daemon::Control->new({
  name      => ucfirst lc $ENV{KLORTHO_APP_NAME},
  lsb_start => '$syslog $remote_fs',
  lsb_stop  => '$syslog',
  lsb_sdesc => 'Advice from Klortho',
  lsb_desc  => 'Klortho knows programming. Listen to Klortho',
  path      => abs_path($0),

  program      => '/usr/bin/starman',
  program_args => [ '--workers', 10, '-l', ":$ENV{KLORTHO_APP_PORT}",
                    dirname(abs_path($0)) . '/app.psgi' ],

  user  => $ENV{KLORTHO_OWNER},
  group => $ENV{KLORTHO_GROUP},

  pid_file    => "/var/run/$ENV{KLORTHO_APP_NAME}.pid",
  stderr_file => "$ENV{KLORTHO_LOG_DIR}/error.log",
  stdout_file => "$ENV{KLORTHO_LOG_DIR}/output.log",

  fork => 2,
})->run;

This code takes my hacked-together service start script and raises it to another level. We now have a program that works the same way as other daemon control programs like “apachectl” that you might have used. It takes command line arguments, so you can start and stop the service (with “klortho_service start”, “klortho_service stop” and “klortho_service restart”) and query whether or not the service is running with “klortho_service status”. There are several other options, which you can see by running “klortho_service” without any arguments. Notice that it also writes the daemon’s output (including errors) to files under the standard Linux logs directory. Redirecting those to a more modern logging system is a task for another day.

Actually, thinking about it, this is all like the old “System V” service management system. I should see if there’s a replacement that works with “systemd” instead.

And if you look at line 7 in the code above, you’ll see the other CPAN module that’s currently making my life a lot easier – ENV::Util. This is a module that makes it easy to work with “dotenv” files. If you haven’t come across “dotenv” files, here’s a brief explanation – they’re files that are tied to your deployment environments (development, staging, production, etc.) and they contain definitions of environment variables which are used to control how your software acts in the different environments. For example, you’ll almost certainly want to connect to a different database instance in your different environments, so you would have a different “dotenv” file in each environment which defines the connection parameters for the appropriate database in that environment. As you need different values in different environments (and, also, because you’ll probably want sensitive information like passwords in the file) you don’t want to store your “dotenv” files in your source code control. But it’s common to add a file (called something like “.env.sample”) which contains a list of the required environment variables along with sample values.

My Klortho program doesn’t have a database. But it does need a few environment variables. Here’s its “.env.sample” file:

export KLORTHO_APP_NAME=klortho
export KLORTHO_OWNER=someone
export KLORTHO_GROUP=somegroup
export KLORTHO_LOG_DIR=/var/log/$KLORTHO_APP_NAME
export KLORTHO_APP_PORT=9999

And near the top of my service daemon control program, you’ll see the line:

use ENV::Util -load_dotenv;

That looks to see if there’s a “.env” file in the current directory and, if it finds one, it is loaded and the contents are inserted in the “%ENV” hash – from where they can be accessed by the rest of the code.
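In effect, the load boils down to something like this hand-rolled sketch. For illustration only: this is not ENV::Util’s actual implementation, and it skips details such as interpolating one variable inside another.

```perl
use strict;
use warnings;

# Sketch: turn "export KEY=value" (or plain "KEY=value") lines
# into %ENV entries, skipping blanks and comments.
sub load_dotenv_lines {
    my (@lines) = @_;
    for my $line (@lines) {
        next if $line =~ /^\s*(?:#|$)/;
        if ($line =~ /^\s*(?:export\s+)?(\w+)=(.*)$/) {
            $ENV{$1} = $2;
        }
    }
}

load_dotenv_lines(
    'export KLORTHO_APP_NAME=klortho',
    'export KLORTHO_APP_PORT=9999',
);

print "$ENV{KLORTHO_APP_NAME}:$ENV{KLORTHO_APP_PORT}\n";  # klortho:9999
```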

There’s one piece of the process missing. It’s nothing clever. I just need to generate a configuration file so the proxy server (I use “nginx”) reroutes requests to klortho.perlhacks.com so that they’re processed by the daemon running on whatever port is configured in “KLORTHO_APP_PORT”. But “nginx” configuration is pretty well-understood and I’ll leave that as an exercise for the reader (but feel free to get in touch if you need any help).

So that’s how it works. I have about half a dozen Dancer2 apps running on my new server using this layout. And knowing that I have standardised service daemon control scripts and “dotenv” files makes looking after them all far easier.

And before anyone mentions it, yes, I should rewrite them so they’re all Docker images. That’s a work in progress. And I should run them on some serverless system. I know my systems aren’t completely up to date. But we’re getting there.

If you have any suggestions for improvement, please let me know.

link github issue in perldelta

Perl commits on GitHub

Published by haarg on Friday 24 May 2024 16:23

link github issue in perldelta

update META for 5.40.0-RC1

Perl commits on GitHub

Published by haarg on Friday 24 May 2024 16:04

update META for 5.40.0-RC1

add RC1 note to patchlevel.h

Perl commits on GitHub

Published by haarg on Friday 24 May 2024 16:04

add RC1 note to patchlevel.h

update perlhist.pod

Perl commits on GitHub

Published by haarg on Friday 24 May 2024 16:04

update perlhist.pod

From Huh to Hero: Demystifying Perl in Two Easy Lessons (Part 1)

Perl on Medium

Published by Chaitanya Agrawal on Thursday 23 May 2024 01:32

Have you ever stumbled across the word “Perl” and thought, “Huh? What’s that?” Or maybe you’ve heard whispers of its cryptic symbols and…

Help! Does anyone here have any experience with Net::DBus?

r/perl

Published by /u/Flashy_Boot on Wednesday 22 May 2024 21:10

I've been trying to debug an issue for 3 days now, am getting nowhere, and am about to headbutt my laptop. If anyone's done any heavy lifting with Net::DBus then, for the sake of my laptop, I'd really appreciate the help!

Problem description: I have a hash table with a bunch of keys. The values relating to those keys are of different types (as in, I've cast them to dbus types). So:

my $testhash = {};
$testhash->{"xesam:albumArtist"} = [dbus_string("Tom Waits")];
$testhash->{"xesam:album"}       = dbus_string("Mule Variations");
$testhash->{"xesam:trackNumber"} = dbus_int32(1);
$testhash->{"xesam:artist"}      = [dbus_string("Tom Waits")];
$testhash->{"xesam:title"}       = dbus_string("Big in Japan");
$testhash->{"mpris:artUrl"}      = dbus_string("file://mnt/storage/Music/mp3/Tom Waits/Mule Variations/folder.jpg");
$testhash->{"mpris:length"}      = dbus_int64(64182857);
$testhash->{"mpris:trackid"}     = dbus_object_path("/0");
$testhash->{"xesam:url"}         = dbus_string("file://mnt/storage/Music/mp3/Tom Waits/Mule Variations/01 - Big in Japan.mp3");

I've created a DBus service, and have successfully implemented a method that returns that hash table ($IFACE is the interface name I'm using for all of my test methods):

dbus_method("ReturnDynamicHash", [], [["dict", "string", ["variant"]]], $IFACE);
sub ReturnDynamicHash {
    my $self = shift;
    print "Object: ReturnDynamicHash called.\n";
    my $return = {};
    my @keys = keys(%{$testhash});
    my $count = scalar(@keys);
    if ($count) {
        foreach my $key (@keys) {
            $return->{$key} = $testhash->{$key};
        }
    }
    return $return;
}

As a DBus method, this works perfectly:

% dbus-send ....... .ReturnDynamicHash
array [
   dict entry( xesam:trackNumber variant int32 1 )
   dict entry( mpris:trackid variant /0 )
   dict entry( xesam:albumArtist variant array [ Tom Waits ] )
   dict entry( xesam:album variant Mule Variations )
   dict entry( mpris:length variant int64 64182857 )
   dict entry( xesam:url variant file://mnt/storage/Music/mp3/Tom Waits/Mule Variations/01 - Big in Japan.mp3 )
   dict entry( mpris:artUrl variant file://mnt/storage/Music/mp3/Tom Waits/Mule Variations/folder.jpg )
   dict entry( xesam:artist variant array [ Tom Waits ] )
   dict entry( xesam:title variant Big in Japan )
]

However, the interface I'm implementing requires that a DBus Property return that hashtable, not a method:

dbus_property("StaticHashProperty", [["dict", "string", ["variant"]]], "read", $IFACE);
sub StaticHashProperty {
    print "Object: StaticHashProperty accessed.\n";
    my $return = {};
    my @keys = keys(%{$testhash});
    my $count = scalar(@keys);
    if ($count) {
        foreach my $key (@keys) {
            $return->{$key} = $testhash->{$key};
        }
    }
    return $return;
}

and this doesn't work.

From the dbus-send client I get

Error org.freedesktop.DBus.Error.NoReply: Remote peer disconnected 

and from the Perl server stderr i get:

dbus[93409]: Array or variant type requires that type array be written, but object_path was written.
The overall signature expected here was 'a{sas}' and we are on byte 4 of that signature.
  D-Bus not built with -rdynamic so unable to print a backtrace
Aborted (core dumped)

Now, this error is coming from libdbus itself, not the Perl wrapper (though could of course still be a bug in the Perl module that's causing the error). It seems to have entirely the wrong signature ( a{sas}, not a{sv} as defined above the Property method) and therefore appears to be complaining that the type of one of the values is wrong (each time I run it I get a slightly different error; I think it's deducing the signature from the first key-value pair it pulls from the hash and assumes they should all be the same - so if the first pair it pulls has a uint64 value, then it complains that the next pair doesn't also have a uint64 value).

Since the Method works I know Net::DBus can handle these sorts of return values, but for some reason, as a property, it just isn't working. I also know that other applications do implement this interface, including this Property, successfully, so I know this isn't a limitation of DBus.

I've been looking at the code in Net::DBus that handles serialization, assuming there must be some difference between how Properties and Methods are handled, but can't see anything obvious.

Anyone? Any idea? Literally anything at all? Thank you!!!!!

submitted by /u/Flashy_Boot
[link] [comments]

Creating new Perl composite actions from a repository template

dev.to #perl

Published by Juan Julián Merelo Guervós on Wednesday 22 May 2024 12:06

So you want to create a quick-and-dirty GitHub actions that does only one thing and does it well, or glues together several actions, or simply to show off a bit at your next work interview. Here's how you can do it.
Let me introduce you to composite GitHub actions, one of the three types that exist (the others are JavaScript GHAs and container-based GHAs) and maybe one of the least known. However, they have several things going for them. First, they have low latency: there is no need to download a container or to set up a JS environment. Second, they are relatively easy to set up: they can be self-contained, with everything needed running directly from the description of the GitHub action. Third, you can leverage all the tools installed on the runner, like bash, compilers, build tools... or Perl, which can be all that and much more.
Easy as that already is, it is easier still if you have a bit of boilerplate you can use directly or adapt to your own purposes. That is what guided the creation of the template for a composite GitHub action based on Perl. It is quite minimalistic, but let me walk you through what it includes so that you can use it more easily.

First, this action.yml describes what the action does and how it does it:

name: 'Hello Perl'
description: 'Perl Github Action Template'
inputs:
  template-input:  # Change this
    description: 'What it is about'
    required: false # or not
    default: 'World'
runs:
  using: "composite"
  steps:
    - uses: actions/checkout@v4
    - run: print %ENV;
      shell: perl {0}
    - run: ${GITHUB_ACTION_PATH}/action.pl
      shell: bash

You will have to customize inputs as well as outputs here (and, of course, name and description), but the steps are already baked in. It even includes the correct path to the (downloaded) GitHub action: when the action runs against a repository, the path where the GHA has been downloaded is stored in an environment variable, GITHUB_ACTION_PATH. You can access it that way.

In general, that script might need external libraries, even your own, which you might have moved out of the script for testing purposes (everything must be tested). That is why the action also contains App::FatPacker as a dependency; that's a bundler that will put the source (action.src.pl), your library (lib/Action.pm) and every other module you might have used into a single file, the action.pl referenced above.

A Makefile is also provided, so that, after installing fatpack, typing make will process the source and generate the script.

And that's essentially it. Use the template and create your new (composite) action in just a few minutes!

The London Perl & Raku Workshop 2024

perl.com

Published on Wednesday 22 May 2024 11:45

LPW is Back

We’re happy to confirm the return of The London Perl & Raku Workshop after a five year break:

  • When: Saturday 26th October 2024
  • Where: The Trampery, 239 Old Street, London EC1V 9EY

This year’s workshop will be held at The Trampery, Old Street, a dedicated modern event space in central London. We have hired both The Ballroom and The Library, allowing us to run a main track for up to 160 attendees, and a second smaller track for up to 35 attendees.

The Trampery is located a two minute walk from the Northern Line’s Old Street tube station in central London. The Northern Line has stops at most of the major train stations in London, or trivial links to others, so we recommend taking the tube to get to the venue.

Sign Up & Submit Talks

If you haven’t already, please sign up and submit talks using the official workshop site

We welcome proposals relating to Perl 5, Raku, other languages, and supporting technologies. We may even have space for a couple of talks entirely tangential as we will have two tracks.

Talks may be long (40 mins), short (20 mins), or very short (aka lightning, 5 mins) but we would prefer talks to be on the shorter side and will likely prioritise 20 minute talks. We would also be pleased to accept proposals for tutorials and discussions. The deadline for submissions is 30th September.

We would really like to have more first time speakers. If you would like help with a talk proposal, and/or the talk itself, let us know - we’ve got people happy to be your talk buddy!

Thanks to this year’s sponsors, without whom LPW would not happen:

If you would like to sponsor LPW then please have a look at the options here

Current status of using the OpenAI API in Perl: good news!

r/perl

Published by /u/OvidPerl on Wednesday 22 May 2024 07:08

The following is a quick ramble before I get into client work, but it might give you an idea of how AI is being used today in companies. If you have any questions about Generative AI, let me know!

The work to make the OpenAI API (built on Nelson Ferraz's OpenAPI::Client::OpenAI module) is going well. I now have working example of transcribing audio using OpenAI's whisper-1 model, thanks to the help of Rabbi Veesh.

Using a 7.7M file which is about 16 minutes long, the API call takes about 45 seconds to run and costs $0.10 USD to transcribe. The resulting output has 2,702 words and seems accurate.

Next step is using an "instruct" model to appropriately summarize the results ("appropriate" varies wildly across use cases). Fortunately, we already have working examples of this. Instruct models tend to be more correct in their output than chat models, assuming you have a well-written prompt. Anecdotally, they may have smaller context windows because they're not about remembering a long conversation, but I can't prove that.

Think about the ROI on this. The transcription and final output will cost about 11 cents and take a couple of minutes. You'll still need someone to review it. However, think of the relatively thankless task of taking meeting minutes and producing a BLUF email for the company. Hours of expensive human time become minutes of cheap AI time. Multiply this one task by the number of times per year you have to do it. Further, consider how many other "simple tasks" can be augmented via AI and you'll see why it's becoming so powerful. A number of studies show that removing many of these simple tasks from people's plates, allowing them to focus on the "big picture," is resulting in greater morale and productivity.

When building AI apps, OpenAPI::Client::OpenAI should be thought of as a "low-level" module, similar to DBIx::Class. It should not be used directly in your code, but hidden behind an abstraction layer. Do not use it directly.

I tell my clients that their initial work with AI should be a tactical "top-down mandate/bottom-up implementation." This gives them the ability to start learning how AI can be used in different parts of their organization, given that marketing, HR, IT, and other departments all have different needs.

Part of this tactical approach is learning how to build AI data pipelines. With OpenAI publishing their OpenAPI spec, and with Perl using that, we can bring much of the power of enterprise-level AI needs to companies using Perl. It's been far too long that Perl has languished in the AI space.

Next, I need to investigate doing this with Gemini and/or Claude, but not now.


Note, if you're not familiar with the BLUF format, it's a style of writing email that is well-suited for messages sent to many people in a company. It's "bottom-line up front", so that people can see the main point and decide if the rest of the email is relevant to them. It makes for very efficient email communication.

submitted by /u/OvidPerl
[link] [comments]

GitHub Sponsors 💰 and Perl 🐫

dev.to #perl

Published by Gabor Szabo on Tuesday 21 May 2024 18:50

I was hoping to be able to write something more interesting about the GitHub Sponsors of various Perl developers, but so far I have only found a few people, and none of them (well, except myself, if I can still count myself in the group) has any income via GitHub Sponsors.

Is it because they don't promote it enough?

Is it because the Explore GitHub Sponsors page does not support the Perl/CPAN ecosystem?

Anyway, it would be really nice to see a few people starting to sponsor these people. That would be an encouragement to them, and maybe it would also encourage others to support them.

trapd00r (magnus woldrich) · GitHub

linux, perl and inline skating enthusiast 🔌. trapd00r has 260 repositories available. Follow their code on GitHub.

favicon github.com

magnus woldrich

davorg (Dave Cross) · GitHub

Making things with software since 1984. davorg has 200 repositories available. Follow their code on GitHub.

favicon github.com

Dave Cross

giterlizzi (Giuseppe Di Terlizzi) · GitHub

IT Senior Security Consultant & Full Stack Developer - giterlizzi

favicon github.com

Giuseppe Di Terlizzi

nigelhorne (Nigel Horne) · GitHub

nigelhorne has 108 repositories available. Follow their code on GitHub.

favicon github.com

Nigel Horne

michal-josef-spacek (Michal Josef Špaček) · GitHub

michal-josef-spacek has 529 repositories available. Follow their code on GitHub.

favicon github.com

Michal Josef Špaček

szabgab (Gábor Szabó) · GitHub

Teaching Rust, Python, Git, GitHub, Docker, test automation. - szabgab

favicon github.com

Gábor Szabó

Follow me / Sponsor me

If you'd like to read more such posts, don't forget to upvote this one, to follow me here on DEV.to and to sponsor me via GitHub Sponsors.

Installing CPAN modules from git

dev.to #perl

Published by Tib on Tuesday 21 May 2024 18:18

(picture from elevate)

For various reasons, you might want to install CPAN modules from a git repository.

It can be because a git repository is somehow ahead of CPAN:

  • A fix was merged in official git repository but never released to CPAN
  • A branch or a fork contains some valuable changes (this very-little-but-absolutely-needed fix)

Or it can be because the modules are actually not on CPAN: not public and not in an alternative/private CPAN (see Addendum), or simply only "experiments".

But this post is not meant to discuss the "why" but instead mainly to share, technically, the "how" you could do that 😀

I tested various syntaxes and installers and will now share some working examples.

☝️ Before we continue, be sure to upgrade your installers (App::cpm and App::cpanminus) to their latest versions

Installing from command line with cpm

Installing with cpm is straightforward:

$ cpm install https://github.com/plack/Plack.git --verbose
33257 DONE fetch     (0.971sec) https://github.com/plack/Plack.git
33257 DONE configure (0.033sec) https://github.com/plack/Plack.git
33257 DONE resolve   (0.031sec) Clone -> Clone-0.46 (from MetaDB)
...
33257 DONE install   (0.364sec) URI-5.28
33257 DONE install   (0.046sec) https://github.com/plack/Plack.git
31 distributions installed.

It works the same way with the SSH syntax, git@github.com:plack/Plack.git:

$ cpm install git@github.com:plack/Plack.git --verbose
64383 DONE fetch     (2.498sec) git@github.com:plack/Plack.git
64383 DONE configure (0.039sec) git@github.com:plack/Plack.git
...
64383 DONE install   (0.045sec) git@github.com:plack/Plack.git
31 distributions installed.

Installing from command line with cpanminus

Installing with cpanm is not harder:

$ cpanm https://github.com/plack/Plack.git
Cloning https://github.com/plack/Plack.git ... OK
--> Working on https://github.com/plack/Plack.git
...
Building and testing Plack-1.0051 ... OK
Successfully installed Plack-1.0051
45 distributions installed

Installing from cpanfile

The correct syntax is the following (thank you @haarg):

requires 'Plack', git => 'https://github.com/plack/Plack.git', ref => 'master';

(ref => 'master' is optional)

And it would just work later with cpm:

$ cpm install --verbose
Loading requirements from cpanfile...
33257 DONE fetch     (0.971sec) https://github.com/plack/Plack.git
33257 DONE configure (0.033sec) https://github.com/plack/Plack.git
33257 DONE resolve   (0.031sec) Clone -> Clone-0.46 (from MetaDB)
...
33257 DONE install   (0.364sec) URI-5.28
33257 DONE install   (0.046sec) https://github.com/plack/Plack.git
31 distributions installed.

⚠️ Despite being a cpanfile, please note the use of cpm

Installing from cpmfile

Let's write our first cpmfile and save it as cpm.yml:

prereqs:
  runtime:
    requires:
      Plack:
        git: https://github.com/plack/Plack.git
        ref: master

And then it would just work with cpm:

$ cpm install --verbose
Loading requirements from cpm.yml...
66419 DONE resolve   (0.000sec) Plack -> https://github.com/plack/Plack.git@master (from Custom)
66419 DONE fetch     (1.695sec) https://github.com/plack/Plack.git
66419 DONE configure (0.034sec) https://github.com/plack/Plack.git
...
66419 DONE install   (0.023sec) https://github.com/plack/Plack.git
31 distributions installed.

Beware of "incomplete" repositories

Releases on CPAN are standardized and generally contain what is needed for installers, but distributions living in git repositories are more for development and very often not in a "ready to install" state.

(thank you @karenetheridge)

There are some limitations that you can encounter:

  • cpm would refuse to install if no META file is found (but cpanm would be OK with that)
  • cpm would refuse to install if no Makefile.PL nor Build.PL is found, except if x_static_install: 1 is declared in META (cpanm would still refuse)

Should I mention the repositories that contain only a dist.ini? (used by authors to generate everything else)

And you would run into similar trouble with distributions that use Module::Install but have not versioned it.

Conclusion

You should probably not rely too much on the "install from git" method, but it can still provide a handy way to install modules to test fixes or experiments.

And now with this post you should have good examples of “how” you can achieve that.

Addendum

For alternative/private CPAN, several tools can come to your rescue:

Deploying Dancer Apps

Perl Hacks

Published by Dave Cross on Sunday 19 May 2024 17:39

Over the last week or so, as a background task, I’ve been moving domains from an old server to a newer and rather cheaper server. As part of this work, I’ve been standardising the way I deploy web apps on the new server and I thought it might be interesting to share the approach I’m using and talking about a couple of CPAN modules that are making my life easier.

As an example, let’s take my Klortho app. It dispenses useful (but random) programming advice. It’s a Dancer2 app that I wrote many years ago and have been lightly poking at occasionally since then. The code is on GitHub and it’s currently running at klortho.perlhacks.com. It’s a simple app that doesn’t need a database, a cache or anything other than the Perl code.

Dancer apps are all built on PSGI, so they have all of the deployment flexibility you get with any PSGI app. You can take exactly the same code and run it as a CGI program, a mod_perl handler, a FastCGI program or as a stand-alone service running behind a proxy server. That last option is my favourite, so that’s what I’ll be talking about here.
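To make the PSGI part concrete, here is a minimal hand-rolled PSGI app (an illustrative sketch, not Klortho's actual code):

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# A PSGI app is just a code ref that takes the request environment
# and returns [status, headers, body]. A Dancer2 app.psgi ultimately
# hands the server a code ref with exactly this interface, which is
# why all the deployment options apply equally.
my $app = sub {
    my ($env) = @_;
    return [
        200,
        [ 'Content-Type' => 'text/plain' ],
        [ "Hello from PSGI\n" ],
    ];
};

# Exercise it directly, the way a PSGI server would per request:
my $res = $app->({});
print $res->[2][0];
```

Any PSGI-capable server (plackup, Starman, ...) can run a file like this unchanged.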

Starting a service daemon for a PSGI app is simple enough – just running “plackup app.psgi” is all you really need. But you probably won’t get a particularly useful service daemon out of that. For example, you’ll probably get a non-forking server that will only respond to a single request at a time. It’ll be good enough for testing, but you’ll want something more robust for production. So you’ll want to tell “plackup” to use Starman or something like that. And you’ll want other options to tell the service which port to run on. You’ll end up with quite a complex command line to start the server. So, if you’re anything like me, you’ll put it all in a script which gets added to the code repo.

But it’s still all a bit amateur. Linux has a flexible and sophisticated framework for starting and stopping service daemons. We should probably look into using that instead. And that’s where my first module recommendation comes into play – Daemon::Control. Daemon::Control makes it easy to create service daemon control scripts that fit in with the standard Linux way of doing things. For example, my Klortho repo contains a file called klortho_service which looks like this:

#!/usr/bin/env perl

use warnings;
use strict;
use Daemon::Control;

use ENV::Util -load_dotenv;
 
use Cwd qw(abs_path);
use File::Basename;
 
Daemon::Control->new({
  name      => ucfirst lc $ENV{KLORTHO_APP_NAME},
  lsb_start => '$syslog $remote_fs',
  lsb_stop  => '$syslog',
  lsb_sdesc => 'Advice from Klortho',
  lsb_desc  => 'Klortho knows programming. Listen to Klortho',
  path      => abs_path($0),
 
  program      => '/usr/bin/starman',
  program_args => [ '--workers', 10, '-l', ":$ENV{KLORTHO_APP_PORT}",
                    dirname(abs_path($0)) . '/app.psgi' ],
 
  user  => $ENV{KLORTHO_OWNER},
  group => $ENV{KLORTHO_GROUP},
 
  pid_file    => "/var/run/$ENV{KLORTHO_APP_NAME}.pid",
  stderr_file => "$ENV{KLORTHO_LOG_DIR}/error.log",
  stdout_file => "$ENV{KLORTHO_LOG_DIR}/output.log",
 
  fork => 2,
})->run;

This code takes my hacked-together service start script and raises it to another level. We now have a program that works the same way as other daemon control programs like “apachectl” that you might have used. It takes command line arguments, so you can start and stop the service (with “klortho_service start”, “klortho_service stop” and “klortho_service restart”) and query whether or not the service is running with “klortho_service status”. There are several other options, which you can see by running “klortho_service” without any arguments. Notice that it also writes the daemon’s output (including errors) to files under the standard Linux logs directory. Redirecting those to a more modern logging system is a task for another day.

Actually, thinking about it, this is all like the old “System V” service management system. I should see if there’s a replacement that works with “systemd” instead.

And if you look at line 7 in the code above, you’ll see the other CPAN module that’s currently making my life a lot easier – ENV::Util. This is a module that makes it easy to work with “dotenv” files. If you haven’t come across “dotenv” files, here’s a brief explanation – they’re files that are tied to your deployment environments (development, staging, production, etc.) and they contain definitions of environment variables which are used to control how your software acts in the different environments. For example, you’ll almost certainly want to connect to a different database instance in your different environments, so you would have a different “dotenv” file in each environment which defines the connection parameters for the appropriate database in that environment. As you need different values in different environments (and, also, because you’ll probably want sensitive information like passwords in the file) you don’t want to store your “dotenv” files in your source code control. But it’s common to add a file (called something like “.env.sample”) which contains a list of the required environment variables along with sample values.

My Klortho program doesn’t have a database. But it does need a few environment variables. Here’s its “.env.sample” file:

export KLORTHO_APP_NAME=klortho
export KLORTHO_OWNER=someone
export KLORTHO_GROUP=somegroup
export KLORTHO_LOG_DIR=/var/log/$KLORTHO_APP_NAME
export KLORTHO_APP_PORT=9999

And near the top of my service daemon control program, you’ll see the line:

use ENV::Util -load_dotenv;

That looks to see if there’s a “.env” file in the current directory and, if it finds one, it is loaded and the contents are inserted in the “%ENV” hash – from where they can be accessed by the rest of the code.
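The core idea can be sketched in a few lines (a simplified illustration with a hypothetical helper; ENV::Util itself handles quoting and other edge cases):

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Simplified sketch of what a dotenv loader does: parse KEY=VALUE
# lines and copy them into %ENV. ENV::Util does this (and more) for
# a ".env" file found in the current directory.
sub load_dotenv_string {
    my ($text) = @_;
    for my $line ( split /\n/, $text ) {
        next if $line =~ /^\s*(?:#|$)/;    # skip comments and blank lines
        $line =~ s/^\s*export\s+//;        # tolerate "export KEY=value"
        $ENV{$1} = $2 if $line =~ /^(\w+)=(.*)$/;
    }
}

load_dotenv_string("export KLORTHO_APP_NAME=klortho\nKLORTHO_APP_PORT=9999\n");
print "$ENV{KLORTHO_APP_NAME} listens on port $ENV{KLORTHO_APP_PORT}\n";
```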

There’s one piece of the process missing. It’s nothing clever. I just need to generate a configuration file so the proxy server (I use “nginx”) reroutes requests to klortho.perlhacks.com so that they’re processed by the daemon running on whatever port is configured in “KLORTHO_APP_PORT”. But “nginx” configuration is pretty well-understood and I’ll leave that as an exercise for the reader (but feel free to get in touch if you need any help).

So that’s how it works. I have about half a dozen Dancer2 apps running on my new server using this layout. And knowing that I have standardised service daemon control scripts and “dotenv” files makes looking after them all far easier.

And before anyone mentions it, yes, I should rewrite them so they’re all Docker images. That’s a work in progress. And I should run them on some serverless system. I know my systems aren’t completely up to date. But we’re getting there.

If you have any suggestions for improvement, please let me know.

The post Deploying Dancer Apps first appeared on Perl Hacks.

World Uncovered is cool

rjbs forgot what he was saying

Published by Ricardo Signes on Sunday 19 May 2024 12:00

Years ago, I found an iOS app called World Uncovered. I used it for a while, then forgot about it, then started using it again. It’s pretty cool, and I keep telling people about it, so I thought I’d write a post about it.

It’s like this: you let it track your movements the same way that a fitness app like RunKeeper would, and instead of telling you how many steps you’re getting in, it tells you where you’ve ever walked, ever. Keeping in mind that I sometimes have it turned off, and forgot about it for years, check out my map for Philadelphia:

Philly Uncovered

Sometimes I joke with my coworkers that I have never been to West Philadelphia. It’s not true, but… I mean, you can see my usual stomping grounds. You can also see the little excursions up toward Bethlehem, off to Camden, and down… well, I don’t know what I was doing down in South Philly, but that little diagonal below the Italian Market is definitely representative of a few trips to Milk Jawn.

Like I said, I’d forgotten all about the app until February, when I was in Vienna for a few days. For some reason I was flipping through my phone’s screens and saw it and thought, “I should log this trip!” It was great fun, and it made me realize all the trips I had failed to log for the past few years: Norway, Brussels, England, Australia, and others. I like to see new places, and these little marked-up maps are a fun memento. Here’s Vienna:

Vienna Uncovered

A few months later, I was in Lisbon, which you’ll know if you’ve been keeping up with my posts here. Lisbon was fun because I took a train off to the west coast, so I got a map with two hubs of activity with a long straight train ride between them:

Lisbon Uncovered

Lisbon was also where I finally got a “landmark” achievement. The app has its own (somewhat eccentric, I think) collection of landmarks, and by visiting them you can tick it off your list. If you visit enough, you get an achievement. Is this a good way to plan your travel? No. Is it fun if it happens anyway? Well, for me it was. I got Belém Tower.

Weirdly, there are no World Uncovered landmarks in Philly or in Melbourne. Still, I’ll keep an eye out for landmarks on future trips. I also wouldn’t mind getting a few of the “passport” achievements for visiting new countries. That’ll take some time, though!

As for the app itself, it feels sort of dated. It hasn’t had an update in three or four years, and the UI is kind of clunky. The backup feature constantly needs to be reauthenticated with Dropbox, and unless you knew to turn on GPX trip mapping from day one, most of your backup will be in an encrypted zip file. So, I live in modest fear of losing all this data someday, and might look at some better way to do this. (Maybe log all my GPX data in some other app and then import it here later? I don’t know.)

I did have some email interaction with the developer recently, who told me that the app isn’t dead, just done. I can respect that, given all the software of mine that I feel is just done. I’ll keep enjoying it and not worry about its future.

Oh, and as for how I’m enjoying it, I should talk about one more thing: shorties.

Philadelphia is a grid. I like this, because it makes the city much easier for me to navigate and think about. The weird layout of Boston was sort of charming, but also kind of a total pain. I like the grid. In Center City, the grid runs from 30th Street at the west to Front Street (which is basically 1st Street) at the east. Right in the middle, Broad Street (which is basically 14th) divides the city in half. The other main north/south streets are numbered 2-29. Then the major east/west streets run from the Schuylkill River at the west to the Delaware River at the east. They’re mostly named after trees, and there are few enough that it’s pretty easy to name them all.

The thing is, there are lots of other streets in the grid. For example, between 11th and 12th is Marvine Street, running north/south. It’s not always there, though, just sometimes. It runs from Catharine north to Bainbridge, but then stops. It shows up again way further north at Race, running just one block to Vine, and then showing up again later. These streets have been dubbed “shorties” by my excellent colleague Lacey. They’re great, with lots of character and not much traffic. A hundred years ago, each length of shorty might have its own name, but they were rationalized in the 20th century. The street halfway between 8th and 9th is Darien. If it’s between 8th and 9th, but west of center, it’s Schell. If it’s east of center, it’s Mildred. Some blocks have one, two, or all three between them. (Marvine is a funny one. Between Lombard and Walnut, it’s Quince instead. I bet there’s history.)

Anyway, I would like to walk down every shorty in center city. This is not hard, it’s just going to take a lot of time, and a lot of consultation with the map. World Uncovered makes that easy. I think my next plan for tackling these is to start picking one block between home and work and on the way in, hit all of its shorties. Then another one on the way home. That will only get me about a quarter of the city, tops, but it’s a start.

The real treat is when I stop at a street that’s really rich in shorties, especially interior second-order ones, like this gem:

10th and Locust

Honestly, look at that tiny length of Irving Street, which often connects two numbered streets east to west. Here, it’s a tiny little alley inside the block, only reachable by another shorty. What a city!

(cdxcvi) 6 great CPAN modules released last week

Niceperl

Published by Unknown on Sunday 19 May 2024 09:06

Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater than or equal to 12.

  1. Devel::CheckOS - a script to package Devel::AssertOS modules with your code.
    • Version: 2.02 on 2024-05-15, with 17 votes
    • Previous CPAN version: 2.01 was 13 days before
    • Author: DCANTRELL
  2. Log::Contextual - Simple logging interface with a contextual log
    • Version: 0.009000 on 2024-05-15, with 13 votes
    • Previous CPAN version: 0.008001 was 6 years, 3 months, 27 days before
    • Author: HAARG
  3. MetaCPAN::Client - A comprehensive, DWIM-featured client to the MetaCPAN API
    • Version: 2.032000 on 2024-05-15, with 25 votes
    • Previous CPAN version: 2.031001 was 2 months, 4 days before
    • Author: MICKEY
  4. Mojolicious - Real-time web framework
    • Version: 9.37 on 2024-05-13, with 497 votes
    • Previous CPAN version: 9.36 was 2 months, 5 days before
    • Author: SRI
  5. PDF::API2 - Create, modify, and examine PDF files
    • Version: 2.047 on 2024-05-18, with 30 votes
    • Previous CPAN version: 2.045 was 7 months, 23 days before
    • Author: SSIMMS
  6. Sub::Override - Perl extension for easily overriding subroutines
    • Version: 0.11 on 2024-05-14, with 15 votes
    • Previous CPAN version: 0.10 was 5 months, 9 days before
    • Author: MVSJES

MariaDB 10 and SQL::Translator::Parser

blogs.perl.org

Published by russbrewer on Saturday 18 May 2024 19:44

Following up on my previous post (MariaDB 10 and Perl DBIx::Class::Schema::Loader), I wanted to try the 'deploy' feature to create database tables from Schema/Result classes.

I was surprised that I could not create a table in the database when a timestamp field had a default of current_timestamp(). The problem was that the generated CREATE TABLE entry placed quotes around 'current_timestamp()', causing an error and a rejected entry.

As mentioned in a previous post, I had created file SQL/Translator/Parser/MariaDB.pm as part of the effort to get MariaDB 10 clients to work correctly with DBIx::Class::Schema::Loader. Initially it was a clone of the MySQL.pm file with name substitutions. To correct the current_timestamp problem I added a search/replace in the existing create_field subroutine in the MariaDB.pm file to remove the quotes.

# current_timestamp (possibly as a default entry for a
# new record field) must not be quoted in the CREATE TABLE command
# provided to the database. Convert 'current_timestamp()'
# to current_timestamp() (no quotes) to prevent CREATE TABLE failure.

if ( $field_def =~ /'current_timestamp\(\)'/ ) {
    $field_def =~ s/'current_timestamp\(\)'/current_timestamp\(\)/;
}

This entry is made just before the subroutine returns $field_def. Now $schema->deploy(); works correctly to create the entire database.
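The effect of that substitution can be seen in isolation (the sample $field_def below is invented for illustration):

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Demonstrates the quote-stripping fix on its own: the DEFAULT
# clause must reach MariaDB unquoted. The sample field definition
# here is invented for illustration.
my $field_def = q{`updated` timestamp NOT NULL DEFAULT 'current_timestamp()'};
$field_def =~ s/'current_timestamp\(\)'/current_timestamp()/;
print "$field_def\n";
```

This prints the field definition with an unquoted current_timestamp() default, which MariaDB accepts.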

The code shown below was tested satisfactorily to generate CREATE TABLE output (on a per-table or multi-table basis) suitable for exporting (using tables Task and User as example table names):

my $schema = db_connect();

my $trans  = SQL::Translator->new (
     parser      => 'SQL::Translator::Parser::DBIx::Class',
     quote_identifiers => 1,
     parser_args => {
         dbic_schema => $schema,
         add_fk_index => 1,
         sources => [qw/
           Task User
         /],
     },
     producer    => 'MariaDB',
    ) or die SQL::Translator->error;

my $out = $trans->translate() or die $trans->error;

I believe the SQL/Translator/Parser/MySQL.pm file would benefit from this same code addition but I have not tested using a MySQL database and DBD::mysql.

MariaDB 10 and Perl DBIx::Class::Schema::Loader

blogs.perl.org

Published by russbrewer on Saturday 18 May 2024 18:15

Fixing DBIx::Class::Schema::Loader For Use With MariaDB 10 Client Software

I recently set up a virtual server running Rocky Linux 9 as a client from which to query a remote MariaDB database. I used perlbrew to install Perl 5.38.2 and installed client-related RPMs for MariaDB 10.5. I installed DBIx::Class as a relational mapper that can create Perl Schema Result Classes for each table in the database. If you are new to DBIx::Class, you can review its purpose and features in DBIx::Class::Manual::Intro. The Result Classes used by Perl to query the database are stored on the client server in a schema directory. They are created with the DBIx::Class::Schema::Loader module.

I only work with databases as a home hobbyist, but I have successfully used DBIx::Class and related DBIx::Class::Schema::Loader from a CentOS 7 server running Perl 5.24.1 with MariaDB 5.5 client related RPMs. My intent was to replace a CentOS 7 virtual server with a Rocky 9 virtual server and upgrade from MariaDB 5 client to a MariaDB 10 client.

The CentOS 7 client used DBD::mysql which works fine with MariaDB 5 but would not install on the Rocky server which used MariaDB 10 RPMs. So I installed DBD::MariaDB and DBIx::Class::Storage::DBI::MariaDB on the client Rocky server.

To create my relationship classes, I ran the schema loader and was surprised that DBIx::Class::Schema::Loader did not produce correct Result Classes on the Rocky 9 server. Later I found that it did not work properly on a Rocky 8 server either. The common issue was MariaDB version 10 compatibility with DBIx::Class::Schema::Loader (and its dependencies).

I am not sure of everything that was wrong with the Result Classes, but the obvious problems were missing primary key entries, missing auto_increment entries and missing unsigned integer entries in all Result Classes that should have them.

I found the problem described (but not resolved) in November 2023 in this short Perl Monks thread.

Below I describe the steps I took to get DBIx::Class::Schema::Loader and its dependencies to produce Result Classes on the MariaDB 10 client servers (Rocky 8 and 9) identical to those produced on the MariaDB 5 client CentOS 7 server, remembering that all of them were creating the Result Classes by connecting to the same remote database server. Although not significant for this issue, the remote database was running smoothly on a Rocky 8 server using MariaDB 10.3.

Essential Edits Directly Affecting Schema Loader

First I noticed that the DBIx/Class/Schema/Loader/DBI directory had a mysql.pm file but had no MariaDB.pm counterpart. So I made a copy of mysql.pm named MariaDB.pm and then I edited the new MariaDB.pm file to change references to mysql and MySQL to MariaDB. Note that almost all the edits can use MariaDB as the substitute for mysql and MySQL but entries in the "_extra_column_info" subroutine must use lowercase "mariadb". This is because they refer to lowercase terminology expected by DBD::MariaDB as can be seen in site_perl/5.38.2/x86_64-linux/DBD/MariaDB.pm.

In DBIx/Class/Schema/Loader/DBI/MariaDB.pm, for subroutine "_extra_column_info", keep references to mariadb in lowercase. For example:

  • mysql_is_auto_increment should become mariadb_is_auto_increment
  • $dbi_info->{mysql_type_name} should become $dbi_info->{mariadb_type_name}

In all other cases I used MariaDB (not mariadb) as the substitution.
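The case-sensitive substitution rules above can be sketched as a small helper (illustrative only; in practice I edited the files by hand):

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Sketch of the substitution rules: DBD-facing attribute names such
# as mysql_is_auto_increment become lowercase mariadb_*, while every
# other mysql/MySQL reference becomes MariaDB.
sub mariadb_ize {
    my ($line) = @_;
    $line =~ s/\bmysql_(\w+)/mariadb_$1/g;    # DBD attributes stay lowercase
    $line =~ s/MySQL/MariaDB/g;
    $line =~ s/\bmysql\b/MariaDB/g;
    return $line;
}

print mariadb_ize('return $dbi_info->{mysql_type_name};'), "\n";
print mariadb_ize('package DBIx::Class::Schema::Loader::DBI::mysql;'), "\n";
```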

DBIx/Class/SQLMaker/

I then realized that DBIx::Class::Schema::Loader depends on DBIx::Class::SQLMaker which supports a number of dependent modules such as MySQL.pm, MSSQL.pm, SQLite.pm and Oracle.pm, but did not have a corresponding MariaDB.pm file. So I made a copy of MySQL.pm named MariaDB.pm and then I edited the new MariaDB.pm file to change references to mysql and MySQL to MariaDB.

After the edits, the Schema Loader started working correctly: running it created Schema Result Classes that matched the output of the older MariaDB 5 client.

Additional Edits not Directly Affecting Schema Result Classes Generation

Although apparently not directly related to the functioning of DBIx::Class::Schema::Loader, there are several other files that might need similar editing to provide full MariaDB functionality via DBIx::Class. Your Perl version may differ, but for me these modules are:

  • DBIx/Class/PK/Auto/
  • site_perl/5.38.2/SQL/Translator/Generator/DDL/
  • site_perl/5.38.2/SQL/Translator/Parser/
  • site_perl/5.38.2/SQL/Translator/Parser/DBI/
  • site_perl/5.38.2/SQL/Translator/Producer/

The directories contain references to other database types but do not provide a MariaDB.pm file. Inside each of the above directories, I copied the mysql.pm file and named it MariaDB.pm. Then I edited each new MariaDB.pm file to change mysql and MySQL entries to MariaDB entries. These additional MariaDB.pm files are not required to get the loader to simply create the Schema Result Classes from a database. I need to test to see what effect adding the MariaDB.pm files to these directories has on DBIx::Class functionality.

The following four files make internal references to mysql or MySQL but did not have a reference to MariaDB. Therefore, I edited each file:

DBIx::Class::Storage::DBI::MariaDB.pm

For file DBIx/Class/Storage/DBI/MariaDB.pm, which is provided from CPAN, edit subroutine sqlt_type to return 'MariaDB' instead of 'MySQL'.

Also change this line:
__PACKAGE__->sql_maker_class('DBIx::Class::SQLMaker::MySQL');
to this line:
__PACKAGE__->sql_maker_class('DBIx::Class::SQLMaker::MariaDB');

SQL::Translator::Utils.pm

In the Utils.pm file there is a subroutine named ‘parse_mysql_version’. Copy this subroutine and name the copy ‘parse_MariaDB_version’. Then edit parse_MariaDB_version to change its MySQL references to MariaDB. Finally, edit the @EXPORT_OK entry to include parse_MariaDB_version.
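As a rough illustration of what such a version-parsing helper does (this is not the actual SQL::Translator::Utils code, and the helper name is invented), it turns a dotted version string into a single comparable number:

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Rough illustration only: turn "10.5.12" into one number so that
# versions can be compared numerically. The real parse_mysql_version
# in SQL::Translator::Utils handles more input formats.
sub parse_version_numeric {
    my ($version) = @_;
    my ( $maj, $min, $patch ) = split /\./, $version;
    return $maj * 1_000_000 + ( $min // 0 ) * 1_000 + ( $patch // 0 );
}

printf "%d\n", parse_version_numeric('10.5.12');
```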

SQL/Translator/Producer/ClassDBI.pm

Add a MariaDB entry to the %CDBI_auto_pkgs hash

SQL/Translator/Producer/Dumper.pm

Add a MariaDB entry to the %type_to_dbd hash

Results

After making the essential edits I was able to produce Schema Result Classes that appear to be accurate and not missing information. After making the additional edits the Schema Loader still worked correctly.

Caveats and Unknowns:

I have not been able to do production level testing on these changes. I suspect the additional edits and some as yet unidentified edits are needed for full DBIx::Class functionality with MariaDB 10. For example, version control for using the Schema to modify the database (instead of reading the database to create the schema) is probably not working.

I have only tested the schema result class build with a command such as:

use DBIx::Class::Schema::Loader qw/ make_schema_at /;
make_schema_at(
    'MyApp::Schema',
    {
        debug          => 1,
        dump_directory => './lib/test',
        create         => 'static',
        components     => [ 'TimeStamp' ]
    },
    [
        'dbi:MariaDB:database=database_name;host=192.168.xxx.xxx', 'username',
        'secret_pw', { AutoCommit => '1' }
    ],
);

My criterion for success was that (after just the essential edits) the MariaDB 10 clients (on my Rocky 8 and 9 servers) produced schema result classes identical to those produced by my MariaDB 5 client running on my CentOS 7 server when querying the same MariaDB 10 database.

My target database consists of 46 tables containing an assortment of foreign keys defining has_many, belongs_to and many_to_many relationships. The tables contain unsigned integers, unique keys, combined primary keys, auto_increment, integers (signed and unsigned), timestamp, NULLs and not-NULLs, char, varchar, and text data types.

This information is part of a work in progress and my testing is not yet complete. Use these changes at your own risk. They are offered without warranty. You should test thoroughly before incorporating them in important work. Generate your test schemas to a separate directory to avoid harming a known good schema.

If you can provide additional information, corrections and improvements, please share them.

Perl Toolchain Summit 2024 - Lisbon Portugal

blogs.perl.org

Published by Chad 'Exodist' Granum on Saturday 18 May 2024 09:22

I just got back from the Perl Toolchain Summit 2024 in Lisbon Portugal!

Thank you to Grant Street Group for sponsoring my attendance at the event! Grant Street Group is an amazing place to work, and GSG is hiring! Contact me on irc.perl.org (Exodist) if you would like a referral.

This year I took a little side trip before the PTS to explore Lisbon with my wife. It is an amazing city, with a lot of history. I highly recommend visiting it and exploring the castles, palaces, and archaeological sights!

My goal for the PTS was to polish up Yath 2.0 and get it out the door. Spoiler alert: I did not achieve this goal, though I did make good progress. Instead, several other things happened that were even better in terms of getting collaborative work done!

Test2/Test2::Suite updates

I had several bug reports that built up over the last couple of months. Most of my first day at the PTS was spent fixing these and minting new releases. See the changelog for Test-Simple for details. Without this event it would have been harder to find time to work on all of these. I also fixed a couple of other modules; see my module upload list for all the modules I updated at the PTS.

PAUSE contribution

The PAUSE developers needed a way to manage concurrency. Charsbar (Kenichi Ishigaki) approached me about Parallel::Runner, which was exactly what they needed, but used some outdated modules as I have not touched it in almost a decade. I was able to mint a new release with better and more modern dependencies. Now Parallel::Runner is used under the hood for some PAUSE processes.

Using Yath to test modules on install

Additionally, Garu approached me and Leon T. about using Yath as a better and universal way to test modules and upload results to cpanm. This resulted in a collaboration between Leon and me that makes it possible to tell cpan, cpanm, etc. to use Yath instead of prove! Once Yath 2.0 and a non-dev version of the new Test-Harness are both available, you can do this:

  1. You need version 3.49 or greater of Test-Harness (Currently this is only available in a dev version).
  2. Install yath 2.0 (not released yet, but soon!)
  3. Run like this: HARNESS_SUBCLASS=TAP::Harness::Yath cpanm MODULE
  4. You can also set the TAP_HARNESS_YATH_ARGS env var to add comma separated args. (Example: "-j9,-B")

Yath working group (not an official name)

Garu, Preaction, Ilmari, and I came together to discuss Yath. A new goal is to make it possible for Yath to send CPAN Testers reports. Garu wants a clear and easy way to make reports. Yath, especially with the above changes to Test-Harness, should greatly simplify the process. (We also need a cpantesters plugin for Yath.)

Ilmari helped me out with an audit of the DBIx::Class components of Yath, as well as a short review of the schema, where he fixed some index definitions. I am very grateful for this code review.

Preaction and I discussed creating a cpan central Yath server (formerly Yath::UI). This would be a very useful tool both for people running tests, and people trying to fix any issues the tests reveal.

Finally we are now trying to plan a Yath hackathon some time this year to all get together and improve Yath. If 2.0 is not released before the hackathon then the goal of the hackathon will be to get it there.

Other items

Many other things happened at the PTS. We had discussions about security and PAUSE/CPAN. MetaCPAN and PAUSE both had significant work done to them, and both are much better today as a result. I did not actively participate in these improvements, so stay tuned for blogs from other attendees!

Monetary sponsors:

Booking.com, The Perl and Raku Foundation, Deriv, cPanel, Inc., Japan Perl Association, Perl-Services, Simplelists Ltd, Ctrl O Ltd, Findus Internet-OPAC, Harald Joerg, Steven Schubiger.

In kind sponsors:

Fastmail, Grant Street Group, Deft, Procura, Healex GmbH, SUSE, Zoopla.

The Perl and Raku Conference (now in its 26th year) would not exist without sponsors. Above, you’ll see a screen shot from Curtis Poe’s Perl’s new object-oriented syntax is coming, but what’s next? talk at last year’s conference in Toronto. You may be wondering how you can add your organization’s logo to this year’s list. In the spirit of transparency, we are making our sponsorship prospectus public. Please share this article freely with friends, colleagues and decision makers so that we can reach as many new sponsors as possible.

The Perl and Raku Conference 2024 Prospectus

This year the Perl and Raku Conference will be held in Las Vegas, Nevada on June 24-28, 2024. Conferences such as this provide tangible benefits to organizations which use Perl. Aside from the transfer of knowledge which attendees bring back to their place of work, on-site hackathons also contribute to the growth and maintenance of the software stack which so many companies have come to rely on. In 2022, for example, the hackathon focused on modernizing and improving support for Perl across various editors, allowing Perl developers to be even more productive than they have been in the past.

There are still opportunities to support this important, grassroots Open Source Software event. Events like these largely depend on sponsors in order to thrive.

This year, we are looking for corporate donations to offset the costs of feeding conference attendees. Each of these sponsorship opportunities comes with the following:

  • your logo, should you provide one, will appear on the banners which are displayed behind speakers and will subsequently appear in speaker videos
  • your logo, a short blurb and a long blurb about your company will appear on the event website
  • you will be listed as a sponsor in the materials handed out at the event
  • we will thank you publicly at the event
  • if you are able to provide some swag, we will gladly distribute it at the conference via our swag bags

Breakfast Sponsor (3 available)

Sponsor a catered breakfast during one of the conference days

Sponsorship commitment: $3,500

Snack Breaks (2 available)

Sponsor a catered snack break during one of the conference days.

Sponsorship commitment: $3,000

Coffee Break Sponsor (2 available)

Sponsor a coffee break during one of the conference days.

Sponsorship commitment: $2,500

Please do let me know at what level you might be interested in contributing and we can begin the process of getting you involved in this very special event.

Deadline

In order to get your logo on the “step and repeat” banner we would need to have finalized sponsorship and received logo assets by June 1st, so we’d love to help you start the process as soon as you’re ready.

Contact

For any questions or to begin the sponsorship process, please contact me via olaf@wundersolutions.com. I’ll be happy to answer any questions and walk you through the process. If you’d like to discuss sponsorship options which are greater or smaller than the offerings listed, I can also work with you on that. If you’re not ready to sponsor this event but would like to be included in future mailings, please reach out to me via email as well. I look forward to hearing from you!

Spaces are Limited

In 2024 we expect to host over 100 attendees, but there is a hard cap of 150. If you’re thinking of attending, it’s best to secure your ticket soon.

About Perl Programming

Perl on Medium

Published by Vcanhelpsu on Wednesday 15 May 2024 07:40

Data::Fake::CPAN (a PTS 2024 supplement)

rjbs forgot what he was saying

Published by Ricardo Signes on Sunday 12 May 2024 12:00

One of the things I wrote at the first PTS (back when it was called the Perl QA Hackathon) was Module::Faker. I wrote about it back then (way back in 2008), and again eleven years later. It’s a library that, given a description of a (pretend) CPAN distribution, produces that distribution as an actual file on disk with all the files the dist should have.

Every year or two I’ve made it a bit more useful as a testing tool, mostly for PAUSE. Here’s a pretty simple sample of how those tests use it:

$pause->upload_author_fake(PERSON => 'Not-Very-Meta-1.234.tar.gz', {
  omitted_files => [ qw( META.yml META.json ) ],
});

This writes out Not-Very-Meta-1.234.tar.gz with a Makefile.PL, a manifest, and other stuff. The package and version (and a magic true value) also appear in lib/Not/Very/Meta.pm. Normally, you’d also get metafiles, but here we’ve told Module::Faker to omit them, so we can test what happens without them. When we were talking about testing the new PAUSE server in Lisbon, we knew we’d have to upload distributions and see if they got indexed. Here, we wouldn’t want to just make the same test distribution over and over, but to quickly get new ones that wouldn’t conflict with the old ones.

This sounded like a job for Module::Faker and a random number generator, so I hot glued the two things together. Before I get into explaining what I did, I should note that this work wasn’t very important, and we really only barely used it, because we didn’t really need that much testing. On the other hand, it was fun. I had fun writing it and seeing what it would generate, and I have plans to have more fun with it. After a long day of carefully reviewing cron job logs, this work was a nice goofy thing to do before dinner.

Data::Fake::CPAN

Data::Fake is a really cool library written by David Golden. It’s really simple, but that simplicity makes it powerful. The ideas are like this:

  1. it’s useful to have a function that, when called, returns random data
  2. to configure that generator, it’s useful to have a function that returns the kind of function discussed in #1
  3. these kinds of functions are useful to compose

So, for example, here’s some sample code from the library’s documentation:

my $hero_generator = fake_hash(
    {
        name      => fake_name(),
        battlecry => fake_sentences(1),
        birthday  => fake_past_datetime("%Y-%m-%d"),
        friends   => fake_array( fake_int(2,4), fake_name() ),
        gender    => fake_pick(qw/Male Female Other/),
    }
);

Each of those fake... subroutine calls returns another subroutine. So, in the end you have $hero_generator as a code reference that, when called, will return a reference to a five-key hash. Each value in the hash will be the result of calling the generators given as values in the fake_hash call.

It takes a little while to get used to working with code generators this way, but once you do, it becomes very easy to snap together generators of random data structures. (When you’re done here, why not check out David Golden’s talk about using higher-order functions to create Data::Fake?) Helpfully, as you can see above, Data::Fake comes with generators for a bunch of common data types.
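To make the higher-order pattern concrete, here is a minimal, self-contained sketch of the same idea in plain Perl. These few-line fake_* re-implementations are illustrative only, not Data::Fake's actual source:

```perl
use strict;
use warnings;

# Minimal re-implementations of the Data::Fake pattern (illustrative only,
# not Data::Fake's actual source): each fake_* function returns a code
# reference that produces a fresh random value every time it is called.
sub fake_pick {
    my @options = @_;
    return sub { $options[ int rand @options ] };
}

sub fake_int {
    my ($min, $max) = @_;
    return sub { $min + int rand( $max - $min + 1 ) };
}

sub fake_array {
    my ($size_gen, $item_gen) = @_;
    return sub { [ map { $item_gen->() } 1 .. $size_gen->() ] };
}

sub fake_hash {
    my ($template) = @_;
    # Composition: calling the outer generator calls every inner generator.
    return sub { +{ map { $_ => $template->{$_}->() } keys %$template } };
}

my $hero_generator = fake_hash({
    age     => fake_int(18, 99),
    gender  => fake_pick(qw/Male Female Other/),
    friends => fake_array( fake_int(2, 4), fake_pick(qw/Ana Bruno Carla/) ),
});

my $hero = $hero_generator->();    # a new random hash reference on each call
```

Because each call to `$hero_generator->()` re-runs every inner generator, every invocation yields a fresh random structure.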

What I did was write a Data::Fake plugin, Data::Fake::CPAN, that provides generators for version strings, package names, CPAN author identities, license types, prereq structures and, putting those all together, entire CPAN distributions. So, this code works:

use Data::Fake qw(CPAN);

my $dist = fake_cpan_distribution()->();

my $archive = $dist->make_archive({ dir => '.' });

When run, this writes out an archive file to disk. For example, I just got this:

$ ./nonsense/module-blaster
Produced archive as ./Variation-20010919.556.tar.gz (cpan author: MDUNN)
- Variation
- Variation::Colorless
- Variation::Conventional
- Variation::Dizzy

There are a few different kinds of date formats that it might pick. This time, it picked YYYYMMDD.xxx. That username, MDUNN, is short for Mayson Dunn. I found out by extracting the archive and reading the metadata. Here’s a sample of the prereqs:

{
  "prereqs" : {
    "build" : {
       "requires" : {
          "Impression::Studio" : "19721210.298"
       }
    },
    "runtime" : {
       "conflicts" : {
          "Writer::Cigarette" : "19830107.752"
       },
       "recommends" : {
          "Error::Membership" : "v5.16.17",
          "Marriage" : "v1.19.6"
       },
       "requires" : {
          "Alcohol" : "v12.16.0",
          "Competition::Economics" : "v19.1.7",
          "People" : "20100228.011",
          "Republic" : "20040805.896",
          "Transportation::Discussion" : "6.069"
       }
    }
  }
}
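The mix of version styles visible above (dotted-decimal v-strings like v5.16.17 next to date-based versions like 19830107.752) suggests a generator that picks a format at random. Here is a hypothetical sketch in the same higher-order style, not Data::Fake::CPAN's actual implementation:

```perl
use strict;
use warnings;

# Hypothetical sketch (not Data::Fake::CPAN's actual code): a version
# generator that randomly picks between a dotted-decimal v-string and a
# date-based YYYYMMDD.xxx version, as seen in the prereqs sample above.
sub fake_version {
    my @formats = (
        sub { sprintf 'v%d.%d.%d', int rand 20, int rand 20, int rand 20 },
        sub {
            sprintf '%04d%02d%02d.%03d',
                1970 + int rand 55,    # year
                1    + int rand 12,    # month
                1    + int rand 28,    # day
                int rand 1000;         # fractional part
        },
    );
    return sub { $formats[ int rand @formats ]->() };
}

my $version = fake_version()->();    # e.g. "v12.16.0" or "20010919.556"
```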

You’ll see that when I generated this, I ran ./nonsense/module-blaster. That program is in the Module-Faker repo, for your enjoyment. I hope to play with it more in the future, changing the magic true values, maybe adding real code, and just more variation — but probably centered around things that will have real impact on how PAUSE indexes things.

Probably very few people have much use for Module::Faker, let alone Data::Fake::CPAN. I get that! But Data::Fake is pretty great, and pretty useful for lots of testing. Also, generating fun, sort of plausible data makes testing more enjoyable. I don’t know why, but I always like watching my test suite fail more when it’s spitting out fun made-up names at the same time. Try it yourself!

(cdxcv) 8 great CPAN modules released last week

Niceperl

Published by Unknown on Sunday 12 May 2024 09:43

Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater than or equal to 12.

  1. DBD::Oracle - Oracle database driver for the DBI module
    • Version: 1.90 on 2024-05-07, with 31 votes
    • Previous CPAN version: 1.83 was 2 years, 3 months, 21 days before
    • Author: ZARQUON
  2. Firefox::Marionette - Automate the Firefox browser with the Marionette protocol
    • Version: 1.57 on 2024-05-06, with 16 votes
    • Previous CPAN version: 0.77 was 4 years, 9 months, 30 days before
    • Author: DDICK
  3. Minion::Backend::mysql - MySQL backend
    • Version: 1.005 on 2024-05-06, with 13 votes
    • Previous CPAN version: 1.004 was 6 months, 6 days before
    • Author: PREACTION
  4. Path::Tiny - File path utility
    • Version: 0.146 on 2024-05-08, with 188 votes
    • Previous CPAN version: 0.144 was 1 year, 5 months, 7 days before
    • Author: DAGOLDEN
  5. PDL - Perl Data Language
    • Version: 2.089 on 2024-05-11, with 52 votes
    • Previous CPAN version: 2.088 was 20 days before
    • Author: ETJ
  6. Perl::Tidy - indent and reformat perl scripts
    • Version: 20240511 on 2024-05-10, with 140 votes
    • Previous CPAN version: 20240202 was 3 months, 9 days before
    • Author: SHANCOCK
  7. Prima - a Perl graphic toolkit
    • Version: 1.73 on 2024-05-09, with 43 votes
    • Previous CPAN version: 1.72 was 3 months, 9 days before
    • Author: KARASIK
  8. SPVM - The SPVM Language
    • Version: 0.990006 on 2024-05-09, with 31 votes
    • Previous CPAN version: 0.990003 was 8 days before
    • Author: KIMOTO

Outreachy Internship 2024 Updates

Perl Foundation News

Published by Makoto Nozaki on Thursday 09 May 2024 19:21

TL;DR We just finished intern selection for this year’s Outreachy program. We got more projects and more applicants than the previous years, which made the selection hard in a good way.

Continuing our annual tradition, The Perl and Raku Foundation is involved in the Outreachy program, which provides internships to people subject to systemic bias and impacted by underrepresentation.

We have just finished the intern selection process, which turned out to be harder compared to the previous years. I’ll explain the reasons below.

It was harder because we got multiple high quality project proposals

Each year, we call for project ideas from the Perl/Raku community. The project proposer is required to commit to mentoring an intern from May to August. Given the significant commitment involved, it's not always easy for us to find suitable projects.

Fortunately, this year, we got two promising project proposals. The Foundation’s financial situation did not allow us to sponsor both projects, so we had to make the tough decision to support only one project.

After careful consideration, the Board has elected to sponsor Open Food Facts' Perl project, titled “Extend Open Food Facts to enable food manufacturers to open data and improve food product quality.”

It was harder because more people showed up

Having more projects means we were able to attract more intern candidates. Across the two projects, more than 50 people showed interest and initiated contributions. Among them, 21 individuals actually created pull requests before the selection process.

Needless to say, it's hard work for the mentors to help dozens of candidates. They taught these intern candidates how to code and guided them through creating pull requests. On the applicants’ side, I am amazed that they worked hard to learn Perl and became proficient enough to create pull requests and make real improvements to the systems.

And the final selection was harder because we had more applicants

After the contribution process, we received applications from 14 people. It was obviously hard for the mentors to select just one from so many good applicants. In the next post, Stéphane Gigandet will introduce our new intern to the community.

I wish all the best to the mentors, Stéphane and Alex, and our new intern.

Voice from the applicants

"In the journey to understand Perl better, I wanted to know what are its most wide applications, one of them being a web scraper. It's because Perl's strong support for regular expressions and built-in text manipulation functions make it well-suited for tasks like web scraping, where parsing and transforming text are essential. I took inspiration from various web scraping projects available on the internet to gain insights into the process and developed a lyrics scraper."

"I'm currently diving into Perl, and I see this as a fantastic chance to enrich my coding skills. I've thoroughly enjoyed immersing myself in it and have had the opportunity to explore various technologies like Docker and more."

"I have had the opportunity to experience Perl firsthand and have come to appreciate its significance in web development, on which I have worked. During my second year, I was searching for popular languages in backend development and found out about Perl, whose syntax was somewhat like C and Python. I didn't have any previous experience working with Perl, but now I have gained a deep understanding of its importance and impact on backend development and data processing."

"In this pull request, I made a significant stride in improving the quality and maintainability of our Perl codebase by integrating Perl::Critic, a powerful static code analysis tool."

"I've learned a whole lot about Perl and some of its frameworks such as Dancer2 (a surprisingly simple framework I've come to fall in love with)."

What's new on CPAN - April 2024

perl.com

Published on Thursday 09 May 2024 19:00

Welcome to “What’s new on CPAN”, a curated look at last month’s new CPAN uploads for your reading and programming pleasure. Enjoy!

APIs & Apps

Config & Devops

Data

Development & Version Control

Science & Mathematics

Web

Other

Maintaining Perl (Tony Cook) February 2024

Perl Foundation News

Published by alh on Monday 06 May 2024 19:42


Tony writes:

```
[Hours]  [Activity]

2024/02/01 Thursday
  2.50  #21873 fix, testing on both gcc and MSVC, push for CI
  ----
  2.50

2024/02/02 Friday
  0.72  #21915 review, testing, comments
  0.25  #21883 review recent updates, apply to blead
  ----
  0.97

2024/02/05 Monday
  0.25  github notifications
  0.08  #21885 review updates and approve
  0.57  #21920 review and comment
  0.08  #21921 review and approve
  0.12  #21923 review and approve
  0.08  #21924 review and approve
  0.08  #21926 review and approve
  0.67  #21925 review and comments
  2.00  #21877 code review, testing
  ----
  3.93

2024/02/06 Tuesday
  0.23  #21925 comment
  0.52  review coverity scan report, reply to email from jkeenan
  0.27  #21927 review and comment
  0.08  #21928 review and approve
  0.08  #21922 review and approve
  ----
  1.18

2024/02/07 Wednesday
  0.25  github notifications
  0.52  #21935 review, existing comments need addressing
  2.12  #21877 work on fix, push for CI most of a fix
  ----
  2.89

2024/02/08 Thursday
  0.40  #21927 review and approve
  0.23  #21935 review, check each comment has been addressed, approve
  0.45  #21937 review and approve
  0.15  #21938 review and comment
  0.10  #21939 review and approve
  0.13  #21941 review and approve
  0.10  #21942 review and approve
  0.08  #21943 review and approve
  0.07  #21945 review and approve
  0.17  #21877 look into CI failures, think I found problem, push probable fix
  0.18  #21927 make a change to improve pad_add_name_pvn() docs, testing, push for CI
  2.20  #21877 performance test on cygwin, try to work up a regression test
  ----
  4.26

2024/02/12 Monday
  0.60  #18606 fix minor issue pointed out by mauke, testing
  0.40  github notifications
  0.08  #21872 review latest changes and approve
  0.08  #21920 review latest changes and approve
  1.48  #21877 debugging test
  0.30  #21524 comment on downstream ticket
  0.27  #21724 update title to match reality and comment
  ----
  3.21

2024/02/13 Tuesday
  0.35  #21915 review, brief comment
  0.25  #21983 review and approve
  0.03  #21233 close
  0.28  #21878 comment
  0.08  #21927 check CI results and make PR 21984
  0.63  #21877 debug failing CI
  0.27  #21984 follow-up
  0.58  #21982 review, testing, comments
  0.32  #21979 review and approve
  ----
  2.79

2024/02/14 Wednesday
  1.83  #21958 testing, finally reproduce, debugging and comment
  0.08  #21987 review discussion and briefly comment
  0.08  #21984 apply to blead
  0.22  #21977 review and approve
  0.12  #21988 review and approve
  0.15  #21990 review and approve
  0.82  #21550 probable fix, build tests
  0.38  coverity scan follow-up
  1.27  #21829/#21558 (related to 21550) debugging
  0.65  #21829/#21558 more debugging, testing, comment
  ----
  5.60

2024/02/15 Thursday
  0.15  github notifications
  0.08  #21915 review updates and approve
  2.17  #21958 debugging, research, long comment
  0.58  #21958 testing, follow-up
  0.12  #21991 review and approve
  ----
  3.10

2024/02/19 Monday
  0.88  #21161 review comment and reply, minor change, testing, force push
  0.23  #22001 review and comment
  0.30  #22002 review and comment
  0.12  #22004 review and comment
  0.28  #22005 review and approve
  0.32  #21993 testing, review changes
  1.95  #21661 review comments on PR and fixes, review code and history for possible refactor of vFAIL*() macros
  ----
  4.08

2024/02/20 Tuesday
  0.35  github notifications
  0.08  #22010 review and approve
  0.08  #22007 review and approve with comment
  0.60  #22006 review, research and approve with comment
  0.08  #21989 review and approve
  0.58  #21996 review, testing, comment
  0.22  #22009 review and approve
  0.50  #21925 review latest updates and approve
  1.05  #18606 apply to blead, work on a perldelta, make PR 22011
  ----
  3.54

2024/02/21 Wednesday
  0.18  #22011 fixes
  0.80  #21683 refactoring
  1.80  #21683 more refactor
  ----
  2.78

2024/02/22 Thursday
  0.38  #22007 review and comment
  0.70  #21161 apply to blead, perldelta as PR22017
  1.75  smoke report checks: testing win32 gcc failures
  0.27  #22007 review updates and approve
  1.15  #21661 re-check, research and push for smoke/ci
  ----
  4.25

2024/02/26 Monday
  2.10  look over smoke reports, debug PERLIO=stdio failure on mac
  1.38  more debug PERLIO=stdio
  ----
  3.48

2024/02/27 Tuesday
  0.08  #22029 review and apply to blead
  0.27  #22024 review and approve
  0.33  #22026 review and approve
  0.08  #22027 review and approve
  0.10  #22028 review and approve
  0.08  #22030 review and comment, conditionally approve
  0.25  #22033 review, comments and approve
  0.08  #22034 review and approve
  0.17  #22035 review and comment
  0.78  #21877 debugging
  ----
  2.22

2024/02/28 Wednesday
  0.38  github notifications
  0.52  #22040 review discussion, research and comment
  0.13  #22043 review and approve
  0.12  #22044 review and approve
  0.72  #22045 review, research, comment and approve
  0.13  #22046 review, research and approve
  1.55  #21877 more debugging (unexpected leak)
  ----
  3.55

2024/02/29 Thursday
  0.15  #21966 review update and approve
  1.18  #21877 debugging
  0.13  fix $DynaLoader::VERSION
  ----
  1.46
```

Which I calculate is 55.79 hours.

Approximately 70 tickets were reviewed or worked on, and 5 patches were applied.

TPRF sponsors Perl Toolchain Summit

Perl Foundation News

Published by Makoto Nozaki on Friday 03 May 2024 19:49

I am pleased to announce that The Perl and Raku Foundation sponsored the Perl Toolchain Summit 2024 as a Platinum Sponsor.

The Perl Toolchain Summit (PTS) is an annual event that brings together the volunteers who work on the tools and modules at the heart of Perl and the CPAN ecosystem. The PTS gives them four days to work together on these systems, with all their fellow volunteers to hand.

The event successfully concluded in Lisbon, Portugal at the end of April 2024.

If you or your company would like to help future PTS events, you can get in touch with the PTS team. Alternatively, you can make a donation to The Perl and Raku Foundation, which is a 501(c)(3) organization.

PTS 2024: Lisbon

rjbs forgot what he was saying

Published by Ricardo Signes on Friday 03 May 2024 15:19

Almost exactly a year since the last Perl Toolchain Summit, it was time for the next one, this time in Lisbon. Last year, I wrote:

In 2019, I wasn’t sure whether I would go. This time, I was sure that I would. It had been too long since I saw everyone, and there were some useful discussions to be had. I think that overall the summit was a success, and I’m happy with the outcomes. We left with a few loose threads, but I’m feeling hopeful that they can, mostly, get tied up.

Months later, I did not feel hopeful. Those loose threads were left dangling, and I felt like some of the best work I had done was not providing any value. I was grouchy about it, and figured I was done. Then, though, I started thinking that there was one last project I'd like to do for PAUSE: upgrading the server. It's the thing I said I wanted to do last year, but barely even started. This year, I said that if we could get buy-in to do it, I'd go. Since I'm writing this blog post, you know I went, and I'm going to tell you about it.

PAUSE Bootstrap

Last year, Matthew and I wanted to make it possible to quickly spin up a working PAUSE environment, so we could replace the long-suffering “pause2” server. We were excited by the idea of starting from work that Kenichi Ishigaki had done to create a Docker container running a test instance. We only ended up doing a little work on that, partly because we thought we’d be starting from scratch and didn’t know enough Docker to be useful.

This year, we decided it’d be our whole mission. We also said that we were not going to start with Docker. Docker made sense, it was probably a great way to do it, but Matthew and I still aren’t Docker users. We wanted results, and we felt the way to get them was to stick to what we know: automated installation and configuration of an actual VM. We pitched this plan to Robert Spier, one of the operators of the Perl NOC and he was on board. I leaned on him pretty hard to actually come to Lisbon and help, and he agreed. (He also said that a sufficiently straightforward installer would be a good starting point for turning things into Docker containers later, which was reassuring.)

At Fastmail, where Matthew and I work, we can take every other Friday for experimental or out-of-band work, and we decided we’d get started early. If the installer was done by the time we arrived, we’d be in a great position to actually ship. This was a great choice. Matthew and I, with help from another Fastmail colleague, Marcus, wrote a program. It started off life as unpause, but is now in the repo as bootstrap/mkpause. You can read the PAUSE Bootstrap README if you want to skip to “how do I use this?”.

The idea is that there’s a program to run on a fresh Debian 12 box. That installs all the needed apt packages, configures services, sets up Let’s Encrypt, creates unix users, builds a new perl, installs systemd services, and gets everything running. There’s another program that can create that fresh Debian 12 box for you, using the DigitalOcean API. (PAUSE doesn’t run in DigitalOcean, but Fastmail has an account that made it easy to use for development.)

I think Matthew and I worked well together on this. We found different rabbit holes interesting. He fixed hard problems I was (barely) content to suffer with. (There was some interesting nonsense with the state of apt locking and journald behavior shortly after VM “cloud init”.) I slogged through testing exactly whether each cron job ran correctly and got a pre-built perl environment ready for quick download, to avoid running plenv and cpanm during install.

Before we even arrived, we could go from zero to a fully running private PAUSE server in about two and a half minutes! Quick builds meant we could iterate much faster. We also had a script to import all of PAUSE’s data from the live PAUSE. It took about ten minutes to run, but we had it down to one minute by day two.

When we arrived, I took my todo and threw it up on the wall in the form of a sticky note kanban board.

PTS Stickies: Day 1

We spent day one re-testing cron jobs, improving import speed, and (especially) asking Andreas König all kinds of questions about things we’d skipped out of confusion. More on those below, but without Andreas, we could easily have broken or ignored critical bits of the system.

By the end of day two, we were confident that we could deploy the next day. I’d hoped we could deploy on day two, but there were just too many bits that were not quite ready. Robert had spent a bunch of time running the installer on the VM where he intended to run the new production PAUSE service, known at the event as “pause3”. There were networking things to tweak, and especially storage volume management. This required the rejiggering of a bunch of paths, exposing fun bugs or incorrect assumptions.

The first thing we did on day three was start reviewing our list of pre-deploy acceptance tests. Did everything on the list work? We thought so. We took down pause2 for maintenance at 10:00, resynchronized everything, watched a lot of logs, and did some uploads. We got some other attendees to upload things to pause3. Everything looked good, so we cut pause.perl.org over to pause3. It worked! We were done! Sort of.

We had some more snags to work through, but it was just the usual nonsense. A service was logging to the wrong place. The new MySQL was stricter about data validation than the old one. An accidental button-push took down networking on the VM. Everything got worked out in the end. I’ll include some “weird stuff that happened” below, but the short version is: it went really smoothly, for this kind of work.

On day four, we got to work on fit and finish. We cleaned up logging noise, we applied some small merge requests that we’d piled up while trying to ship. We improved the installer to move more configuration into files, instead of being inlined in the installer. Also, we prepared pull requests to delete about 20,000 lines of totally unused files. This is huge. When trying to learn how an older codebase works, it can be really useful to just grep the code for likely variable names or known subroutines. When tens of thousands of lines in the code base are unused, part of the job becomes separating live code out from dead code, instead of solving a problem.

We also overhauled a lot of documentation. It was exciting to replace the long and bit-rotted “how to install a private PAUSE” with something that basically said “run this program”. It doesn’t just say that, though, and now it’s accurate and written from the last successful execution of the process. You can read how to install PAUSE yourself.

Matthew, Robert, and I celebrated a successful PTS by heading off to Belém Tower to see the sights and eat pastéis.

I should make clear, finally, that the PAUSE team was five people. Andreas König and Kenichi Ishigaki were largely working on other goals not listed here. It was great to have them there for help on our work, but they got other things done, not documented in this post!

Here’s our kanban board from the end of day four:

PTS Stickies: Day 4

Specifics of Note

run_mirrors.sh

This was one of the two mirroring-related scripts we had to look into. It was bananas. Turns out that PAUSE had a list of users who ran their own FTP servers. It would, four times a day, connect to those servers and retrieve files from them directly into the users’ home directories on PAUSE.

Beyond the general bananas-ness of this, the underlying non-PAUSE program in /usr/bin/mirror no longer runs, as it uses $*, eliminated back in v5.30. Rather than fix it and keep something barely used and super weird around, we eliminated this feature. (I say “barely used”, but I found no evidence it was used at all.)

make-mirror-yaml.pl

The other mirror program! This one updated the YAML file that exposes the CPAN mirror list. Years ago, the mirror list was eliminated, and a single name now points to a CDN. Still, we were diligently updating the mirror list every hour. No longer.

rrrsync

You can rsync from CPAN, but it's even better to use rrr. With rrr, the PAUSE server is meant to maintain a few lists of "files that changed in a given time window". Other machines can then synchronize only the files that have changed since they last checked, with occasional full-scan reindexes.
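As a rough illustration of the idea (a hypothetical sketch, not the real recentfile format used by the rrr tooling), a mirror only needs the change events newer than its last sync:

```perl
use strict;
use warnings;

# Hypothetical sketch of the rrr idea, not the actual recentfile format:
# the server appends change events with timestamps, and a mirror fetches
# only the paths that changed since it last checked.
my @recent_events = (
    { path => 'authors/id/A/AA/AAA/Foo-1.00.tar.gz', epoch => 1_700_000_100 },
    { path => 'authors/id/B/BB/BBB/Bar-2.00.tar.gz', epoch => 1_700_000_200 },
    { path => 'modules/02packages.details.txt.gz',   epoch => 1_700_000_300 },
);

# Return the paths a mirror still needs, given when it last synchronized.
sub paths_to_sync {
    my ($events, $last_sync_epoch) = @_;
    return map  { $_->{path} }
           grep { $_->{epoch} > $last_sync_epoch } @$events;
}

my @todo = paths_to_sync( \@recent_events, 1_700_000_150 );
# @todo now holds only the two entries newer than the mirror's last sync
```

An occasional full-scan reindex then catches anything the incremental lists missed.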

We got this working pretty quickly, but it seemed to break at the last minute. What had happened? We couldn’t tell, everything looked great, and there were no errors. Eventually, I found myself using strace against perl. It turned out that during our reorganization of the filesystem, we’d moved where the locks live. We put in a symlink for the old name, and that’s what rrr was using… but it didn’t follow symlinks when locking. Once we updated the configuration to use the canonical name and not the link, everything worked.

Matthew said, “You solved a problem with strace!” I said, “I know!” Then we high fived and got back to work.

I was never happy with the symlinks we introduced during the filesystem reorganization, but I was happy when I eliminated the last one during day four cleanup!

the root partition filled up

We did all this work to keep the data partition capable of growth, and then / filled up. Ugh.

It turned out it was logs. This wasn’t too much of a surprise, but it was annoying. It was especially annoying because we decided early on that we’d just accept using journald for all our logging, and that should’ve kept us from going over quota.

It turned out that on the VM, something had installed a service I’d never heard of. Its job was to notice when something wanted to use the named syslog socket, and then start rsyslogd. Once that happened, we were double-logging a ton of stuff, and there was no log rotation configured. We killed it off.

We did other tuning to make sure we’d keep enough logs without running out of space, but this was the interesting part.

Future Plans

We have some. If nothing else, I’m dying to see my pull request 405 merged. (It’s the thing I wrote last year.) I have a bunch of half-done work that will be easier to finish after that. But the problem was: would this wait another year?

We finished our day — just before heading off to Belém — by talking about development between now and then. I said, “Look, I feel really demotivated and uninterested if I can’t continuously ship and review real improvements.” Andreas said, “I don’t want to see things change out from under me without understanding what happened.”

The five of us agreed to create a private PAUSE operations mailing list where we’d announce (or propose) changes and problems. We all joined, along with Neil Bowers, who is an important part of the PAUSE team but couldn’t attend Lisbon. With that, we felt good about keeping improvements flowing through the year. Robert has been shipping fixes to log noise. I’ve got a significant improvement to email handling in the wings. It’s looking like an exciting year ahead for PAUSE! (That said, it’s still PAUSE. Don’t expect miracles, okay?)

Thanks to our sponsors and organizers

The Perl Toolchain Summit is one of the most important events in the year for Perl. A lot of key projects have folks get together to get things done. Some of them are working all year, and use this time for deep dives or big lifts. Others (like PAUSE) are often pretty quiet throughout the year, and use this time to do everything they need to do for the year.

Those of us doing stuff need a place to work, and we need a way to get there and sleep, and we’re also pretty keen on having a nice meal or two together. Our sponsors and organizers make that possible. Our sponsors provide much-needed money to the organizers, and the organizers turn that money into concrete things like “meeting rooms” and “plane tickets”.

I offer my sincere thanks to our organizers: Laurent Boivin, Philippe Bruhat, and Breno de Oliveira, and also to our sponsors. This year, the organizers have divided sponsors into those who handed over cash and those who provided in-kind donations, like people’s time or paying attendees’ airfare and hotel bills directly. All these organizations and people are helping to keep Perl’s toolchain operational and improving. Here’s the breakdown:

Monetary sponsors: Booking.com, The Perl and Raku Foundation, Deriv, cPanel, Inc., Japan Perl Association, Perl-Services, Simplelists Ltd, Ctrl O Ltd, Findus Internet-OPAC, Harald Joerg, Steven Schubiger.

In-kind sponsors: Fastmail, Grant Street Group, Deft, Procura, Healex GmbH, SUSE, Zoopla.

Breno especially should get called out for organizing this from five thousand miles away. You never could’ve guessed, and it ran exceptionally smoothly. Also, it meant I got to see Lisbon, which was a terrific city that I probably would not have visited any time soon otherwise. Thanks, Breno!

List of new CPAN distributions – Apr 2024

Perlancar

Published by perlancar on Wednesday 01 May 2024 02:38

dist author abstract date
AI-Ollama-Client CORION Client for AI::Ollama 2024-04-05T09:15:33
Acme-CPANModules-BPOM-FoodRegistration PERLANCAR List of modules and utilities related to Food Registration at BPOM 2024-04-27T00:06:16
Acme-CPANModules-JSONVariants PERLANCAR List of JSON variants/extensions 2024-04-29T00:05:46
Alien-NLOpt DJERIUS Build and Install the NLOpt library 2024-04-28T00:59:11
Alien-onnxruntime EGOR Discover or download and install onnxruntime (ONNX Runtime is a cross-platform inference and training machine-learning accelerator.) 2024-04-17T22:03:45
AnyEvent-I3X-Workspace-OnDemand WATERKIP An I3 workspace loader 2024-04-12T18:33:21
App-papersway SPWHITTON PaperWM-like window management for Sway/i3wm 2024-04-12T08:18:00
App-sort_by_comparer PERLANCAR Sort lines of text by a Comparer module 2024-04-16T00:06:00
App-sort_by_example PERLANCAR Sort lines of text by example 2024-04-20T00:05:10
App-sort_by_sorter PERLANCAR Sort lines of text by a Sorter module 2024-04-17T00:05:42
App-sort_by_sortkey PERLANCAR Sort lines of text by a SortKey module 2024-04-24T00:06:38
Arithmetic-PaperAndPencil JFORGET simulating paper and pencil techniques for basic arithmetic operations 2024-04-22T19:57:44
Bencher-Scenario-ExceptionHandling PERLANCAR Benchmark various ways to do exception handling in Perl 2024-04-13T00:05:36
CPAN-Requirements-Dynamic LEONT Dynamic prerequisites in meta files 2024-04-27T15:17:57
CSAF GDT Common Security Advisory Framework 2024-04-23T21:49:42
CXC-DB-DDL DJERIUS DDL for table creation, based on SQL::Translator::Schema 2024-04-04T16:24:13
Captcha-Stateless-Text HIGHTOWE stateless, text-based CAPTCHAs 2024-04-17T21:19:21
Carp-Object DAMI a replacement for Carp or Carp::Clan, object-oriented 2024-04-28T17:58:22
Carp-Patch-OutputToBrowser PERLANCAR Output stacktrace to browser as HTML instead of returning it 2024-04-25T00:05:19
Catalyst-Plugin-Flash ARISTOTLE put values on the stash of the next request 2024-04-09T05:06:19
Comparer-date_in_text PERLANCAR Compare date found in text (or text asciibetically, if no date is found) 2024-04-18T00:05:43
Crypt-Passphrase-Bcrypt-Compat LEONT A bcrypt encoder for Crypt::Passphrase 2024-04-08T14:24:10
DBD-Mock-Session-GenerateFixtures UXYZAB When a real DBI database handle ($dbh) is provided, the module generates DBD::Mock::Session data. Otherwise, it returns a DBD::Mock::Session object populated with generated data. This not a part form DBD::Mock::Session distribution just a wrapper around it. 2024-04-29T18:25:02
Data-Dumper-UnDumper BIGPRESH load Data::Dumper output, including self-references 2024-04-25T21:42:30
Data-MiniDumpX PERLANCAR A simplistic data structure dumper (demo for Plugin::System) 2024-04-14T00:06:13
DateTime-Format-PDF SKIM PDF DateTime Parser and Formatter. 2024-04-01T09:23:07
Devel-Confess-Patch-UseDataDumpHTMLCollapsible PERLANCAR Use Data::Dump::HTML::Collapsible to stringify reference 2024-04-26T00:05:16
Devel-Confess-Patch-UseDataDumpHTMLPopUp PERLANCAR Use Data::Dump::HTML::PopUp to stringify reference 2024-04-28T00:06:05
Dist-Build LEONT A modern module builder, author tools not included! 2024-04-26T10:50:10
Dist-Zilla-Plugin-DistBuild LEONT Build a Build.PL that uses Dist::Build 2024-04-26T10:55:35
Dist-Zilla-Plugin-DynamicPrereqs-Meta LEONT Add dynamic prereqs to to the metadata in our Dist::Zilla build 2024-04-27T15:50:03
ExtUtils-Builder LEONT An overview of the foundations of the ExtUtils::Builder Plan framework 2024-04-25T12:14:45
ExtUtils-Builder-Compiler LEONT Portable compilation 2024-04-25T13:18:11
JSON-Ordered-Conditional LNATION A conditional language within an ordered JSON struct 2024-04-06T06:47:37
JSON-ToHTML ARISTOTLE render JSON-based Perl datastructures as HTML tables 2024-04-09T04:28:11
Knowledge RSPIER a great new dist 2024-04-27T11:13:53
Log-Any-Simple MATHIAS A very thin wrapper around Log::Any, using a functional interface that dies automatically when you log above a given level. 2024-04-24T19:51:03
Mo-utils-Country SKIM Mo country utilities. 2024-04-11T13:41:33
Mo-utils-Time SKIM Mo time utilities. 2024-04-12T14:28:06
Mo-utils-TimeZone SKIM Mo timezone utilities. 2024-04-03T16:34:52
Mojolicious-Plugin-Authentication-OIDC TYRRMINAL OpenID Connect implementation integrated into Mojolicious 2024-04-25T19:27:09
Mojolicious-Plugin-Cron-Scheduler TYRRMINAL Mojolicious Plugin that wraps Mojolicious::Plugin::Cron for job configurability 2024-04-16T11:48:54
Mojolicious-Plugin-Migration-Sqitch TYRRMINAL Run Sqitch database migrations from a Mojo app 2024-04-30T15:37:52
Mojolicious-Plugin-Module-Loader TYRRMINAL Automatically load mojolicious namespaces 2024-04-19T14:09:36
Mojolicious-Plugin-ORM-DBIx TYRRMINAL Easily load and access DBIx::Class functionality in Mojolicious apps 2024-04-03T13:32:06
Mojolicious-Plugin-SendEmail TYRRMINAL Easily send emails from Mojolicious applications 2024-04-01T20:40:24
Mojolicious-Plugin-Sessionless TYRRMINAL Installs noop handlers to disable Mojolicious sessions 2024-04-16T12:45:37
MooX-Pack LNATION The great new MooX::Pack! 2024-04-20T01:52:17
Net-Async-OpenExchRates VNEALV Interaction with OpenExchangeRates API 2024-04-20T11:46:28
Net-EPP-Server GBROWN A simple EPP server implementation. 2024-04-08T09:38:21
Number-Iterator LNATION The great new Number::Iterator! 2024-04-18T19:45:31
Parallel-TaskExecutor MATHIAS Cross-platform executor for parallel tasks executed in forked processes 2024-04-13T20:02:27
Plack-App-Login-Request SKIM Plack application for request of login information. 2024-04-29T14:23:02
Sah-SchemaBundle-Business-ID-BCA PERLANCAR Sah schemas related to BCA (Bank Central Asia) bank 2024-04-23T00:05:53
Sah-SchemaBundle-Business-ID-Mandiri PERLANCAR Sah schemas related to Mandiri bank 2024-04-30T00:05:43
Sah-SchemaBundle-Comparer PERLANCAR Sah schemas related to Comparer 2024-04-21T00:05:30
Sah-SchemaBundle-Path PERLANCAR Schemas related to filesystem path 2024-04-01T00:06:15
Sah-SchemaBundle-Perl PERLANCAR Sah schemas related to Perl 2024-04-02T00:05:40
Sah-SchemaBundle-SortKey PERLANCAR Sah schemas related to SortKey 2024-04-22T00:06:02
Sah-SchemaBundle-Sorter PERLANCAR Sah schemas related to Sorter 2024-04-03T00:14:57
Sah-Schemas-Sorter PERLANCAR Sah schemas related to Sorter 2024-04-03T00:05:43
Seven LNATION The great new Seven! 2024-04-13T03:30:11
Sort-Key-SortKey PERLANCAR Thin wrapper for Sort::Key to easily use SortKey::* 2024-04-04T00:05:05
SortExample-Color-Rainbow-EN PERLANCAR Ordered list of names of colors in the rainbow, in English 2024-04-05T00:06:12
SortKey-Num-pattern_count PERLANCAR Number of occurrences of string/regexp pattern as sort key 2024-04-06T00:05:41
SortKey-Num-similarity PERLANCAR Similarity to a reference string as sort key 2024-04-08T00:05:21
SortKey-date_in_text PERLANCAR Date found in text as sort key 2024-04-19T00:05:23
SortSpec PERLANCAR Specification of sort specification 2024-04-09T00:05:37
SortSpec-Perl-CPAN-ChangesGroup-PERLANCAR PERLANCAR Specification to sort changes group heading PERLANCAR-style 2024-04-10T00:05:24
Sorter-from_comparer PERLANCAR Sort by comparer generated by a Comparer:: module 2024-04-11T00:05:17
Sorter-from_sortkey PERLANCAR Sort by keys generated by a SortKey:: module 2024-04-12T00:05:58
Sqids MYSOCIETY generate short unique identifiers from numbers 2024-04-06T10:43:27
TableData-Business-ID-BPOM-FoodAdditive PERLANCAR Food additives in BPOM 2024-04-10T11:10:00
Tags-HTML-Image SKIM Tags helper class for image presentation. 2024-04-20T13:32:39
Tags-HTML-Login-Request SKIM Tags helper for login request. 2024-04-29T11:23:37
Test2-Tools-MIDI JMATES test MIDI file contents 2024-04-09T23:42:34
Tiny-Prof TIMKA Perl profiling made simple to use. 2024-04-26T07:19:38
Web-Async TEAM Future-based web+HTTP handling 2024-04-23T16:50:24
YAML-Ordered-Conditional LNATION A conditional language within an ordered YAML struct 2024-04-06T06:05:51
kraken PHILIPPE api.kraken.com connector 2024-04-05T09:11:35
papersway SPWHITTON PaperWM-like window management for Sway/i3wm 2024-04-12T07:52:39

Stats

Number of new CPAN distributions this period: 81

Number of authors releasing new CPAN distributions this period: 26

Authors by number of new CPAN distributions this period:

No Author Distributions
1 PERLANCAR 30
2 TYRRMINAL 7
3 LEONT 7
4 SKIM 7
5 LNATION 5
6 MATHIAS 2
7 SPWHITTON 2
8 DJERIUS 2
9 ARISTOTLE 2
10 PHILIPPE 1
11 JFORGET 1
12 VNEALV 1
13 RSPIER 1
14 UXYZAB 1
15 WATERKIP 1
16 GBROWN 1
17 CORION 1
18 EGOR 1
19 TIMKA 1
20 TEAM 1
21 BIGPRESH 1
22 HIGHTOWE 1
23 DAMI 1
24 MYSOCIETY 1
25 JMATES 1
26 GDT 1

What's new on CPAN - March 2024

perl.com

Published on Tuesday 30 April 2024 21:00

Welcome to “What’s new on CPAN”, a curated look at last month’s new CPAN uploads for your reading and programming pleasure. Enjoy!

APIs & Apps

Data

Development & Version Control

Science & Mathematics

Web

TPRC Call for volunteers

Perl Foundation News

Published by Amber Krawczyk on Saturday 27 April 2024 11:36


We hope you are coming to [The Perl and Raku Conference](https://tprc.us/) in Las Vegas, June 24-28! Plans are underway for a wonderful TPRC. But a conference of this type is only possible because of volunteers who give their time and expertise to plan, promote, and execute every detail. We need volunteers!

You may have already volunteered to speak at the conference; if so, wonderful! If you are not presenting (or even if you are), there are many ways to help. We need people to set up and take down, to run the registration desk, to serve as room monitors, to help record the talks, and to just be extra hands. If you can spare some of your time for the sake of the conference, please fill out a volunteer form at https://tprc.us/tprc-2024-las/volunteer/ .

We also welcome spouses and friends of attendees who might be coming along to Las Vegas to share the experience. We are offering TPRC "companion" tickets, for access to the social parts of the conference (food, drink, parties) but not the technical sessions. Volunteers of at least one complete day, who sign up before the conference, will have companion access "comped".

If you have questions about volunteering, please contact our TPRC Volunteer Coordinator: Sarah Gray, sarah.gray@pobox.com