Since each MySQL statement in Perl's DBI is tied to the connection handle it was prepared on:

my $handlerDB1 = DBI->connect(etc ...);
my $handlerDB2 = DBI->connect(etc ...);

I wonder if there's a trick to do a

INSERT INTO tableDB2
SELECT * FROM tableDB1;

And if not, what would be the simplest way to fetch and insert this data?
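If both schemas live on the same MySQL server, one handle is enough: qualify the table names, e.g. INSERT INTO db2.tableDB2 SELECT * FROM db1.tableDB1. Across two servers there is no single-statement trick, and the usual approach is to fetch rows on one handle and insert them on the other. Here is a sketch of that loop (illustrated with DBD::SQLite in-memory databases so it is self-contained; the loop itself is identical with two MySQL handles):

```perl
use strict;
use warnings;
use DBI;

# Two independent handles. DBD::SQLite in-memory databases stand in for
# the two MySQL connections here so the sketch is self-contained; the
# copy loop below is identical with two mysql handles.
my $dbh1 = DBI->connect('dbi:SQLite:dbname=:memory:', '', '', { RaiseError => 1 });
my $dbh2 = DBI->connect('dbi:SQLite:dbname=:memory:', '', '', { RaiseError => 1 });

$dbh1->do('CREATE TABLE tableDB1 (id INTEGER, name TEXT)');
$dbh1->do(q{INSERT INTO tableDB1 VALUES (1, 'a'), (2, 'b')});
$dbh2->do('CREATE TABLE tableDB2 (id INTEGER, name TEXT)');

my $sel = $dbh1->prepare('SELECT * FROM tableDB1');
$sel->execute;

my $ins;
while (my @row = $sel->fetchrow_array) {
    # prepare the INSERT once, with one placeholder per column
    $ins //= $dbh2->prepare(
        'INSERT INTO tableDB2 VALUES (' . join(',', ('?') x @row) . ')'
    );
    $ins->execute(@row);
}

my ($count) = $dbh2->selectrow_array('SELECT COUNT(*) FROM tableDB2');
print "$count rows copied\n";   # 2 rows copied
```

For large tables, wrapping the loop in $dbh2->begin_work / $dbh2->commit batches the inserts into one transaction, which is considerably faster.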

I tried to install w3m on a Red Hat 8 system and got the error below.

# yum install w3m.x86_64
Updating Subscription Management repositories.
misc                                             53 kB/s | 2.0 kB     00:00
chop_vmware                                      44 kB/s | 2.0 kB     00:00
Error:
 Problem: conflicting requests
  - nothing provides perl(NKF) needed by w3m-0.5.3-60.git20230121.el8.x86_64
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
# uname -a
Linux reslndbhisbox05.research.chop.edu 4.18.0-425.10.1.el8_7.x86_64 #1 SMP Wed Dec 14 16:00:01 EST 2022 x86_64 x86_64 x86_64 GNU/Linux

Trying --skip-broken or --nobest as suggested didn't fix the problem. I appreciate any help. Thanks.

I'm trying to parse a file line by line using Perl and split every line into several words. However, some words contain spaces, and I'd like to keep those spaces when splitting the line. What can I do?

Code Example:

open(my $fh, '<', 'myfile.txt') or die "Can't open file: $!";

while (my $line = <$fh>) {

    # Actual input lines from the file; I tested with only the first one:
    #
    # 'Hello, world, "How,are,you ? ", I am fine, thank you'
    # 'Python, "is, a, powerful", programming language, used for, web development'
    # 'Apples, oranges, "bananas, grapes", and cherries, are my favorite fruits'
    # 'I love to read, "novels, biographies, and, science fiction", and I also enjoy, watching movies'
    # 'My favorite colors are, "blue, green, and, purple", but I also like, red and yellow'

    chomp $line;
    my @words = split(",", $line);
    print $words[2];      # this prints ' "How'
}
close($fh);


#Expected Output:

#@array = ('Hello', 'world', '"How,are,you ? "', 'I am fine', 'thank you')

#My output:

#@array = ('Hello', ' world', ' "How', 'are', 'you ? "', ' I am fine', ' thank you')


In the code above, when a line in the file contains a quoted word with embedded delimiters, split() breaks the line on every comma, so the quoted word is torn apart. For instance, the field "How,are,you ? " gets split into ' "How', 'are', 'you ? "'.

What I'm trying to do is divide each line into a series of words while keeping any quoted word intact. In the scenario above, the ideal output would be ('Hello', 'world', '"How,are,you ? "', 'I am fine', 'thank you').

What can I do to modify my code to accomplish this?


Before downvoting, please ask your doubts in the comments. Thanks.
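One way to keep quoted fields intact is the core Text::ParseWords module, which splits on a delimiter while respecting quotes. A sketch (not from the original question) against the sample line:

```perl
use strict;
use warnings;
use Text::ParseWords qw(parse_line);

my $line = 'Hello, world, "How,are,you ? ", I am fine, thank you';

# split on a comma plus any following spaces; the true second argument
# ($keep) preserves the quotes, so the quoted field survives intact
my @words = parse_line(q{,\s*}, 1, $line);

print "$_\n" for @words;
```

This prints the five fields Hello, world, "How,are,you ? ", I am fine, and thank you, one per line. Pass 0 instead of 1 as the second ($keep) argument if you want the surrounding quotes stripped from the field.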

This week in PSC (101)

blogs.perl.org

Published by Perl Steering Council on Saturday 25 March 2023 05:07

Porters,

We had an abbreviated PSC call today, largely due to an unexpected delay.

We discussed offering split-up deprecation categories, so you can say no warnings 'deprecated::xyz', and re-affirmed that we want to do this.

We talked about improving the backcompat of strict-vs-version behavior for use vX where X is older than v5.36 and agreed we'd bring that back, but wanted to discuss more about other related changes to the use-vX code.

Command-line editing

Personal blog of Flavio Poletti (Perl & Raku)

Published on Saturday 25 March 2023 06:00

TL;DR

Ctrl-X Ctrl-E for the win!

So everyone knew except me. I blame you all for this.

I mean, I knew that there was such a thing as Command-line editing, but I thought it was limited to Ctrl-r to search backwards.

To my surprise (why was I surprised?), I eventually discovered Ctrl-X Ctrl-E, which brings the editing part to its extreme: open the editor and leave the heavy lifting to it.

Which is both genius, obvious and infuriating. (I know, both is for two, but it was infuriating anyway 🤬).

As an added bonus, I also discovered Ctrl-X *. If you're wondering, it expands a glob into the corresponding list directly on the command line. It's a bit of a niche use case, but it can be useful when you have a bunch of files you want to work on and you only want to get rid of a couple before running your command. Although, admittedly, I'll probably have forgotten about it by tomorrow 🙄.

So there you go, future me: you will not be able to say I didn’t tell you!

This week in PSC (102)

blogs.perl.org

Published by Perl Steering Council on Friday 24 March 2023 22:49

Topics discussed:

  • Bug reporting by email: we commit to nothing as an organisation. Some would-be submitters of issues don't want to use GitHub. They can just send an email to perl5-porters and hope that some helpful soul will copy-paste to GitHub. We encourage the conventional use of GitHub.
  • On the topic of email bug reports, it's likely time to deprecate perlbug as a way to send bug reports. It could be updated to point the user to GitHub, and to print perl -V output (etc.) to copy and paste. We should also have a better set of templates for issues on GitHub.
  • The segfaults on feature-class and refaliasing won't be fixed before 5.38, and should just be documented as "known bugs"
  • After we discussed renaming RFC to PPC, the general sentiment seemed to be "in favor", so we're going to make the rename
  • Rik volunteered to release 5.38, we still need a release manager for 5.37.11
  • Some discussions about preparing the Perl Toolchain Summit plans (which will host the first ever in-person PSC meeting)

Perl's XML-Sig library adding dsig element to XML tags

Perl questions on StackOverflow

Published by Jorge_Freitas on Friday 24 March 2023 21:16

I'm trying to sign an XML document using Perl's XML::Sig library. I based the code below on the example provided on its MetaCPAN page. Using this code:

use XML::Sig;

my $xml = '<?xml version="1.0"?><foo ID="abc"><bar>123</bar></foo>';

my $pkey = '/path/to/X509/private.key';
my $cert = '/path/to/X509/certificate.pem';


my $signer = XML::Sig->new({
    key         => $pkey,
    cert        => $cert,
    x509        => 1,
    sig_hash    => 'sha256',
    digest_hash => 'sha256'
});

my $signedXml = $signer->sign($xml);

print $signedXml;

It should produce something like this:

<?xml version="1.0"?>
    <foo ID="abc">
        <bar>123</bar>
        <Signature xmlns="http://www.w3.org/2000/09/xmldsig#">
            <SignedInfo xmlns="http://www.w3.org/2000/09/xmldsig#" xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" xmlns:xenc="http://www.w3.org/2001/04/xmlenc#">
              <CanonicalizationMethod Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315#WithComments" />
              <SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1" />
              <Reference URI="#abc">
                <Transforms>
                  <Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature" />
                </Transforms>
                <DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1" />
                <DigestValue>9kpmrvv3peVJpNSTRycrV+jeHVY=</DigestValue>
              </Reference>
            </SignedInfo>
            <SignatureValue>
              HXUBnMgPJf//j4ihaWnaylNwAR5AzDFY83HljFIlLmTqX1w1C72ZTuRObvYve8TNEbVsQlTQkj4R
              hiY0pgIMQUb75GLYFtc+f0YmBZf5rCWY3NWzo432D3ogAvpEzYXEQPmicWe2QozQhybaz9/wrYki
              XiXY+57fqCkf7aT8Bb6G+fn7Aj8gnZFLkmKxwCdyGsIZOIZdQ8MWpeQrifxBR0d8W1Zm6ix21WNv
              ONt575h7VxLKw8BDhNPS0p8CS3hOnSk29stpiDMCHFPxAwrbKVL1kGDLaLZn1q8nNRmH8oFxG15l
              UmS3JXDZAss8gZhU7g9T4XllCqjrAvzPLOFdeQ==
            </SignatureValue>
            <KeyInfo>
              <KeyValue>
                <RSAKeyValue>
                  <Modulus>
            1b+m37u3Xyawh2ArV8txLei251p03CXbkVuWaJu9C8eHy1pu87bcthi+T5WdlCPKD7KGtkKn9vq
            i4BJBZcG/Y10e8KWVlXDLg9gibN5hb0Agae3i1cCJTqqnQ0Ka8w1XABtbxTimS1B0aO1zYW6d+U
            Yl0xIeAOPsGMfWeu1NgLChZQton1/NrJsKwzMaQy1VI8m4gUleit9Z8mbz9bNMshdgYEZ9oC4bH
            n/SnA4FvQl1fjWyTpzL/aWF/bEzS6Qd8IBk7yhcWRJAGdXTWtwiX4mXb4h/2sdrSNvyOsd/shCf
            OSMsf0TX+OdlbH079AsxOwoUjlzjuKdCiFPdU6yAJw==
                  </Modulus>
                  <Exponent>Iw==</Exponent>
                </RSAKeyValue>
              </KeyValue>
            </KeyInfo>
       </Signature>
     </foo>

However, I'm getting this weird output:

<?xml version="1.0"?>
     <foo xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" ID="abc">
         <bar>123</bar>
            <dsig:Signature>
                <dsig:SignedInfo xmlns:xenc="http://www.w3.org/2001/04/xmlenc#">
                    <dsig:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
                        <dsig:SignatureMethod Algorithm="http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"/>
                        <dsig:Reference URI="#abc">
                            <dsig:Transforms>
                                <dsig:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
                                <dsig:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
                            </dsig:Transforms>
                        <dsig:DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256"/>
                            <dsig:DigestValue>FHuo2WkVLRpa7bNfiojdVEHxa0nrhC1/uKoRjnkWJ94=
                            </dsig:DigestValue>
                        </dsig:Reference>
                </dsig:SignedInfo>
                <dsig:SignatureValue>kgp/+lPtErMFmtYIVJ+vpzzABLYOa0UEz6I7/LjHu/X+45vqpfVNZdLMjDzqShITA38LnNe+KuY/ qmXUplBXFQd26dP9opiAlh9rzS8YQ/yiWXcpQKoM2Ssam9c84ELUDy64Dw5NHXJUaFCVfyuwth/m Fju7J3r42KOvzl3YSAduqEVotDmyVx2WGv3/vr9MAkUfSrlc5PtQP9NU3et2lCVp+4B7wWD7vrDd w4Qnz7NYlc1xcbMtcHGNWHqzoWRInJgJG/wawRp8LRgeyxYFUTV7+U6gzbkCIbgy1CKGmfqkoWgJ vVO2kwwu7cKcZ6peFDdVSKriN/EpZOUn7uC2DA==
                </dsig:SignatureValue>
                <dsig:KeyInfo>
                    <dsig:X509Data>
                        <dsig:X509Certificate>MIIIGjCCBgKgAwIBAgIITEy4wqEVR0kwDQYJKoZIhvcNAQELBQAwdDELMAkGA1UE BhMCQlIxEzARBgNVBAoTCklDUC1CcmFzaWwxNjA0BgNVBAsTLVNlY3JldGFyaWEg ZGEgUmVjZWl0YSBGZWRlcmFsIGRvIEJyYXNpbCAtIFJGQjEYMBYGA1UEAxMPQUMg VkFMSUQgUkZCIHY1MB4XDTIyMDgwMzE5MjkwMloXDTIzMDgwMzE5MjkwMlowggEn MQswCQYDVQQGEwJCUjELMAkGA1UECBMCU1AxDzANBgNVBAcTBlNVTUFSRTETMBEG A1UEChMKSUNQLUJyYXNpbDE2MDQGA1UECxMtU2VjcmV0YXJpYSBkYSBSZWNlaXRh IEZlZGVyYWwgZG8gQnJhc2lsIC0gUkZCMRYwFAYDVQQLEw1SRkIgZS1DTlBKIEEx MRYwFAYDVQQLEw1BUiBJTkZPUk1CQU5LMRkwFwYDVQQLExBWaWRlb2NvbmZlcmVu Y2lhMRcwFQYDVQQLEw4xNjY5NjA2MTAwMDE3NTFJMEcGA1UEAxNATUFYSU1VUyBE SVNUUklCVUlET1JBIERFIEVRVUlQQU1FTlRPUyBBVVRPTU9USVZPUzozNDMzMzIz MzAwMDEyODCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKtSXumD8s+I EutiXQuQqSiEXoIUvrsvgtOL4TbVlQTHwO/PXSitGdberioaYiQoC4ZKod48S9iJ 2X8gi+SUsrsvaVpNGEZFAbGHI0RD6NfUbuUZe3Ecq4NxJ7QxJNi34gkRHBrcOemd b8LtOtq6Ny0rujl6wvlNCQPFGGnjHCwzeO/qnGmiGLRedTr/xme2xD8wXwvyy1qR 1sYPzbUKCvAw74usAxN3aOGRwQgiMohsUysD3HcJyjCc5vLMFrgyRJ3aqujUXQj3 kOrfgo+w7fq2YG2RXjH0F7CpcKULXaNqCmmce9A764drrkq36Ty4cAXw3UnKWkw9 ZBk9UEYhItkCAwEAAaOCAvkwggL1MIGcBggrBgEFBQcBAQSBjzCBjDBVBggrBgEF BQcwAoZJaHR0cDovL2ljcC1icmFzaWwudmFsaWRjZXJ0aWZpY2Fkb3JhLmNvbS5i ci9hYy12YWxpZHJmYi9hYy12YWxpZHJmYnY1LnA3YjAzBggrBgEFBQcwAYYnaHR0 cDovL29jc3B2NS52YWxpZGNlcnRpZmljYWRvcmEuY29tLmJyMAkGA1UdEwQCMAAw HwYDVR0jBBgwFoAUU8ul5HVQmUAsvlsVRcm+yzCqicUwcAYDVR0gBGkwZzBlBgZg TAECASUwWzBZBggrBgEFBQcCARZNaHR0cDovL2ljcC1icmFzaWwudmFsaWRjZXJ0 aWZpY2Fkb3JhLmNvbS5ici9hYy12YWxpZHJmYi9kcGMtYWMtdmFsaWRyZmJ2NS5w ZGYwgbYGA1UdHwSBrjCBqzBToFGgT4ZNaHR0cDovL2ljcC1icmFzaWwudmFsaWRj ZXJ0aWZpY2Fkb3JhLmNvbS5ici9hYy12YWxpZHJmYi9sY3ItYWMtdmFsaWRyZmJ2 NS5jcmwwVKBSoFCGTmh0dHA6Ly9pY3AtYnJhc2lsMi52YWxpZGNlcnRpZmljYWRv cmEuY29tLmJyL2FjLXZhbGlkcmZiL2xjci1hYy12YWxpZHJmYnY1LmNybDAOBgNV HQ8BAf8EBAMCBeAwHQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMEMIHNBgNV HREEgcUwgcKBIGZyYW5jaXNjby5iYXJjZWxsb3NhZHZAZ21haWwuY29toDgGBWBM AQMEoC8ELTIxMTIxOTY2MDg0NzUxMDY4OTcwMDAwMDAwMDAwMDAwMDAwMDAwMDAw 
MDAwMKAwBgVgTAEDAqAnBCVGUkFOQ0lTQ08gQ0FSTE9TIERPUyBTQU5UT1MgQkFS Q0VMTE9ToBkGBWBMAQMDoBAEDjM0MzMzMjMzMDAwMTI4oBcGBWBMAQMHoA4EDDAw MDAwMDAwMDAwMDANBgkqhkiG9w0BAQsFAAOCAgEAEGcTeHal+ZqMse5zRXNDVO2T AAEWdOEfF4ssR0H67+R0KFy0RJ6NQa7QwnwHwaymw3TWd+lcRwEmp//6+v0xmBYm 7SiHjgsvVVZdC9WIN51BhskXZJltX+g8uTv1BafZ93YcEJiDJlj0Ng3GSwMOPg80 FXqjtAml5uas8KV2jI5r5Pm3gM2F+tbGFB8ybb4HgQfdvtUexgN/gTT+Mp4y9UsQ znuIu1XBNQHWSaYT1BKLh/cAdrE5hNwdwYkSuvHeiA20ak2m4fmdH8rhMsWHOtua kUh6fpoQCJXjJr5wxMD40boFSQQHdEasSRBPKOXNIbmmRWBVE9RhV+xxIwPDuZY+ E8W3H0mdOtzgXLgj/9y31jMNiQCymZ6MUxUJfwqIecOQsCVunREOPFD/u8fuIX+3 gpbcQudVTtD0duZMO5Uqa2z6hzMXuLR1/og4TnZoKJ4HrFijt5vrNKxIiJKDd9zz qOqFfvPOQ8AaBxmYfYzcn7V4NYSrU7wliGl92vdD1ZUC2HsgNzwAjEbI+Je+yWSM ria3Vr+f3X3qvoiARpZ7AuhpvUpVCdl0Khb7FlqFqfjebUZvvqqu2aREPmdPsagT DIu4BesHL8JIUs8nKEEqApYTHQukxMSq5Ixind9ZY8eaB69PqVO5a0qqfQxvOUwA AoF8N+KW4C8r5OvN6Mw= 
                    </dsig:X509Certificate>
                </dsig:X509Data>
            </dsig:KeyInfo>
        </dsig:Signature>
     </foo>

The foo tag is being marked with a dsig namespace declaration, and every other element references it afterwards. This is causing problems because it alters the structure of the document I'm expected to sign, making it fail its structure validation. Why is this happening? And how can I remove this dsig prefix?

Edit: Corrected typos in the example code

Perl Async/Await example

Perl questions on StackOverflow

Published by Arsenii on Friday 24 March 2023 18:56

I am trying to get an idea of how to implement async (parallel) function calls in Perl (in one of my Mojolicious controllers, to process lots of data sets).

Here is what I have (a simple example):

use Future::AsyncAwait;

async sub asyncSub{
    
    async sub funcA{
        my $num = shift;
        print "This is $num (START)\n";
        sleep 1;
        print "This is $num (END)\n";
    };

    funcA(1);
    funcA(2);
    funcA(4);
    funcA(5);
    funcA(6);
    funcA(7);
    funcA(8);
    funcA(9);
    funcA(10);

}

asyncSub();

This code prints out:

This is 1 (START)
This is 1 (END)
This is 2 (START)
This is 2 (END)
This is 4 (START)
This is 4 (END)
This is 5 (START)
This is 5 (END)
This is 6 (START)
This is 6 (END)
This is 7 (START)
This is 7 (END)
This is 8 (START)
This is 8 (END)
This is 9 (START)
This is 9 (END)
This is 10 (START)
This is 10 (END)

It always works synchronously.

Thank you in advance for your help.
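For reference, a sketch of how the calls above can be made to actually overlap. Two changes are needed: the blocking sleep must become a non-blocking delay that is awaited, and all the futures must be started before any of them is waited on. This version assumes IO::Async is installed (it is not part of the original snippet):

```perl
use strict;
use warnings;
use Future;
use Future::AsyncAwait;
use IO::Async::Loop;

my $loop = IO::Async::Loop->new;
my @log;   # record the order of events to show the interleaving

async sub task {
    my ($num) = @_;
    push @log, "$num START";
    await $loop->delay_future( after => 0.1 * $num );   # non-blocking "sleep"
    push @log, "$num END";
}

# start every task first, then wait for all of them at once
my @futures = map { task($_) } 1 .. 3;
Future->wait_all(@futures)->get;

print "$_\n" for @log;
```

Because each task runs up to its first await before the event loop takes over, all the START lines print before any END line, showing the tasks really do run concurrently.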

TL;DR

On with TASK #2 from The Weekly Challenge #209. Enjoy!

The challenge

You are given an array of accounts, i.e. a name with a list of email addresses.

Write a script to merge the accounts where possible. The accounts can only be merged if they have at least one email address in common.

Example 1:

Input: @accounts = [ ["A", "a1@a.com", "a2@a.com"],
                     ["B", "b1@b.com"],
                     ["A", "a3@a.com", "a1@a.com"] ]

Output: [ ["A", "a1@a.com", "a2@a.com", "a3@a.com"],
          ["B", "b1@b.com"] ]

Example 2:

Input: @accounts = [ ["A", "a1@a.com", "a2@a.com"],
                     ["B", "b1@b.com"],
                     ["A", "a3@a.com"],
                     ["B"m "b2@b.com", "b1@b.com"] ]

Output: [ ["A", "a1@a.com", "a2@a.com"],
          ["A", "a3@a.com"],
          ["B", "b1@b.com", "b2@b.com"] ]

The questions

Should the merging be "stable"? I mean, should we preserve as much as possible the order of appearance of the different groups? It seems not, because in the second example the two "A" groups both appear before the "B" group, despite a "B" group appearing between them.

Which begs a related question: maybe it's some kind of "stable", but moving forward instead of keeping things backwards? I'm digressing.

Another question relates to the order of the addresses. The inputs are arranged in arrays, which seems to imply that order might be important. On the other hand, these arrays contain semantically different data (a name, addresses), so maybe it's more like a tuple and order does not matter. I'll assume the latter.

The solution

The solution in Perl is somewhat intentionally long and complicated. I took the challenge of producing a stable result, i.e. trying to preserve the order of appearance of addresses if possible. Additionally, I tried to minimize the copying and duplication and iteration and whatnot, in the pure evil spirit of premature optimization.

Addresses are iterated over and amassed in "groups" by name. Each group contains all disjoint addresses belonging to that name, trying to pack them as much as possible while we do the input's sweep. If we can merge, we merge and move on to see if additional merging is possible (because previous addresses A and B might be disjoint, but both joined with later address C).

#!/usr/bin/env perl
use v5.24;
use warnings;
use experimental 'signatures';

use constant TRUE  => (!0);
use constant FALSE => (!!0);

my @accounts = (
   ['A', 'a1@a.com', 'a2@a.com'],
   ['B', 'b1@b.com'],
   ['A', 'a3@a.com', 'a4@a.com'],
   ['B', 'b2@b.com', 'b1@b.com'],
   ['A', 'a8@a.com'],
   ['A', 'a3@a.com', 'a2@a.com'],
);

for my $merged (merge_accounts(\@accounts)->@*) {
   say '[', join(', ', map { +"'$_'"} $merged->@* ), ']';
}

sub hashes_intersect ($h1, $h2) {
   my $n1 = scalar(keys($h1->%*));
   my $n2 = scalar(keys($h2->%*));
   ($h1, $h2) = ($h2, $h1) if $n1 > $n2;

   # now $h1 has *at most* as many elements as $h2, it's beneficial to
   # iterate over it
   for my $key (keys $h1->%*) {
      return TRUE if exists $h2->{$key};
   }
   return FALSE;
}

sub merge_accounts ($aref) {
   my %alternatives_for;  # track each name separately
   my %group_for;         # track aggregated groups by order of appearance
   for my $i (0 .. $aref->$#*) {
      my ($name, @addresses) = $aref->[$i]->@*;
      $group_for{$i} = my $new = {
         i => $i,
         name => $name,
         addresses => { map { $_ => 1 } @addresses },
      };

      # Add this group like it's detached
      my $all_groups = $alternatives_for{$name} //= [];
      push $all_groups->@*, $new;

      # sweep back to merge when necessary
      my $challenger = $all_groups->$#*;
      my $resistant = $challenger - 1;
      my $last_wiped;
      while ($resistant >= 0) {
         my $cas = $all_groups->[$challenger]{addresses};
         my $ras = $all_groups->[$resistant]{addresses};
         if (hashes_intersect($cas, $ras)) {
            $ras->%* = ($ras->%*, $cas->%*);     # merge

            ($last_wiped, $challenger) = ($challenger, $resistant);
            delete $group_for{$all_groups->[$last_wiped]{i}};
            $all_groups->[$last_wiped] = undef;
         }
         --$resistant;
      }

      # sweep ahead to remove wiped out stuff, if necessary
      if (defined($last_wiped)) {
         my $marker = my $cursor = $last_wiped;
         while (++$cursor < $all_groups->$#*) {
            next if defined($all_groups->[$cursor]);
            $all_groups->[$marker++] = $all_groups->[$cursor];
         }
         splice $all_groups->@*, $marker if $marker < $all_groups->@*;
      }
   }

   my @accounts = map {
      my $group = $group_for{$_};
      [ $group->{name}, sort { $a cmp $b } keys $group->{addresses}->%* ];
   } sort { $a <=> $b } keys %group_for;

   return \@accounts;
}

For contrast, in the Raku implementation I chose to ditch the stability and opted for some copying of data around, which I think improves readability and maintainability. Otherwise, the approach is pretty much the same: sweep and merge, keeping disjoint addresses.

#!/usr/bin/env raku
use v6;
sub MAIN {
   my @accounts =
      ['A', 'a1@a.com', 'a2@a.com'],
      ['B', 'b1@b.com'],
      ['A', 'a3@a.com', 'a4@a.com'],
      ['B', 'b2@b.com', 'b1@b.com'],
      ['A', 'a8@a.com'],
      ['A', 'a3@a.com', 'a2@a.com'],
   ;

   for merge-accounts(@accounts) -> $merged {
      put '[', $merged.map({"'$_'"}).join(', '), ']';
   }
}

sub merge-accounts (@accounts) {
   my %alternatives_for;
   for @accounts -> $account {
      my ($name, @addresses) = @$account;
      my $new = { name => $name, addresses => @addresses.Set };

      my @disjoint;
      my $all = %alternatives_for{$name} //= [];
      for @$all -> $candidate {
         if ($new<addresses> ∩ $candidate<addresses>) { # merge
            $new<addresses> = (
               $new<addresses>.keys.Slip,
               $candidate<addresses>.keys.Slip
            ).Set;
         }
         else {
            @disjoint.push: $candidate;
         }
      }
      @disjoint.push: $new;
      %alternatives_for{$name} = @disjoint;
   }
   return %alternatives_for.values».Slip.flat
      .map({[ $_<name>, $_<addresses>.keys.Slip ]})
      .Array;
}

All in all, this challenge was a bit more… challenging than the average for me. All of this, of course, thanks to manwar!

Stay safe!

report uninit variable name even in optimized cases

Perl commits on GitHub

Published by mauke on Thursday 23 March 2023 23:55

report uninit variable name even in optimized cases

Several cases that used to be simple assignment ops with lexical
variables have been optimized in some way:

 - $foo = undef is now a single OP_UNDEF with special flags
 - $foo = ... is now a single OP_PADSV_STORE
 - $foo[0] = ... is now a single OP_AELEMFASTLEX_STORE

This is mostly transparent to users, except for "Use of uninitialized
value" warnings, which previously mentioned the name of the undefined
variable, but don't do so anymore in blead.

This commit teaches find_uninit_var() about the new ops, so error
messages for these ops can mention variable names again.

Fixes #20945.
[doc] clarify behavior of File::Copy::cp re: permissions

In particular, explain that "preserve permission bits like cp" means the
default behavior of /bin/cp, not "cp -p" or "cp --preserve", so the
umask still applies.

don't set a special filetype for generated .gitignore

Perl commits on GitHub

Published by mauke on Thursday 23 March 2023 22:10

don't set a special filetype for generated .gitignore

Previously it would default to Perl, which happens to produce the right
comment character ("#"), but results in nonsensical syntax highlighting.
Now we set $lang to the special value 'None', which still produces
read-only declarations, but doesn't force a mode/filetype on editors.
fix incorrect vi filetype declarations in generated files

Vim's filetype declarations are case sensitive. The correct types for
Perl, C, and Pod are perl, c, and pod, respectively.

An Interesting Perl Pattern That Doesn't Work

dev.to #perl

Published by Nicholas Hubbard on Thursday 23 March 2023 18:24

I recently came up with a pattern that is supposed to use a closure to protect a configuration hash from being mutated by its callers. Unfortunately this pattern has a terrible flaw.

{
    my %config;

    sub config {
        return %config if %config;
        %config = create_config();
        return %config;
    }
}

The config() subroutine is a lexical closure over the %config hash. No other code in a program would be able to access the %config variable, as everything is defined in its own block.

As an aside, this pattern could just as easily be written with a state variable, but I find it harder to explain the pattern when done this way.
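For completeness, the state-variable version of the pattern might look like the sketch below (the $builds counter is hypothetical, added only to show that create_config() runs exactly once):

```perl
use strict;
use warnings;
use feature 'state';

our $builds = 0;   # counts how many times the config is built

sub create_config { $builds++; return (foo => 12) }

sub config {
    state %config;                        # persists between calls
    %config = create_config() unless %config;
    return %config;
}

my %c1 = config();   # builds the config
my %c2 = config();   # just returns it
print "foo = $c2{foo}, builds = $builds\n";   # foo = 12, builds = 1
```

It has exactly the same flaw discussed below, since state changes where %config lives, not how its values are copied.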

Blocks

To understand this pattern we first have to understand how blocks work. Here is a code example:

{
    my $var = 'foo';
    print "from inside the block where \$var is defined \$var = $var\n";

    {
        print "from the most inner block \$var = $var\n";
    }
}

print "from outside the block \$var = $var\n";

__END__

$ perl tmp.pl
from inside the block where $var is defined $var = foo
from the most inner block $var = foo
from outside the block $var = 

This code shows that a block introduces a lexical scope. A variable defined in a lexical scope is only available in its defining scope and in scopes nested inside it. The program output shows that the $var variable is available in the scope it is defined in, and from the scope nested in its defining scope. However, outside of its defining scope, $var is not defined.

If we turn on strict we get a fatal compilation error for trying to use $var from outside its defining scope:

Global symbol "$var" requires explicit package name (did you forget to declare "my $var"?) at tmp.pl line 14.
Execution of tmp.pl aborted due to compilation errors.

You may be wondering if the config() subroutine is accessible from outside the block it was defined in. It is indeed, because subroutine declarations are always global to the current package. This code example demonstrates that:

use strict;

{
    sub foo {
        print "hello from foo!\n";
    }
}

foo();

__END__

$ perl tmp.pl
hello from foo!

Closures

Now that we understand blocks we can understand closures. In Perl, a closure is a subroutine that has access to the lexical environment that it was defined in. Here is the classic example:

use strict;

{
    my $n = 0;        

    sub increment {
        $n += 1;
        return $n;
    }

    sub decrement {
        $n -= 1;
        return $n;
    }
}

print 'increment() -> ', increment(), "\n";
print 'increment() -> ', increment(), "\n";
print 'decrement() -> ', decrement(), "\n";
print 'increment() -> ', increment(), "\n";
print 'increment() -> ', increment(), "\n";
print 'decrement() -> ', decrement(), "\n";

__END__

$ perl tmp.pl
increment() -> 1
increment() -> 2
decrement() -> 1
increment() -> 2
increment() -> 3
decrement() -> 2

The increment() and decrement() subroutines are both able to access the $n variable, though no other subroutines in a larger program would be able to, due to the outer block. For this reason the increment() and decrement() subroutines are closures over $n.

The Problem

We should now have all the knowledge needed to understand the pattern that this article is about.

The idea of the pattern is that if the %config variable has already been set then we just return it, and otherwise we set its value before returning it. This means that %config will only be set the first time that we call config(), and on all subsequent calls it will simply be returned. Therefore config() can be thought of as a constant function ... right?

Here is a code example where our pattern works as expected:

use strict;

{
    my %config;

    sub config {
        return %config if %config;
        %config = create_config();
        return %config;
    }
}

sub create_config {
    print "hello from create_config()\n";
    return (foo => 12);
}

my %config1 = config();

$config1{foo} = 1004;

my %config2 = config();

print "%config2's foo key = $config2{foo}\n";

__END__

$ perl tmp.pl
hello from create_config()
%config2's foo key = 12

This output displays a couple of important points. First, we know that the create_config() subroutine was only invoked a single time, even though we invoked config() twice. We know this because the "hello from create_config()" message is only printed a single time. The other important thing to note is that because we got the output "%config2's foo key = 12", we know that our modification of %config1's foo key (which we set to 1004) did not affect the %config variable that our config() subroutine closes over. If it had, then %config2's foo key would be 1004.

So what is the problem? Well ... everything falls apart when the %config variable is set to a multi-dimensional data structure. The following code encapsulates the fundamental problem with our pattern:

use strict;

{
    my %config;

    sub config {
        return %config if %config;
        %config = create_config();
        return %config;
    }
}

sub create_config {
    return (foo => [1, 2, 3]);
}

my %config1 = config();

$config1{foo}->[0] = 1004;

my %config2 = config();

print "%config2's foo key = [", join(', ', @{$config2{foo}}), "]\n";

__END__

$ perl tmp.pl
%config2's foo key = [1004, 2, 3]

Uh oh! We were able to mutate the %config variable that config() closes over, which means that config() is not actually a constant function. Now we come to the fundamental problem of our pattern. Because multi-dimensional data structures in Perl are made up of references, and Perl does not perform deep-copying by default, we are able to mutate the underlying references of multi-dimensional data structures.

Here is a code example that shows that Perl does not perform deep-copying:

use strict;

my @array1 = ([1, 2, 3], [4, 5, 6], [7, 8, 9]);

print '@array1 contents:', "\n";
for my $elem (@array1) {
    print "    $elem\n";
}

# copy @array1 to @array2
my @array2 = @array1;

print '@array2 contents:', "\n";
for my $elem (@array2) {
       print "    $elem\n";
}

__END__

$ perl tmp.pl
@array1 contents:
    ARRAY(0x13875e8)
    ARRAY(0x13d1ef8)
    ARRAY(0x13d1fe8)
@array2 contents:
    ARRAY(0x13875e8)
    ARRAY(0x13d1ef8)
    ARRAY(0x13d1fe8)

In this program's output we can see that @array1 and @array2 contain the exact same references, which means that Perl does not perform deep-copying. If Perl did perform deep-copying, then when we copied @array1 into @array2, Perl would have made (recursive) copies of all the references in @array1 into new references. Perl's lack of deep-copying is the fundamental flaw of our pattern, as it means that we can modify %config's references from its copies that are returned by config().

Solutions

There are many ways we can solve this problem. First, we could use lock_hash_recurse from the core Hash::Util module to lock %config. After locking %config, we would get an error if we tried to mutate any of its values.
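A minimal sketch of that first approach (note that, per the Hash::Util documentation, the recursion descends into nested hashes; a value holding an array reference becomes read-only as a scalar, but its contents can still be changed):

```perl
use strict;
use warnings;
use Hash::Util qw(lock_hash_recurse);

my %config = (foo => 12, nested => { bar => 34 });
lock_hash_recurse(%config);

# reading still works fine
print "$config{nested}{bar}\n";   # 34

# writing to the locked hash dies, so we trap it with eval
my $blocked = !eval { $config{foo} = 99; 1 };
print $blocked ? "mutation blocked\n" : "mutation allowed\n";   # mutation blocked
```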

We could also use Const::Fast from CPAN to make %config an actual read-only hash. Similarly to locking the hash, we would get an error if we tried to mutate %config.

Finally, we could use Clone from CPAN to return a deep-copy of %config from the config() subroutine. Unlike the other solutions, our code could freely modify copies of %config without getting any errors, but these modifications would not affect the actual %config.
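The deep-copy approach can also be done without leaving core, using Storable's dclone (which behaves like Clone for this purpose). A sketch:

```perl
use strict;
use warnings;
use Storable qw(dclone);

{
    my %config;

    sub config {
        %config = create_config() unless %config;
        # hand back a deep copy, so callers never see %config's references
        return %{ dclone(\%config) };
    }
}

sub create_config { return (foo => [1, 2, 3]) }

my %config1 = config();
$config1{foo}->[0] = 1004;      # mutates the copy only

my %config2 = config();
print "foo = [@{ $config2{foo} }]\n";   # foo = [1 2 3]
```

The trade-off is a full copy on every call, which may matter if config() sits in a hot path.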

perlio.c: remove silly casts

Perl commits on GitHub

Published by mauke on Thursday 23 March 2023 10:39

perlio.c: remove silly casts

TL;DR

Here we are with TASK #1 from The Weekly Challenge #209. Enjoy!

The challenge

You are given an array of binary bits that ends with 0.

Valid sequences in the bit string are:

[0] -decodes-to-> "a"
[1, 0] -> "b"
[1, 1] -> "c"

Write a script to print 1 if the last character is an "a", otherwise print 0.

Example 1

Input: @bits = (1, 0, 0)
Output: 1

The given array bits can be decoded as 2-bits character (10) followed
by 1-bit character (0).

Example 2

Input: @bits = (1, 1, 1, 0)
Output: 0

Possible decode can be 2-bits character (11) followed by 2-bits
character (10) i.e. the last character is not 1-bit character.

The questions

No questions! Well, maybe one: why an array of bits, and not a string or some other numeric value to scan?

The solution

My question was not only to bug our fine host manwar, but also because I found it easy to address this challenge with a regular expression:

#!/usr/bin/env raku
use v6;
sub MAIN ($bits) { put special-bits-characters($bits.comb.Array) }

sub special-bits-characters ($bits) {
   $bits.join('') ~~ m{^ [ 1 <[ 0 1 ]> | 0 ]* 0 $} ?? 1 !! 0;
}

So I have an array as the official interface of my function, but it comes from a string and I turn it into a string as soon as it enters the function. And I'm skeptical about cryptocurrencies!

Perl is pretty much the same, even though I admit to being able to read regular expressions with a bit more ease:

#!/usr/bin/env perl
use v5.24;
use warnings;
use experimental 'signatures';

my @bits = map { split m{}mxs } @ARGV;
say special_bits_characters(\@bits);

sub special_bits_characters ($bits) {
   join('', $bits->@*) =~ m{\A (?: 1[01] | 0 )* 0 \z}mxs ? 1 : 0;
}

As I’m lazy and I don’t want to explain what the regular expression does, I’ll let YAPE::Regex::Explain do the heavy lifting, with just a bit of editing for clarity on my side:


The regular expression:

(?msx: \A (?: 1[01] | 0 )* 0 \z )

matches as follows:
  
NODE                     EXPLANATION
----------------------------------------------------------------------
(?msx:                   group, but do not capture (with ^ and $      
                         matching start and end of line) (with .      
                         matching \n) (disregarding whitespace and    
                         comments) (case-sensitive):                  
----------------------------------------------------------------------
  \A                       the beginning of the string                
----------------------------------------------------------------------
  (?:                      group, but do not capture (0 or more times      
                           (matching the most amount possible)):
----------------------------------------------------------------------
    1                        '1'                                      
----------------------------------------------------------------------
    [01]                     any character of: '0', '1'               
----------------------------------------------------------------------
   |                        OR                                        
----------------------------------------------------------------------
    0                        '0'                                      
----------------------------------------------------------------------
  )*                       end of grouping                            
----------------------------------------------------------------------
  0                        '0'                                        
----------------------------------------------------------------------
  \z                       the end of the string                      
----------------------------------------------------------------------
)                        end of grouping                              
----------------------------------------------------------------------

Perl is wonderful (well.. Raku too!)

Stay safe!

Now Accepting: TPRC Proposals for 2024

Perl Foundation News

Published by Amber Deuel on Wednesday 22 March 2023 15:23


It’s time to plan TPRC 2024. If you are interested in organizing the event in your city, please submit a proposal by April 28th.

Please view the information, along with a sample RFP you can send to possible vendors here. When ready to submit your proposal, please use the same form.

If you have any questions, please contact us at hello@perlfoundation.org .

Text::Extract::Word, MsOffice::Word::Surgeon - Weekly Travelling in CPAN

blogs.perl.org

Published by C.-Y. Fung on Wednesday 22 March 2023 12:44

Destination: Text::Extract::Word

Date of Latest Release: Mar 09, 2012
Distribution: Text::Extract::Word
Module version: 0.02
Main Contributors: Stuart Watt (SNKWATT)
License: The Artistic License 2.0

Date of Latest Release: Jan 26, 2023
Distribution: MsOffice::Word::Surgeon
Module version: 2.01
Main Contributors: Laurent Dami (DAMI)
License: The Artistic License 2.0

Notice

Thanks to comments from demerphq and Sebastian Schleussner, we should visit a better and easier way to do regex matching in Perl. It is not Regexp::Assemble, introduced two weeks ago: since version 5.10, the Perl compiler has already done the optimization for us! But we have to use the correct syntax. To see what happens, the bottom of this post repeats the comparison of regexes for Roman numerals, with two more players in the race: /^($r_str_combine)$/ [given my $r_str_combine = join "|", @roman;] and $rx = sprintf qr/^(?:%s)$/, join "|", @roman. We can see the former has almost the same performance as Regexp::Assemble, while the latter is usually the fastest. We do not need to import any modules to get a fast result!
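The plain-alternation pattern mentioned above can be sketched in a few lines (the word list is invented for illustration):

```perl
use strict;
use warnings;

my @words = qw(foo bar baz);
my $alt   = join '|', @words;
my $rx    = qr/^(?:$alt)$/;   # compiled once, reused for every match

my $hit  = ('bar'  =~ $rx) ? 1 : 0;   # exact member of the list
my $miss = ('barn' =~ $rx) ? 1 : 0;   # anchors reject partial matches
print "$hit $miss\n";   # 1 0
```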


Today let us have a short tour of two modules which perform a similar function: text searching in MS Office Word documents.

Well, we know there are two common formats of MS Office Word documents: .docx and the earlier .doc.

Text::Extract::Word deals with .doc. You can use the function get_all_text to get all text (I'm a verbose tour guide...), or use its object-oriented interface, which gets the text from different locations of a document:

# taken from synopsis of the module:
my $file = Text::Extract::Word->new("test1.doc");
my $text = $file->get_text();
my $body = $file->get_body();
my $footnotes = $file->get_footnotes();
my $headers = $file->get_headers();
my $annotations = $file->get_annotations();
my $bookmarks = $file->get_bookmarks();


MsOffice::Word::Surgeon deals with .docx. Besides functionality to extract text, you can also replace text via regular expressions and write a new .docx file.


Here comes a use case of the two modules. I was maintaining a collection of curricula vitae and a database of candidate details, but due to a coding bug, some of the CVs were missing or overwritten by other candidates' CVs. The CVs are in .doc, .docx or .pdf format, and have filenames of the form CV[number].[extension]. I used Text::Extract::Word and MsOffice::Word::Surgeon to check the MS Office documents.

Note that in the database, since I am physically located in Hong Kong, each candidate name is stored with Chinese characters and English letters. Names of some people consist of only two Chinese characters (some, like me, have 3; some have 4: two characters for the surname, two for the given name), so I chose to take the first two English words in the candidate name field and checked whether these two words are in the corresponding document. The ID of each suspicious record will be printed.

Script one:

use utf8;
use DBI;
use Text::Extract::Word qw(get_all_text);
use v5.30.0;

for my $num (1..999) {
    search_cv($num) if -e "CV$num.doc" || -e "CV$num.DOC";
}

sub search_cv {
    my $cv_id    = $_[0];
    my $filename = "CV$cv_id.doc";

    my $dsn = "DBI:mysql:database=cvdb;host=127.0.0.1";
    my $dbh = DBI->connect($dsn, 'mydatabaseadmin', 'mypassword', { mysql_enable_utf8 => 1 });

    my $fullname;
    # use a placeholder instead of interpolating $cv_id into the SQL
    my $sth = $dbh->prepare("SELECT name FROM candidate WHERE id=?");
    $sth->execute($cv_id);
    while (my $ref = $sth->fetchrow_hashref) {
        $fullname = $ref->{'name'};
    }

    # strip non-ASCII (Chinese) characters, keeping the English name
    $fullname =~ s/[^[:ascii:]]//g;

    my $engname = $fullname;

    my $second_space = index($engname, " ", index($engname, " ") + 1);

    my $first_two_eng = ($second_space != -1)
        ? substr($engname, 0, $second_space)
        : $engname;

    my $text  = get_all_text($filename);
    my $found = index($text, $first_two_eng);

    if ($found != -1) {
        # say "found: " . $cv_id;
    }
    else {
        say "SUSPICIOUS: " . $cv_id;
    }
}

Script two


use utf8;
use DBI;
use MsOffice::Word::Surgeon;
use v5.30.0;

for my $num (1..999) {
    search_cv($num) if -e "CV$num.docx";
}

sub search_cv {
    my $cv_id    = $_[0];
    my $filename = "CV$cv_id.docx";

    my $dsn = "DBI:mysql:database=cvdb;host=127.0.0.1";
    my $dbh = DBI->connect($dsn, 'mydatabaseadmin', 'mypassword', { mysql_enable_utf8 => 1 });

    my $fullname;
    # use a placeholder instead of interpolating $cv_id into the SQL
    my $sth = $dbh->prepare("SELECT name FROM candidate WHERE id=?");
    $sth->execute($cv_id);
    while (my $ref = $sth->fetchrow_hashref) {
        $fullname = $ref->{'name'};
    }

    # strip non-ASCII (Chinese) characters, keeping the English name
    $fullname =~ s/[^[:ascii:]]//g;

    my $engname = $fullname;

    my $second_space = index($engname, " ", index($engname, " ") + 1);

    my $first_two_eng = ($second_space != -1)
        ? substr($engname, 0, $second_space)
        : $engname;

    my $surgeon = MsOffice::Word::Surgeon->new(docx => $filename);
    my $text    = $surgeon->document->plain_text;
    my $found   = index($text, $first_two_eng);

    if ($found != -1) {
        # say "found: " . $cv_id;
    }
    else {
        say "SUSPICIOUS: " . $cv_id;
    }
}

The above is just a very straightforward use case of these two modules. You may explore their POD and use them to suit your needs!

# For PDF text search, I will introduce modules later.

THE HIGHLIGHTED PERL MODULES OF WEEK 12 OF 2023:
Text::Extract::Word, MsOffice::Word::Surgeon

Comparison of Regexes for Roman Numerals


use v5.30.0;
use List::Util qw/shuffle sample any/;
use Regexp::Assemble;
use Regexp::Trie;
use feature 'say';

my @roman = qw/I II III IV V
VI VII VIII IX X
XI XII XIII XIV XV
XVI XVII XVIII IXX XX/;

sub repr {
    return sample int 4*rand(),
        shuffle('I' x (int 4*rand()), 'V', 'X');
}

my $size = 1000;

sub c0 {
    my $count = 0;
    for (1..$size) {
        my $letters = repr();
        $count++ if any {$letters =~ /^$_$/} @roman;
    }
    return $count;
}

my $ra = Regexp::Assemble->new;
$ra->anchor_line;
$ra->add(@roman);
my $ra_computed = $ra->re;

sub c1 {
    my $count = 0;
    for (1..$size) {
        $count++ if repr() =~ $ra_computed;
    }
    return $count;
}

my $rt = Regexp::Trie->new;
$rt->add($_) for @roman;
my $rt_computed = $rt->regexp;

sub c2 {
    my $count = 0;
    for (1..$size) {
        $count++ if repr() =~ /^$rt_computed$/;
    }
    return $count;
}

my $r_str_combine = join "|", @roman;

sub cn {
    my $count = 0;
    for (1..$size) {
        $count++ if repr() =~ /^($r_str_combine)$/;
    }
    return $count;
}

my $rx = sprintf qr/^(?:%s)$/, join "|", @roman;

sub cx {
    my $count = 0;
    for (1..$size) {
        $count++ if repr() =~ $rx;
    }
    return $count;
}

say c0()/$size;
say c1()/$size;
say c2()/$size;
say cn()/$size;
say cx()/$size;

use Benchmark q/cmpthese/;
cmpthese(10_000, {
    RAW      => sub {c0},
    Assemble => sub {c1},
    Trie     => sub {c2},
    naive    => sub {cn},
    QR       => sub {cx},
});


Result:

0.705
0.691
0.68
0.681
0.708
Rate RAW Trie naive Assemble QR
RAW 42.7/s -- -94% -94% -94% -95%
Trie 669/s 1468% -- -6% -7% -23%
naive 711/s 1565% 6% -- -2% -18%
Assemble 724/s 1595% 8% 2% -- -17%
QR 867/s 1932% 30% 22% 20% --

AutoHotKey

Personal blog of Flavio Poletti (Perl & Raku)

Published on Wednesday 22 March 2023 06:00

TL;DR

It beats me that I had to use AutoHotKey for two simple keys.

I’m Italian and I use the Italian keyboard layout. No big deal.

Up to some Windowses ago, I had to build my own custom keyboard layout because, as it appears, neither the backtick nor the tilde are anywhere to be found on the keyboard.

No, I don’t consider typing the code on the numeric pad anything that is meaningful to consider. Especially on a laptop keyboard without a separate numeric pad.

As much as it already left me extremely dubious about the care that Microsoft might have for foreigners, it went even beyond this. With Windows 11 (Home edition, at least) it seems that I’m neither able to install the keyboard layout generator, nor able to install a previously cooked layout.

So much for Windows Subsystem for Linux, right?

I looked around and found AutoHotKey, which is a big cannon to fire at a mosquito, but still it’s the only thing that seems to solve my problem. By the way, these are the mappings that I added and work for me:

#Requires AutoHotkey v2.0
<^>!'::Send "``"
<^>!vkDD::Send "~"

I took inspiration from this gist, which in a turn of events didn’t work right off the bat for me. I don’t know if it’s because of the AutoHotKey version, or because quotation characters got lost in producing the gist, or just because everything seems to necessarily be cumbersome and frustrating when dealing with such basic needs.

My last thought is for the people who accept to type the Alt-whatever combination on the numeric pad: there’s a better way and you are encouraged to not settle with such nonsense.

Rant concluded, stay safe folks!

Fixing code and documentation, setting up CI - Sending Pull-Requests

dev.to #perl

Published by Gabor Szabo on Tuesday 21 March 2023 20:26

This was the 6th session of the Open Source Developer Course (OSDC) that I started to teach at the end of January. (We have a weekly session, but we had to skip several weeks in the middle.)

This time instead of having a grand presentation we worked on a Perl library (module) that was failing its tests. Here are my notes about what we have accomplished during the session.

The changes we made were not huge, but I believe a little improvement every day by many people can lead to a lot of value to the whole open source ecosystem and through that to the world.

Pull-Requests

The workflow

During the session we first cloned the already forked repository, but I think it is better to clone the original repository and add the forked repository using git remote add fork URL so this is what I described here.

  • Clone the original repository
git clone git@github.com:reneeb/Types-RENEEB.git
  • Create a fork of the repository via GitHub UI.

  • Setup a new git remote to point to the forked repository:

git remote add fork git@github.com:yewtc/Types-RENEEB.git
  • Create a branch to prepare a pull-request
git checkout -b BRANCH
  make changes
git add .
git commit -m "..."
git push --set-upstream fork BRANCH
  • Send the pull-request from the GitHub UI

  • Integrate the progress of original repository to our local clone

git checkout master
git pull origin master
  • Once the Pull-request was accepted we could delete the branch locally and remotely
git branch -d BRANCH
git push -d origin BRANCH
  • We created a test coverage report of the Types-RENEEB package. It was not too interesting, as everything was at 100%.
cpanm Dist::Zilla::App::Command::cover

dzil cover

Another Pull-Request

Plans

In the upcoming sessions we plan to work on more projects, set up local development environments, and configure GitHub Actions workflows for projects that require a database or access to some external system.

We will also find packages where the test coverage report is not 100% to understand that better.

Just when you think it’s safe outside. Here are some statistics relating to traffic collisions in New York City.

Background

NYC publishes vehicle collision data which anyone can access using their API. You can also download this information in standard CSV (Comma Separated Values) file format.
The file is fairly large, about 400 MB, with almost 2 million lines.

-rw-rw-r-- 1 austin austin 402M Mar  4 20:38 all_motor_vehicle_collision_data.csv
…
bash > wc -l all_motor_vehicle_collision_data.csv 
1972886 all_motor_vehicle_collision_data.csv

Display the first five records of the dataset using head

bash > head -n5 all_motor_vehicle_collision_data.csv 
CRASH DATE,CRASH TIME,BOROUGH,ZIP CODE,LATITUDE,LONGITUDE,LOCATION,ON STREET NAME,CROSS STREET NAME,OFF STREET NAME,NUMBER OF PERSONS INJURED,NUMBER OF PERSONS KILLED,NUMBER OF PEDESTRIANS INJURED,NUMBER OF PEDESTRIANS KILLED,NUMBER OF CYCLIST INJURED,NUMBER OF CYCLIST KILLED,NUMBER OF MOTORIST INJURED,NUMBER OF MOTORIST KILLED,CONTRIBUTING FACTOR VEHICLE 1,CONTRIBUTING FACTOR VEHICLE 2,CONTRIBUTING FACTOR VEHICLE 3,CONTRIBUTING FACTOR VEHICLE 4,CONTRIBUTING FACTOR VEHICLE 5,COLLISION_ID,VEHICLE TYPE CODE 1,VEHICLE TYPE CODE 2,VEHICLE TYPE CODE 3,VEHICLE TYPE CODE 4,VEHICLE TYPE CODE 5
09/11/2021,2:39,,,,,,WHITESTONE EXPRESSWAY,20 AVENUE,,2,0,0,0,0,0,2,0,Aggressive Driving/Road Rage,Unspecified,,,,4455765,Sedan,Sedan,,,
03/26/2022,11:45,,,,,,QUEENSBORO BRIDGE UPPER,,,1,0,0,0,0,0,1,0,Pavement Slippery,,,,,4513547,Sedan,,,,
06/29/2022,6:55,,,,,,THROGS NECK BRIDGE,,,0,0,0,0,0,0,0,0,Following Too Closely,Unspecified,,,,4541903,Sedan,Pick-up Truck,,,
09/11/2021,9:35,BROOKLYN,11208,40.667202,-73.8665,"(40.667202, -73.8665)",,,1211      LORING AVENUE,0,0,0,0,0,0,0,0,Unspecified,,,,,4456314,Sedan,,,,

Using head to display the first record only

bash > head -n1 all_motor_vehicle_collision_data.csv 
CRASH DATE,CRASH TIME,BOROUGH,ZIP CODE,LATITUDE,LONGITUDE,LOCATION,ON STREET NAME,CROSS STREET NAME,OFF STREET NAME,NUMBER OF PERSONS INJURED,NUMBER OF PERSONS KILLED,NUMBER OF PEDESTRIANS INJURED,NUMBER OF PEDESTRIANS KILLED,NUMBER OF CYCLIST INJURED,NUMBER OF CYCLIST KILLED,NUMBER OF MOTORIST INJURED,NUMBER OF MOTORIST KILLED,CONTRIBUTING FACTOR VEHICLE 1,CONTRIBUTING FACTOR VEHICLE 2,CONTRIBUTING FACTOR VEHICLE 3,CONTRIBUTING FACTOR VEHICLE 4,CONTRIBUTING FACTOR VEHICLE 5,COLLISION_ID,VEHICLE TYPE CODE 1,VEHICLE TYPE CODE 2,VEHICLE TYPE CODE 3,VEHICLE TYPE CODE 4,VEHICLE TYPE CODE 5

Use Perl to list the column names in numerical order

bash > perl -F, -an -E '$. == 1 && say $i++ . "\t$_" for @F'  all_motor_vehicle_collision_data.csv
0   CRASH DATE
1   CRASH TIME
2   BOROUGH
3   ZIP CODE
4   LATITUDE
5   LONGITUDE
6   LOCATION
7   ON STREET NAME
8   CROSS STREET NAME
9   OFF STREET NAME
10  NUMBER OF PERSONS INJURED
11  NUMBER OF PERSONS KILLED
12  NUMBER OF PEDESTRIANS INJURED
13  NUMBER OF PEDESTRIANS KILLED
14  NUMBER OF CYCLIST INJURED
15  NUMBER OF CYCLIST KILLED
16  NUMBER OF MOTORIST INJURED
17  NUMBER OF MOTORIST KILLED
18  CONTRIBUTING FACTOR VEHICLE 1
19  CONTRIBUTING FACTOR VEHICLE 2
20  CONTRIBUTING FACTOR VEHICLE 3
21  CONTRIBUTING FACTOR VEHICLE 4
22  CONTRIBUTING FACTOR VEHICLE 5
23  COLLISION_ID
24  VEHICLE TYPE CODE 1
25  VEHICLE TYPE CODE 2
26  VEHICLE TYPE CODE 3
27  VEHICLE TYPE CODE 4
28  VEHICLE TYPE CODE 5
Explanation
  • 'perl -an -E'
    • Split up the column values into array '@F'
  • '-F,'
    • Specifies a comma field separator.
  • '$. == 1'
    • The Perl special variable '$.' contains the current line number.
    • Display the first line only.
  • 'say $i++ . "\t$_" for @F'
    • Prints a tab separated counter variable '$i', and the corresponding column name, stored in the Perl default variable '$_'.

Create a work-file containing the zip-code, injury count, and fatality count. Only get records that include a zip-code and at least one injury or fatality.

3   ZIP CODE
10  NUMBER OF PERSONS INJURED
11  NUMBER OF PERSONS KILLED

Method

  • Unfortunately the previous method for splitting a comma delimited file has limitations. It cannot handle fields with embedded commas. The Street Name fields often have embedded commas which will throw off our column numbering.
  • To get around this we can use Text::CSV, which has both functional and OO interfaces. For one-liners, it exports a handy csv function. From the Text::CSV documentation, 'my $aoa = csv (in => "test.csv") or die Text::CSV_XS->error_diag;' will convert the CSV file into an array of arrays.
  • I'll modify this example slightly to 'csv( in => $ARGV[0], headers => qq/skip/ )'. The @ARGV array contains any input arguments. The first element $ARGV[0] will contain the input CSV file. We don’t need the header row, so it’ll be skipped.
perl -MText::CSV=csv  -E '$aofa = csv( in => $ARGV[0], headers => qq/skip/ ); ( $_->[3] =~ /^\S+$/ ) && say qq/$_->[3],$_->[10],$_->[11]/ for @{$aofa}'  all_motor_vehicle_collision_data.csv | sort -t, -k 1 -r > sorted_injured_killed_by_zip.csv
Explanation
  • Input file 'all_motor_vehicle_collision_data.csv'
  • 'perl -MText::CSV=csv'
    • Run the perl command with '-M' switch to load a Perl module.
  • 'Text::CSV=csv'
    • Export the 'csv' function from the 'Text::CSV' module.
  • '( $_->[3] =~ /^\S+$/ )'
    • Use a Regular expression to only process rows that have non-blank data in the ZIP CODE field.
  • 'say qq/$_->[3],$_->[10],$_->[11]/ for @{$aofa}'
    • Loop through the Array of Arrays '$aofa'
    • Print the contents of columns 3,10,11 followed by a line break.
  • The output is piped '|' into the Linux sort command.
    • Sorting on the first field, ZIP CODE and redirecting, '>' into a new file, 'sorted_injured_killed_by_zip.csv'.
    • See the ss64.com site for more details on the Linux sort command.
  • The new file has about 1.36 Million lines.

Get a line count with wc. Display the first 10 records using head

bash > wc -l sorted_injured_killed_by_zip.csv 
1359291 sorted_injured_killed_by_zip.csv
bash > head -n10 sorted_injured_killed_by_zip.csv | column -t -s, --table-columns=ZipCode,#Injured,#Killed
ZipCode  #Injured  #Killed
11697    4         0
11697    3         0
11697    2         0
11697    2         0
11697    2         0
11697    1         0
11697    1         0
11697    1         0
11697    1         0
11697    1         0
Explanation
  • 'wc -l'
    • Counts the number of lines in our new file
  • 'head -n 10'
    • Prints out the first 10 lines of the file
  • 'column -t -s, --table-columns=ZipCode,#Injured,#Killed'
    • column
    • '-t' switch will tell 'column' to print in table format.
    • '-s' switch specifies an input delimiter of ','.
    • The output is tabbed.

List the 10 worst zip codes for injuries

We can use the output file, sorted_injured_killed_by_zip.csv, from the previous example

perl -n -E '@a=split(q/,/,$_);$h{$a[0]} += $a[1]; END{say qq/$_,$h{$_}/ for keys %h}' sorted_injured_killed_by_zip.csv  | sort -nr -t, -k 2  | head -n10 | column -t -s, --table-columns=ZipCode,#Injured
ZipCode  #Injured
11207    10089
11236    7472
11203    7426
11212    6676
11226    6103
11208    6027
11234    5505
11434    5403
11233    5159
11385    4440
Explanation
  • '@a=split(q/,/,$_);'
    • As there are no embedded commas in this file, we use the Perl 'split' function to break up the 3 CSV fields in each row into array '@a'.
  • '$h{$a[0]} += $a[1];'
    • The first element of each row, ZIP CODE, is used as a key for the hash '%h'.
    • The value is the accumulated number of injuries for that ZIP CODE.
  • '$h{$a[0]} += $a[1]'
    • We accumulate the second element, $a[1], which contains 'NUMBER OF PERSONS INJURED'.
    • We can set a value for a Hash key without checking if it exists already.
    • This is called Autovivification which is explained nicely by The Perl Maven.
  • 'END{say qq/$_,$h{$_}/ for keys %h}'
    • The 'END{}' block runs after all the rows are processed.
    • The keys (Zip Codes) are read and printed along with their corresponding values.
    • We could have used Perl to sort the output by the keys, or values.
    • I used the Linux sort.
  • 'sort -nr -t, -k 2'
    • Will perform a numeric sort, descending on the # of people injured.
  • 'head -n10'
    • Will get the first 10 records printed.
  • 'column -t -s, --table-columns=ZipCode,#Injured'
    • The 'column' command will produce a prettier output.
    • '-t' for table format.
    • '-s' to specify that the fields are comma separated.
    • '--table-columns' to add column header names.
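The autovivification mentioned above can be seen in a tiny self-contained sketch:

```perl
use strict;
use warnings;

my %h;                  # empty hash: no keys exist yet
$h{'11207'} += 3;       # key springs into existence (undef acts as 0), then += 3
$h{'11207'} += 2;       # later rows accumulate onto the same key
print "$h{'11207'}\n";  # 5
```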
Observation

Zip code 11207, which encompasses East New York, Brooklyn, as well as a small portion of Southern Queens, has a lot of issues with traffic safety.

Display the 10 worst zip codes for traffic fatalities

perl -n -E '@a=split(q/,/,$_);$h{$a[0]} += $a[2]; END{say qq/$_,$h{$_}/ for keys %h}' sorted_injured_killed_by_zip.csv  | sort -nr -t, -k 2  | head -n10 | column -t -s, --table-columns=ZipCode,#Killed
ZipCode  #Killed
11236    44
11207    34
11234    29
11434    25
11354    25
11229    24
11208    24
11206    23
11233    22
11235    21
Explanation
  • With a few minor adjustments, we got the worst zip codes for traffic collision fatalities
  • '$h{$a[0]} += $a[2]'
    • Accumulate the third element, $a[2], which contains 'NUMBER OF PERSONS KILLED'.
Observation
  • Zip code 11236, which includes Canarsie Brooklyn is the worst for traffic fatalities according to this data.
    • Zip code 11207 is also very bad for traffic fatalities, as well as being the worst for collision injuries
  • These stats are not 100 percent correct, as out of 1,972,886 collision records, 1,359,291 contained Zip codes.
    • We have 613,595 records with no zip code, which were not included in the calculations.

Some NYC Borough Stats

Similar to how we created the 'sorted_injured_killed_by_zip.csv', we can run the following command sequence to create a new file 'sorted_injured_killed_by_borough.csv'

perl -MText::CSV=csv  -E '$aofa = csv( in => $ARGV[0], headers => qq/skip/ ) ; ( $_->[2] =~ /^\S+/ ) && say qq/$_->[2],$_->[10],$_->[11]/ for @{$aofa}'  all_motor_vehicle_collision_data.csv | sort  -t, -k 3rn -k 2rn -k 1  >|  sorted_injured_killed_by_borough.csv
Explanation
  • The Borough field is the third column, '2 BOROUGH', starting from 0, in the 'all_motor_vehicle_collision_data.csv' file.
  • '( $_->[2] =~ /^\S+/ )'
    • Only get rows which have non blank data in the BOROUGH field.
  • 'sort -t, -k 3rn -k 2rn -k 1'
    • I added some more precise sorting, which is unnecessary except to satisfy my curiosity.
    • sort
    • '-k 3rn'
    • Sort by column 3 (starting at 1), which is the fatality count field.
    • This is sorted numerically in descending order.
    • '-k 2rn'
      • When equal, the injury count is also sorted numerically, descending.
    • '-k 1'
    • The Borough is sorted in ascending order as a tiebreaker.

Display the first 10 rows of this file.

bash > head -n10 sorted_injured_killed_by_borough.csv | column -t -s, --table-columns=Borough,#Injured,#Killed
Borough        #Injured  #Killed
MANHATTAN      12        8
QUEENS         3         5
QUEENS         15        4
QUEENS         1         4
STATEN ISLAND  6         3
BROOKLYN       4         3
BROOKLYN       3         3
QUEENS         3         3
BROOKLYN       1         3
QUEENS         1         3

Sanity check if we got all five boroughs

cut -d, -f 1  sorted_injured_killed_by_borough.csv | sort -u 
BRONX
BROOKLYN
MANHATTAN
QUEENS
STATEN ISLAND
Explanation
  • 'cut -d, -f 1'
    • cut to split the comma delimited file records.
    • '-d,'
    • Specifies that the cut will be comma delimited.
    • '-f 1'
    • Get the first field from the cut, which is the Borough Name.
  • 'sort -u'
    • Sorts and prints only the unique values to STDOUT
  • We got all 5 New York City boroughs in this file.

Display collision injuries for each borough

bash > perl -n -E '@a=split(q/,/,$_);$h{$a[0]} += $a[1]; END{say qq/$_,$h{$_}/ for keys %h}' sorted_injured_killed_by_borough.csv   | sort -nr -t, -k 2 | column -t -s,
BROOKLYN       137042
QUEENS         105045
BRONX          62880
MANHATTAN      61400
STATEN ISLAND  15659
Observation
  • Brooklyn emerges as the Borough with the most traffic injuries.

Display collision fatalities by Borough

bash > perl -n -E '@a=split(q/,/,$_);$h{$a[0]} += $a[2]; END{say qq/$_,$h{$_}/ for keys %h}' sorted_injured_killed_by_borough.csv   | sort -nr -t, -k 2 | column  -J -s, --table-columns Borough,#Killed
{
   "table": [
      {
         "borough": "BROOKLYN",
         "#killed": "564"
      },{
         "borough": "QUEENS",
         "#killed": "482"
      },{
         "borough": "MANHATTAN",
         "#killed": "300"
      },{
         "borough": "BRONX",
         "#killed": "241"
      },{
         "borough": "STATEN ISLAND",
         "#killed": "88"
      }
   ]
}
Explanation
  • Similar to the Injury count by Borough, this counts all fatalities by borough and prints the output in JSON format.
  • 'column -J -s, --table-columns Borough,#Killed': use the 'column' command with the '-J' switch, for JSON output, instead of '-t' for a table.

I forgot to mention what date range is involved with this dataset. We can check this with the cut command.

cut -d, -f1 all_motor_vehicle_collision_data.csv | cut -d/ -f3 | sort -u
2012
2013
2014
2015
2016
2017
2018
2019
2020
2021
2022
2023
Explanation
  • Get the date field, '0 CRASH DATE', which is in 'mm/dd/yyyy' format.
  • 'cut -d, -f1 all_motor_vehicle_collision_data.csv'
    • Get the first column/field of data for every row of this CSV file.
    • '-d,' specifies that we are cutting on the comma delimiters.
    • '-f1' specifies that we want the first column/field only.
    • This is the date in 'mm/dd/yyyy' format.
  • 'cut -d/ -f3'
    • Will cut the date using '/' as the delimiter.
    • Grab the third field from this, which is the four-digit year.
  • 'sort -u'
    • The years are then sorted with duplicates removed.
Observation
  • The dataset started sometime in 2012 and continues until now, March 2023.

Display the 20 worst days for collisions in NYC

bash > cut -d, -f1 all_motor_vehicle_collision_data.csv | awk -F '/' '{print $3 "-" $1 "-" $2}' | sort | uniq -c | sort -k 1nr | head -n20 | column -t --table-columns=#Collisions,Date
#Collisions  Date
1161        2014-01-21
1065        2018-11-15
999         2017-12-15
974         2017-05-19
961         2015-01-18
960         2014-02-03
939         2015-03-06
911         2017-05-18
896         2017-01-07
884         2018-03-02
883         2017-12-14
872         2016-09-30
867         2013-11-26
867         2018-11-09
857         2017-04-28
851         2013-03-08
851         2016-10-21
845         2017-06-22
845         2018-06-29
841         2018-12-14
Explanation
  • Get a count for all collisions for each date on record
  • Display the first 20 with the highest collision count
  • cut
    • Get the first column from the dataset.
    • Pipe this date into the awk command.
    • AWK is a very useful one-liner tool as well as being a full scripting language.
  • 'awk -F '/' '{print $3 "-" $1 "-" $2}'
    • '-F '/' '
    • Split the date into separate fields using '/' as a delimiter.
    • $1 contains the month value, $2 contains the day of month and $3 contains the four digit year value.
    • These will be printed in the format 'yyyy-mm-dd'.
  • Dates are then sorted and piped into the uniq command.
  • 'uniq -c'
    • Will create a unique output.
    • '-c' switch gets a count of all the occurrences for each value.
  • The output is piped into another sort command, which sorts by the number of occurrences descending.
Observation
  • I’m not sure if there is any explanation for why some days have a lot more collisions than others. Weatherwise, January 21 2014 was a cold day, but otherwise uneventful. November 15 2018 had some snow, but not a horrific snowfall. The clocks went back on November 4, so that wouldn’t be a factor.
  • 2014-01-21 weather
  • 2018-11-15 weather

Display the twenty worst times during the day for collisions

bash > cut -d, -f2 all_motor_vehicle_collision_data.csv | sort | uniq -c | sort -k 1nr | head -n20 |column -t  --table-columns=#Collisions,Time
#Collisions  Time
27506       16:00
26940       17:00
26879       15:00
24928       18:00
24667       14:00
22914       13:00
20687       9:00
20641       12:00
20636       19:00
19865       16:30
19264       8:00
19107       10:00
19106       14:30
19010       0:00
18691       11:00
18688       17:30
16646       18:30
16602       20:00
16144       8:30
16008       13:30
Explanation
  • We use the time field, '1 CRASH TIME', which is in 24 hour format, 'HH:MM'
Observation
  • Using the 'actual time' may be a bit too specific. I’ll use the 'hour of day' instead to give a 60 minute time frame.
 bash > cut -d, -f2 all_motor_vehicle_collision_data.csv | cut -d : -f1 | sort | uniq -c | sort -k 1nr | head -n10 | column -t --table-columns=#Collisions,Hour
#Collisions  Hour
143012      16
139818      17
132443      14
123761      15
122971      18
114555      13
108925      12
108593      8
105206      9
102541      11
Explanation
  • Similar to the previous example, except this time the cut command is used to split the time HH:MM, delimited by ':'
  • 'cut -d : -f 1'
    • '-d'
    • The 'cut' delimiter is ':'
    • '-f 1'
    • Grab the first field, 'HH' of the 'HH:MM'.
  • Use something like the printf command to append ':00' to those hours.
Observation

As you would expect, most collisions happen during rush hour.

Display the worst years for collisions

bash > cut -d, -f1 all_motor_vehicle_collision_data.csv | cut -d '/' -f3 | sort | uniq -c | sort -k 1nr | head -n10 | column -t --table-columns=#Collisions,Year
#Collisions  Year
231564       2018
231007       2017
229831       2016
217694       2015
211486       2019
206033       2014
203734       2013
112915       2020
110546       2021
103745       2022
Explanation
  • We use the first column, '0 CRASH DATE' again
  • 'cut -d '/' -f3'
    • Extracts the 'yyyy' from the 'mm/dd/yyyy'
Observation
  • Some improvement seen in 2020, 2021 and 2022, if you can believe the data.
  • One unscientific observation here is that the NYPD may have been much less vigilant in the past few years than they were prior to Commissioner Raymond Kelly and his successors.
  • Also, by only printing out the worst 10 years, partial years 2012 and 2023 were excluded.

For a comparison, see how many people were injured or killed in traffic collisions.

First create a work file, 'sorted_injured_killed_by_year.csv', with three columns, Year, Injured count and Fatality count

We need the Text::CSV Perl module here due to those embedded commas in earlier fields. Below are the three fields needed.

0   CRASH DATE
10  NUMBER OF PERSONS INJURED
11  NUMBER OF PERSONS KILLED
bash > perl -MText::CSV=csv -E '$aofa = csv( in => $ARGV[0], headers => qq/skip/ ); ($_->[10] || $_->[11]) && say substr($_->[0],6,4) . qq/,$_->[10],$_->[11]/ for @{$aofa}' all_motor_vehicle_collision_data.csv | sort > sorted_injured_killed_by_year.csv
Check out the work file we just created
bash > wc -l sorted_injured_killed_by_year.csv 
433081 sorted_injured_killed_by_year.csv

bash > head -n10 sorted_injured_killed_by_year.csv | column -t -s, --table-columns=Year,#Injured,#Killed
Year  #Injured  #Killed
2012  0         1
2012  0         1
2012  0         1
2012  0         1
2012  0         1
2012  0         1
2012  0         1
2012  0         1
2012  0         1
2012  0         1

Worst years for collision injuries

bash > perl -n -E '@a=split(q/,/,$_);$h{$a[0]} += $a[1]; END{say qq/$_,  $h{$_}/ for sort {$h{$b} <=> $h{$a} } keys %h}' sorted_injured_killed_by_year.csv | head -n10 |  column  -t -s, --table-columns=Year,#Injured 
Year  #Injured
2018    61941
2019    61389
2017    60656
2016    60317
2013    55124
2022    51883
2021    51780
2015    51358
2014    51223
2020    44615
Explanation
  • This is similar to how we got the Zip Code and Borough data previously.
  • This time the Perl sort is used instead of the Linux sort.
  • 'END{say qq/$_, $h{$_}/ for sort {$h{$b} <=> $h{$a} } keys %h}'
    • The 'for' statement loops through the '%h' hash keys (years).
    • The corresponding hash values (injured counts) are sorted in descending order.
  • 'sort {$h{$b} <=> $h{$a} }'.
    • $a and $b are default Perl sort variables.
    • Rearranging it to 'sort {$h{$a} <=> $h{$b} }' would sort the injury count in ascending order.
Observation

While the collision count may have gone down, there isn't any real corresponding downward trend in injuries.

The worst years for collision fatalities.

bash > perl -n -E '@a=split(q/,/,$_);$h{$a[0]} += $a[2]; END{say qq/$_,  $h{$_}/ for sort {$h{$b} <=> $h{$a} } keys %h}' sorted_injured_killed_by_year.csv | head -n10 |  column  -t -s, --table-columns=Year,#Killed
Year  #Killed
2013    297
2021    294
2022    285
2020    268
2014    262
2017    256
2016    246
2019    244
2015    243
2018    231
Explanation
  • Slightly modified version of the injury by year count.
Observation

Same as with the injuries count. There isn't any real corresponding downward trend in traffic collision fatalities.

Conclusion

There’s lots more work that can be done to extract meaningful information from this dataset.

What’s clear to me, is that all the political rhetoric and money poured into Vision Zero has yielded little in terms of results.
Most of the solutions are obvious from a logical point of view, but not a political point of view. I walk and cycle these streets and know how dangerous it is to cross at the "designated" crosswalks when cars and trucks are turning in on top of you. Cycling in NYC is even worse.

Some suggested solutions

  • Delayed green lights so cars don't turn in on pedestrians at crosswalks.
  • Much higher taxes and registration fees for giant SUVs and pickup trucks. They don't belong in the city.
  • Better bike lanes, instead of meaningless lines painted on the road.
    • Many bike lanes are used as convenient parking for NYPD and delivery vehicles.
  • Basic enforcement of traffic laws, which isn't being done now.
    • Drivers ignore red lights, speed limits, noise restrictions etc. when they know they aren't being enforced.
    • Driving while texting or yapping on the phone is the norm, not the exception.
  • Drastically improve public transit, especially in areas not served by the subway system.

Some Perl CLI Resources

Perldocs - perlrun

Peteris Krumins has some great e-books

Dave Cross - From one of his older posts on perl.com

Some NYC Street Resources

StreetsBlog NYC

Hellgate NYC - Local NYC News

Liam Quigley - Local Reporter

More Liam Quigley - Twitter

These Stupid Trucks are Literally Killing Us - YouTube

Me

Linkedin
My Blog

Report on Volunteerism in the Perl Community

Perl Foundation News

Published by Amber Deuel on Tuesday 21 March 2023 13:14


This report is a collaboration between Dr. Ann Barcomb's research lab and TPRF.

The research was primarily conducted by Aadharsh Hariharan. The goal of this research project is to analyse the existing volunteering scenario within these communities and create techniques that will help in better management of volunteers. The Perl community is ideal for this research project as there is no single large corporate entity behind the communities.

Please click here to view the report in its entirety.

We will be discussing the report at the next Community and Reps meeting, which will be held on April 21st at 17:30 UTC.

To join the meeting virtually, please follow this Zoom link.

Meeting ID: 890 4922 4605

Passcode: 8675309

We hope to see you at the meeting and look forward to your input.

TL;DR

I submitted a Pull Request to Fantasy Name Generator.

About two and a half years ago I ran a little series of posts, starting with Fantasy Name Generator - a grammar. The gist of it is that the Fantasy Name Generator repository contains a Perl implementation, but it can quickly become slow for moderately complex expressions, as also noted in the project itself.

I finally got to generate a Pull Request about it, proposing a different parser implementation.

I suspect that there hasn't been much interest in that repository for a long time, because my previous issue didn't receive any answer, and I see stalled pull requests as well. Anyway, maybe people interested in the project will figure out that there's been some improvement despite the reduced attention that the project has received in recent years.

Stay safe!

Perl Weekly Challenge 209: Special Bit Characters and Merge Account

blogs.perl.org

Published by laurent_r on Tuesday 21 March 2023 03:40

These are some answers to Week 209 of the Perl Weekly Challenge organized by Mohammad S. Anwar.

Spoiler Alert: The deadline for this weekly challenge is in a few days (March 26, 2023 at 23:59). This blog post offers some solutions to this challenge. Please don’t read on if you intend to complete the challenge on your own.

Task 1: Special Bit Characters

You are given an array of binary bits that ends with 0.

Valid sequences in the bit string are:

[0] -decodes-to-> "a"
[1, 0] -> "b"
[1, 1] -> "c"

Write a script to print 1 if the last character is an "a", otherwise print 0.

Example 1:

Input: @bits = (1, 0, 0)
Output: 1

The given array bits can be decoded as 2-bits character (10) followed by 1-bit character (0).

Example 2:

Input: @bits = (1, 1, 1, 0)
Output: 0

Possible decode can be 2-bits character (11) followed by 2-bits character (10) i.e. the last character is not 1-bit character.

This is an example of a variable-length code. In order to decode such a bit string, we need to start from the beginning of the string. If the string starts with 0, then we have an "a" and can continue decoding with the next bit. If the string starts with 1, then we need to check the next digit to figure out whether the first letter is a "b" or a "c", and we can continue decoding with the third bit. And so on. So, for example, if we have the following string:

0110100100111011100

We will effectively split it as follows:

0 11 0 10 0 10 0 11 10 11 10 0,

yielding the following decoded string: "acababacbcba."

It is easy to see that such an encoding is totally unambiguous: at any point in the process, there can be only one decoded letter.

But if we pick one bit somewhere in the middle of the bit string, we can't know how to proceed. If it is a 0, it can be an "a", or the second bit of a "b". Similarly, if it is a 1, it can be the first bit of a "b", or the first or second bit of a "c". So we can be sure to always unambiguously decode the string only if we start from the beginning. There are cases, however, where it is possible to decode part of the string starting from somewhere in the middle. For example, if we find two consecutive 0s, we know that the second one can only be an "a", and we can proceed from there. We don't need to start from the beginning to find that the last letter in the above bit string is an "a". But, since we cannot be sure to meet such a situation, it is best to start from the beginning as explained above. For much longer strings, looking for the last occurrence of a "00" pattern and proceeding from its second 0 (an "a") may be an interesting performance improvement, since we're really interested only in whether the final 0 is an "a" or the second bit of a "b". With the small examples at hand, this optimization would be useless.

Special Bit Characters in Raku

The program goes through the bit string and consumes either one or two bits, depending on whether the current digit is a 0 or a 1. If the last bit (necessarily a 0) is the start of a group, then it is an "a". Otherwise, it is the second bit of a "10" group (i.e. of a "b"). Note that a loop statement alone, without the three statements in parentheses, is just an infinite loop, from which we exit with either of the return statements.

sub ends-with-a (@in) {
    my $i = 0;
    my $end = @in.end;
    loop {
        return 1 if $i == $end;
        $i += @in[$i] == 0 ?? 1 !! 2;
        return 0 if $i > $end;
    }
}

for <1 0 0>, <1 1 1 0>, <0 0 0 1 0>, <1 1 0> -> @test {
    say (~ @test).fmt("%-12s => "), ends-with-a @test;
}

This program displays the following output:

$ raku ./main.raku
1 0 0        => 1
1 1 1 0      => 0
0 0 0 1 0    => 0
1 1 0        => 1

Special Bit Characters in Perl

This is a port to Perl of the above Raku program. Please refer to the previous sections if you need additional explanations.

use strict;
use warnings;
use feature "say";

sub ends_with_a {
    my $i = 0;
    my $end = $#_;
    while (1) {
        return 1 if $i == $end;
        $i += $_[$i] == 0 ? 1 : 2;
        return 0 if $i > $end;
    }
}

for my $test ([<1 0 0>], [<1 1 1 0>], 
    [<0 0 0 1 0>], [<1 1 0>]) {
    printf "%-12s => %d\n", "@$test", ends_with_a @$test;
}

This program displays the following output:

$ perl ./special-bit-characters.pl
1 0 0        => 1
1 1 1 0      => 0
0 0 0 1 0    => 0
1 1 0        => 1

Task 2: Merge Account

You are given an array of accounts, i.e. name with list of email addresses.

Write a script to merge the accounts where possible.

Example 1:

Input: 
    @accounts = [ ["A", "a1@a.com", "a2@a.com"],
                  ["B", "b1@b.com"],
                  ["A", "a3@a.com", "a1@a.com"] ]

Output: [ ["A", "a1@a.com", "a2@a.com", "a3@a.com"],
          ["B", "b1@b.com"] ]

Example 2:

Input: 
    @accounts = [ ["A", "a1@a.com", "a2@a.com"],
                  ["B", "b1@b.com"],
                  ["A", "a3@a.com"],
                  ["B", "b2@b.com", "b1@b.com"] ]

Output: [ ["A", "a1@a.com", "a2@a.com"],
          ["A", "a3@a.com"],
          ["B", "b1@b.com", "b2@b.com"] ]

Merge Account in Raku

We use a hash of hashes to merge items belonging to the same name and remove possible duplicates.

sub merge-account (@in) {
    my %merged;
    for @in -> @part {
        my ($key, @values) = @part;
        %merged{$key}{$_} = True for @values;
    }
    return %merged;
}

my @tests = ( <A a1@a.com a2@a.com>, 
              <B b1@b.com>,
              <A a3@a.com a1@a.com> ),
            ( <A a1@a.com a2@a.com>,
              <B b1@b.com>,
              <A a3@a.com>,
              <B b2@b.com b1@b.com> );
for @tests -> @test {
    say @test, " => ";
    my %merged = merge-account @test;
    for %merged.keys.sort -> $k {
        say "\t[", (join " ", $k, |%merged{$k}.keys.sort), "]";
    };
    say "";
}

This program displays the following output:

$ raku ./merge-account.raku
((A a1@a.com a2@a.com) (B b1@b.com) (A a3@a.com a1@a.com)) => 
    [A a1@a.com a2@a.com a3@a.com]
    [B b1@b.com]

((A a1@a.com a2@a.com) (B b1@b.com) (A a3@a.com) (B b2@b.com b1@b.com)) => 
    [A a1@a.com a2@a.com a3@a.com]
    [B b1@b.com b2@b.com]

Merge Account in Perl

This is a port to Perl of the previous Raku program. We use a hash of hashes to merge items belonging to the same name and remove possible duplicates.

use strict;
use warnings;
use feature "say";

sub merge_account {
    my %merged;
    for my $part (@_) {
        # say   Dumper $part;
        my ($key, @values) = @$part;
        $merged{$key}{$_} = 1 for @values;
    }
    # say %merged;
    return \%merged;
}

my @tests = ( [ [<A a1\@a.com a2\@a.com>], 
                [<B b1\@b.com>],
                [<A a3\@a.com a1\@a.com>] ],

              [ [<A a1\@a.com a2\@a.com>],
                [<B b1\@b.com>],
                [<A a3\@a.com>],
                [<B b2\@b.com b1\@b.com>] ] );

for my $test (@tests) {
    # say Dumper $test, " => ";
    for my $part (@$test) {
        print join " ", @$part;
        print " - ";
    }
    say " =>";
    my %merged = %{merge_account @$test};
    for my $k (sort keys %merged) {
        say "\t[", (join " ", $k, sort keys %{$merged{$k}}), "]";
    };
    say "";
}

This program displays the following output:

$ perl  ./merge-account.pl
A a1@a.com a2@a.com - B b1@b.com - A a3@a.com a1@a.com -  =>
        [A a1@a.com a2@a.com a3@a.com]
        [B b1@b.com]

A a1@a.com a2@a.com - B b1@b.com - A a3@a.com - B b2@b.com b1@b.com -  =>
        [A a1@a.com a2@a.com a3@a.com]
        [B b1@b.com b2@b.com]

Wrapping up

Next week's Perl Weekly Challenge will start soon. If you want to participate in this challenge, please check https://perlweeklychallenge.org/ and make sure you answer the challenge before 23:59 BST (British summer time) on April 2, 2023. And, please, also spread the word about the Perl Weekly Challenge if you can.

Community Input Requested - Future of TPRF website(s)

Perl Foundation News

Published by Amber Deuel on Monday 20 March 2023 19:06


TPRF is looking for community input on the future of the TPRF website(s).

We are asking for community members to submit proposals for the Perl/Raku website(s). The proposals would ideally address the following:

* Should there be separate websites for Perl and Raku, with a single set of main content that will be on both pages, or should TPRF continue to operate with one website, but feature separate sites for language specific donations?

* An approach to the professional graphic design of the website (please provide links to any examples/existing websites that show what a website along these lines would look like)

* Cost to deploy

* Ongoing costs for technical aspects and maintenance

Suggested solutions should also provide:

1. Template-based design, for custom branding

2. Easy ways to add, update, move and delete content

3. Customizable menus

4. Mobile friendliness

5. Multi-domain support

6. Access to logs etc. for analysis and SEO purposes

You can submit your ideas using our form template at https://forms.gle/s9uKWorzQ3GSHe8JA OR you can submit a free form proposal to board@perlfoundation.org

#608 - Love You Perl!!!

Perl Weekly

Published on Monday 20 March 2023 10:00

Hi there,

My 6-year-old twins reminded me on Sunday morning about Mother's Day. I didn't even know such a day existed when I was in India, but this country has exposed me to all kinds of celebrations. I miss my mom a lot. Let me share one of my childhood memories with you. During my school days, every time somebody complained to my mom about one of my fights, she wouldn't listen to my side of the story. In fact, she would start beating me up in front of the complainant. All my so-called friends knew this and would blackmail me all the time. My mom would never defend me, and would believe whatever others had to say about me. I hated it from the bottom of my heart. This affected my personality and made me an introvert. I have been fighting with my nature all my life, even today.

I see similar treatment of my beloved programming language, Perl. It pains me when I see some Perl gurus never defend the language, just like my mom. I expect them, as gurus of the language, to show its bright side to the rest of the world instead. Even a single negative statement about Perl takes us one step back and creates a bad image. I am fully aware of all the shortcomings, but I am not going to go out and tell everyone about them. I would rather find solutions, if I can. There is nothing wrong with trying other programming languages from a job perspective. In fact, I have recently played with Python and Raku, thanks to The Weekly Challenge, but Perl remains my first choice of programming language.

I know I don't blog regularly any more, but I have done it in the past every time I did something nice using Perl. You can find some of those posts here and on bpo. I know there are many Perl fans out there doing their bit to dust off the negative image. Honestly speaking, we need more people to come forward. For example, Gabor Szabo has been running a dedicated website, Perl Maven, where he showcases the cool features of Perl.

I have seen some of the brightest brains creating magic using Perl every week when it comes to solving the weekly challenge. The best part is that they even share their code with others on various platforms like Facebook.

Last but not least, Happy Mother's Day to all the moms out there, including mine, who is in India right now. I love you, mom. I would also like to apologise to all the readers of the newsletter for last week's incident, when I couldn't get my contributions to the newsletter out on time. I will try to be more organised in future, I promise.

Perl Weekly #608 - Love You Perl!!!

dev.to #perl

Published by Gabor Szabo on Monday 20 March 2023 06:49

Originally published at Perl Weekly 608


--
Your editor: Mohammad S. Anwar.

Announcements

Template Toolkit’s DEFAULT is not too useful

For all Template Toolkit users: do check out the DEFAULT behaviour.

Articles

1000th consecutive posting day

What an achievement!!! Congratulations on reaching the milestone. Keep up the great work.

The first line of Perl_CGI script, env perl vs perl only, how different?

Another post showing what not to do when using Perl.

tempfile and mktemp

Dealing with the 'tempfile is deprecated' warning. Nice work.

ChatGPT for Perl Learning

Ever played with ChatGPT? This should give you a fresh perspective.

CPAN

Quiq - Weekly Travelling in CPAN

A brief introduction to the CPAN module Quiq.

First release of Validate::CodiceFiscale

Find the reason behind the creation of Validate::CodiceFiscale.

The Weekly Challenge

The Weekly Challenge by Mohammad Anwar will help you step out of your comfort zone. You can even win a $50 Amazon voucher by participating in the weekly challenge. We pick one winner at the end of the month from among all of the contributors during the month. The monthly prize is kindly sponsored by Peter Sergeant of PerlCareers.

The Weekly Challenge - 209

Welcome to a new week with a couple of fun tasks: "Special Bit Characters" and "Merge Account". If you are new to the weekly challenge, why not join us and have fun every week? For more information, please read the FAQ.

RECAP - The Weekly Challenge - 208

Enjoy a quick recap of last week's contributions by Team PWC dealing with the "Minimum Index Sum" and "Duplicate and Missing" tasks in Perl and Raku. You will find plenty of solutions to keep you busy.

MIS-ing

A detailed discussion of the solutions in Raku can be very handy with the official documentation. Great work, keep it up.

Perl Weekly Challenge #208

Happy to see the speed-up in getting the tasks done. Well done. Please do check out the details.

Smells like teen SQL

Just loved the SQL twist. You don't want to miss the fun. Thanks for sharing.

PWC208 - Minimum Index Sum

Valid questions raised but we still have working solutions in Perl and Raku. Thank you.

PWC208 - Duplicate and Missing

A rather lengthy blog post for the task. Thanks for sharing the knowledge.

Perl Weekly Challenge: Week 208

Nice showcase of Raku features with line-by-line explanation. Cool contributions.

The Weekly Challenge 208

Compact and powerful solutions in Perl. Thanks for your contributions.

Perl Weekly Challenge 208: Minimum Index Sum and Duplicate and Missing

Side by side Perl and Raku solutions make it great to learn the trick. Great job.

grep, grep and grep!

grep in action with full force using Raku. Easy to follow the solution.

Perl Weekly Challenge 208

Yet another one-liner in Perl for you. Do check it out.

Juggling with indexes

Simple and easy to follow solutions in Perl. Thanks for sharing.

Matching strings and missing numbers

Give the online solutions a try. This is exceptional. Keep up the great work.

The Weekly Challenge #208

Nice attempt to solve the weekly challenge. Keep up the great work.

Missing Index? Just Duplicate!

Ruby is the language of choice for this week's blog. Thanks for your contributions.

Weekly Challenge 208

You get the nitty gritty exposed in the blog post with examples. Well done.

Rakudo

2023.11 Ainions

Weekly collections

NICEPERL's lists

Great CPAN modules released last week;
MetaCPAN weekly report;
StackOverflow Perl report.

The corner of Gabor

A couple of entries sneaked in by Gabor.

The problem with open source: not enough contributors

I have a feeling that there are not enough people contributing to open source, but GitHub reports to have 94M users. Am I mistaken or is that number misleading?

GitHub Actions and Workflows

The name 'GitHub Actions' refers to two different things. One of them is also called 'GitHub Actions Workflows'.

One-liner: Remove first two characters of every line in thousands of files

A very simple problem for someone who knows Perl. So it was posted on a forum where few people know Perl.

Perl Jobs by Perl Careers

Adventure! Senior Perl roles in Malaysia, Dubai and Malta

Clever folks know that if you’re lucky, you can earn a living and have an adventure at the same time. Enter our international client: online trading is their game and they’re looking for experienced Perl people who have passion, drive, and an appreciation for new experiences.

Senior Perl Developer with Cross-Trained Chops. UK Remote Perl Role

The best senior developers know that variety is the spice of life. Sure, you’ve got Perl chops for days, but that’s not all you can do — and that’s why our client wants to meet you. They’re looking for senior Perl developers, Node engineers, and those with mighty Python and SQL skills to lead their team.

C, C++, and Perl Software Engineers, Let’s Keep the Internet Safe. UK Remote Perl Role

A leading digital safeguarding solutions provider is looking for a software engineer experienced in C, C++, or Perl. You’ll have strong Linux knowledge and a methodical approach to problem solving that you use to investigate, replicate, and address customer issues. Your keen understanding of firewalls, proxies, Iptables, Squid, VPNs/IPSec and HTTP(S) will be key to your success at this company.

You joined the Perl Weekly to get weekly e-mails about the Perl programming language and related topics.

Want to see more? See the archives of all the issues.

Not yet subscribed to the newsletter? Join us free of charge!

(C) Copyright Gabor Szabo
The articles are copyright the respective authors.

RECAP - The Weekly Challenge - 208

The Weekly Challenge

Published on Monday 20 March 2023 00:00

TABLE OF CONTENTS 01. HEADLINES 02. STAR CONTRIBUTORS 03. CONTRIBUTION STATS 04. GUESTS 05. LANGUAGES 06. CENTURION CLUB 07. DAMIAN CONWAY’s CORNER 08. ANDREW SHITOV’s CORNER 09. PERL SOLUTIONS 10. RAKU SOLUTIONS 11. PERL & RAKU SOLUTIONS HEADLINES Thank you Team PWC for your continuous support and encouragement. STAR CONTRIBUTORS Following members shared solutions to both tasks in Perl and Raku as well as blogged about it.

The Weekly Challenge - 209

The Weekly Challenge

Published on Monday 20 March 2023 00:00

TABLE OF CONTENTS 01. HEADLINES 02. SPONSOR 03. RECAP 04. PERL REVIEW 05. RAKU REVIEW 06. CHART 07. NEW MEMBERS 08. GUESTS 09. TASK #1: Special Bit Characters 10. TASK #2: Merge Account HEADLINES Welcome to the Week #209 of The Weekly Challenge. We have new guest Simon Dueck joined Team PWC and shared solutions to Week #207 and Week #208 in F#. E. Alvarez blog post sharing Javascript magic once again.

Perl Weekly Challenge #208

blogs.perl.org

Published by oldtechaa on Sunday 19 March 2023 17:25

First of all, a greeting. I posted an introduction with a notification of intent to take over a module on CPAN, but the maintainer responded to me. I'm Avery, and I'm developing SeekMIDI, a small graphical MIDI sequencer. I started it in 2016, then took a long break from programming entirely, and I've just restarted developing my programming skills. For starters, I'm working on Perl Weekly Challenges and bug fixes to modules.

Without further ado, here are my solutions to the PWC #208. All solutions are about to be posted, but this could be a spoiler if you're trying to solve it too. I was very pleased this week that I got it down to about 15-25 minutes for each task, so I'm definitely getting more comfortable in Perl again.

First, task 1:

use strict;
use v5.10;

my @list1 = ("Perl", "Raku", "Love");
my @list2 = ("Raku", "Perl", "Hate");

# my @list1 = ("A", "B", "C");
# my @list2 = ("D", "E", "F");

# my @list1 = ("A", "B", "C");
# my @list2 = ("C", "A", "B");

my $minindex;
my @results;
for (my $index1 = 0; $index1 < scalar @list1; $index1++) {
    for (my $index2 = 0; $index2 < scalar @list2; $index2++) {
        if ($list1[$index1] eq $list2[$index2] && defined($minindex)) {
            if ($index1 + $index2 < $minindex) {
                @results = ($list1[$index1]);
                $minindex = $index1 + $index2;
            } elsif ($index1 + $index2 == $minindex) {
                push (@results, $list1[$index1]);
            }
        } elsif ($list1[$index1] eq $list2[$index2] && !defined($minindex)) {
            @results = ($list1[$index1]);
            $minindex = $index1 + $index2;
        }
    }
}
if (scalar @results == 0) {exit}
foreach (@results) {
    say $_;
}

Pretty simple. For every item in list 1, I iterate through list 2 and look for a match with a lower sum than the previous lowest sum. If I find one, it replaces the previous results; if it's equal, I add it to the results. Then I say the results.

Now task 2:

use strict;
use v5.10;

my $index;
my ($duplicate, $missing);
foreach (@ARGV) {
    if (!defined($index)) {$index = 1 and next}
    if ($_ == $ARGV[$index - 1]) {$duplicate = $_}
    if ($_ != $ARGV[$index - 1] + 1) {$missing = $ARGV[$index - 1] + 1}
    $index++;
}
defined($duplicate) && defined($missing) ? say ("Duplicate is $duplicate", "\n", "Missing is $missing") : say -1;

For this one, I iterate through the arguments; if an item is the same as the previous one, it's the duplicate, and if it's not the previous item plus 1, then the previous item plus 1 is the missing number.

Anyways, hope you liked them this week. I always look at the other solutions blogged about and get ideas for use in my code. If you have any better ideas comment below.

Perl Weekly Challenge #208

dev.to #perl

Published by oldtechaa on Sunday 19 March 2023 12:28

I've been doing the Perl Weekly Challenge, so here are my solutions to the PWC #208. All solutions are about to be posted, but this could be a spoiler if you're trying to solve it too. I was very pleased this week that I got it down to about 15-25 minutes for each task, so I'm definitely getting more comfortable in Perl again.

First, task 1:

use strict;
use v5.10;

my @list1 = ("Perl", "Raku", "Love");
my @list2 = ("Raku", "Perl", "Hate");

# my @list1 = ("A", "B", "C");
# my @list2 = ("D", "E", "F");

# my @list1 = ("A", "B", "C");
# my @list2 = ("C", "A", "B");

my $minindex;
my @results;
for (my $index1 = 0; $index1 < scalar @list1; $index1++) {
    for (my $index2 = 0; $index2 < scalar @list2; $index2++) {
        if ($list1[$index1] eq $list2[$index2] && defined($minindex)) {
            if ($index1 + $index2 < $minindex) {
                @results = ($list1[$index1]);
                $minindex = $index1 + $index2;
            } elsif ($index1 + $index2 == $minindex) {
                push (@results, $list1[$index1]);
            }
        } elsif ($list1[$index1] eq $list2[$index2] && !defined($minindex)) {
            @results = ($list1[$index1]);
            $minindex = $index1 + $index2;
        }
    }
}
if (scalar @results == 0) {exit}
foreach (@results) {
    say $_;
}

Pretty simple. For every item in list 1, I iterate through list 2 and look for a match with a lower sum than the previous lowest sum. If I find one, it replaces the previous results; if it's equal, I add it to the results. Then I say the results.

Now task 2:

use strict;
use warnings;
use v5.10;

my ($duplicate, $missing);
# Compare each element of the (sorted) input with its predecessor.
for my $i (1 .. $#ARGV) {
    if ($ARGV[$i] == $ARGV[$i - 1]) {
        $duplicate = $ARGV[$i];
    }
    elsif ($ARGV[$i] != $ARGV[$i - 1] + 1) {
        # A gap: the missing number follows the previous element.
        # (elsif, so a duplicate doesn't register a spurious gap)
        $missing = $ARGV[$i - 1] + 1;
    }
}
if (defined $duplicate && defined $missing) {
    say "Duplicate is $duplicate";
    say "Missing is $missing";
}
else {
    say -1;
}

For this one, I iterate through the arguments: if an item is the same as the previous one, it's the duplicate, and if it's not the previous item plus 1, then the previous item plus 1 is the missing number.

Anyway, I hope you liked them this week. I always look at the other solutions that get blogged about and pick up ideas for my own code. If you have any better ideas, comment below.

(cdxxxvi) 10 great CPAN modules released last week

Niceperl

Published by Unknown on Sunday 19 March 2023 08:58

Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater than or equal to 12.

  1. App::DBBrowser - Browse SQLite/MySQL/PostgreSQL databases and their tables interactively.
    • Version: 2.320 on 2023-03-14, with 12 votes
    • Previous version: 2.316 was 23 days before
    • Author: KUERBIS
  2. Crypt::Passphrase - A module for managing passwords in a cryptographically agile manner
    • Version: 0.010 on 2023-03-14, with 12 votes
    • Previous version: 0.009 was 5 days before
    • Author: LEONT
  3. ExtUtils::MakeMaker - Create a module Makefile
    • Version: 7.68 on 2023-03-14, with 52 votes
    • Previous version: 7.66 was 2 months, 20 days before
    • Author: BINGOS
  4. FFI::Platypus - Write Perl bindings to non-Perl libraries with FFI. No XS required.
    • Version: 2.07 on 2023-03-14, with 58 votes
    • Previous version: 2.05 was 3 months, 28 days before
    • Author: PLICEASE
  5. Future - represent an operation awaiting completion
    • Version: 0.50 on 2023-03-17, with 56 votes
    • Previous version: 0.49 was 5 months before
    • Author: PEVANS
  6. Future::AsyncAwait - deferred subroutine syntax for futures
    • Version: 0.65 on 2023-03-17, with 45 votes
    • Previous version: 0.64 was 1 month, 3 days before
    • Author: PEVANS
  7. Net::DNS - Perl Interface to the Domain Name System
    • Version: 1.37 on 2023-03-13, with 25 votes
    • Previous version: 1.36 was 2 months, 14 days before
    • Author: NLNETLABS
  8. Net::Server - Extensible Perl internet server
    • Version: 2.014 on 2023-03-14, with 31 votes
    • Previous version: 2.013 was 3 months, 11 days before
    • Author: RHANDOM
  9. Term::Choose - Choose items from a list interactively.
    • Version: 1.758 on 2023-03-17, with 13 votes
    • Previous version: 1.757 was 8 months before
    • Author: KUERBIS
  10. Test::Simple - Basic utilities for writing tests.
    • Version: 1.302194 on 2023-03-14, with 182 votes
    • Previous version: 1.302193 was 8 days before
    • Author: EXODIST

(dxlix) metacpan weekly report - Playwright

Niceperl

Published by Unknown on Sunday 19 March 2023 08:08

This is the weekly favourites list of CPAN distributions. Votes count: 40

Week's winner: Playwright (+2)

Build date: 2023/03/19 07:07:48 GMT


Clicked for first time:


Increasing its reputation:

Perl Interview question and answers 2023

Perl on Medium

Published by Suraj Taradale on Sunday 19 March 2023 05:10

Collection of resources for anyone preparing for a Perl programming interview. These frequently asked questions and their detailed answers…

Perl Coding Interview Question and Answers

Perl on Medium

Published by Suraj Taradale on Sunday 19 March 2023 05:07

As a Perl developer, you need to have a solid understanding of the language’s syntax, semantics, and best practices. This blog provides a…

The Weekly Challenge - Guest Contributions

The Weekly Challenge

Published on Sunday 19 March 2023 00:00

As you know, The Weekly Challenge primarily focuses on Perl and Raku. During Week #018, we received solutions to The Weekly Challenge - 018 by Orestis Zekai in Python. It was a pleasant surprise to receive solutions in something other than Perl and Raku. Ever since, regular team members have also started contributing in other languages like Ada, APL, Awk, BASIC, Bash, Bc, Befunge-93, Bourne Shell, BQN, Brainfuck, C3, C, CESIL, Chef, COBOL, Coconut, C Shell, C++, Clojure, Crystal, D, Dart, Dc, Elixir, Elm, Emacs Lisp, Erlang, Excel VBA, F#, Fennel, Fish, Forth, Fortran, Gembase, GNAT, Go, Haskell, Haxe, HTML, Idris, IO, J, Janet, Java, JavaScript, Julia, Korn Shell, Kotlin, Lisp, Logo, Lua, M4, Maxima, Miranda, Modula 3, MMIX, Mumps, Myrddin, Nelua, Nim, Nix, Node.

Deadline for TPRC

Perl Foundation News

Published by Amber Deuel on Friday 17 March 2023 13:58


The Perl & Raku Conference is a high-quality, inexpensive technical Conference that celebrates the Perl and Raku programming languages. The beauty of the conference is that it remains accessible to everyone regardless of experience, yet it is still valuable to the most skilled programmers.

The Perl & Raku Conference 2023 will be held in Toronto, Ontario, Canada from July 11-13, 2023.

The deadline to submit your talks is coming up quickly! Please visit https://www.papercall.io/tprctor23 to submit your talk before March 31st.

One-liner: Remove first two characters of every line

Perl Maven

Published by Gabor Szabo on Tuesday 14 March 2023 21:30

I'm working on a project creating a Ladino dictionary, in which I have a few thousand YAML files. They used to include lists of values, but a while ago I split them up into individual entries. I did this because the people who are editing them are not used to YAML files, and it makes it a lot easier to explain to them what to do.

However, the previous change left me with 1-item lists in each file. I wanted to clean that up.

#607 - The Perl Planetarium

Perl Weekly

Published on Monday 13 March 2023 10:00

Hi there!

In the previous newsletter Mohammad included a blog post by Dave Cross about the Perl Planetarium, but I think it is worth highlighting again. It is an excellent way to follow what's going on in and around Perl on a daily basis. It now also includes commits to the source code of Perl and questions on Stack Overflow. I've also opened two issues asking Dave to allow the user to filter the messages based on source and to color-code the sources. Something one of you could implement in the front-end.

A couple of days ago I saw a post on Facebook by Curtis Ovid Poe, which Will Braswell then reposted. The rant was on why Perl is dying. Basically, Ovid says that Perl lags behind the more popular languages in adopting new features, including features that were available in some other languages 10, 20, even 40 years ago. Predictably, some people jumped to blaming the Perl 6 effort. Others pointed out the lack of volunteers, the lack of outreach, and the missing modules from the standard libraries.

Some very interesting comments, but you might need a Facebook account to be able to read them. Sorry for that.

While preparing a blog post I searched GitHub for repositories written in Perl, which gave me the idea to check out the other languages as well; the result is this table of the most popular programming languages on GitHub. I posted the link to the Perl programmers group on Facebook. They did not like the numbers.

Here is an idea to make GitHub better for Perl projects: the dependabot helps developers keep their dependencies up to date. As far as I can tell it does not support Perl. However, its source code is on GitHub, so interested parties could probably add this support. Let me know when it becomes available so I can try it.

Enjoy your week!

RECAP - The Weekly Challenge - 207

The Weekly Challenge

Published on Monday 13 March 2023 00:00

TABLE OF CONTENTS 01. HEADLINES 02. STAR CONTRIBUTORS 03. CONTRIBUTION STATS 04. GUESTS 05. LANGUAGES 06. CENTURION CLUB 07. DAMIAN CONWAY’s CORNER 08. ANDREW SHITOV’s CORNER 09. PERL SOLUTIONS 10. RAKU SOLUTIONS 11. PERL & RAKU SOLUTIONS HEADLINES Thank you Team PWC for your continuous support and encouragement. STAR CONTRIBUTORS Following members shared solutions to both tasks in Perl and Raku as well as blogged about it.

The Weekly Challenge - 208

The Weekly Challenge

Published on Monday 13 March 2023 00:00

TABLE OF CONTENTS 01. HEADLINES 02. SPONSOR 03. RECAP 04. PERL REVIEW 05. RAKU REVIEW 06. CHART 07. NEW MEMBERS 08. GUESTS 09. TASK #1: Minimum Index Sum 10. TASK #2: Duplicate and Missing HEADLINES Welcome to the Week #208 of The Weekly Challenge. Please do accept my apology for the late release of the weekly challenge. Thank you for your kind messages. It means a lot. Did you notice we now have over 10,000 solutions in Perl?

(cdxxxv) 13 great CPAN modules released last week

Niceperl

Published by Unknown on Saturday 11 March 2023 23:20

Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater than or equal to 12.

  1. Alien::Build - Build external dependencies for use in CPAN
    • Version: 2.78 on 2023-03-07, with 22 votes
    • Previous version: 2.77 was 1 month, 18 days before
    • Author: PLICEASE
  2. App::Netdisco - An open source web-based network management tool.
    • Version: 2.060010 on 2023-03-10, with 14 votes
    • Previous version: 2.060007 was 7 days before
    • Author: OLIVER
  3. Class::Method::Modifiers - Provides Moose-like method modifiers
    • Version: 2.15 on 2023-03-05, with 29 votes
    • Previous version: 2.15 was before
    • Author: ETHER
  4. Crypt::Passphrase - A module for managing passwords in a cryptographically agile manner
    • Version: 0.009 on 2023-03-09, with 12 votes
    • Previous version: 0.007 was 25 days before
    • Author: LEONT
  5. DBD::Pg - DBI PostgreSQL interface
    • Version: 3.16.1 on 2023-03-06, with 92 votes
    • Previous version: 3.16.0 was 6 months, 29 days before
    • Author: TURNSTEP
  6. JSON::Validator - Validate data against a JSON schema
    • Version: 5.14 on 2023-03-06, with 31 votes
    • Previous version: 5.13 was 2 months, 28 days before
    • Author: JHTHORSEN
  7. Number::Phone - base class for Number::Phone::* modules
    • Version: 3.9001 on 2023-03-07, with 16 votes
    • Previous version: 3.9000 was 2 days before
    • Author: DCANTRELL
  8. Perl::Tidy - indent and reformat perl scripts
    • Version: 20230309 on 2023-03-08, with 132 votes
    • Previous version: 20221112 was 3 months, 27 days before
    • Author: SHANCOCK
  9. Rex - the friendly automation framework
    • Version: 1.14.1 on 2023-03-05, with 82 votes
    • Previous version: 1.14.0 was 1 month, 6 days before
    • Author: FERKI
  10. SPVM - SPVM Language
    • Version: 0.9695 on 2023-03-06, with 28 votes
    • Previous version: 0.9694 was 3 days before
    • Author: KIMOTO
  11. Test::Simple - Basic utilities for writing tests.
    • Version: 1.302193 on 2023-03-06, with 182 votes
    • Previous version: 1.302192 was 1 month, 4 days before
    • Author: EXODIST
  12. Test2::Harness - A new and improved test harness with better Test2 integration.
    • Version: 1.000151 on 2023-03-08, with 13 votes
    • Previous version: 1.000150 was 7 days before
    • Author: EXODIST
  13. Test2::Suite - Distribution with a rich set of tools built upon the Test2 framework.
    • Version: 0.000148 on 2023-03-05, with 39 votes
    • Previous version: 0.000147 was 7 days before
    • Author: EXODIST

(dxlviii) metacpan weekly report

Niceperl

Published by Unknown on Saturday 11 March 2023 23:15

This is the weekly favourites list of CPAN distributions. Votes count: 45

This week there isn't any remarkable distribution

Build date: 2023/03/11 22:14:10 GMT


Clicked for first time:


Increasing its reputation:

Getting the Login Username on WordPress version 4.9.4

Perl on Medium

Published by SamUkhid on Thursday 09 March 2023 20:06

Hi, here I will explain how to get the username for a WordPress login, which fits perfectly when combined with the technique…

#606 - First Love Perl?

Perl Weekly

Published on Monday 06 March 2023 10:00

Hi there,

Do you remember the first programming language you picked up?

I am sure that in the life span of a programmer, one picks up more than one programming language but still has a special place for one. I remember my first encounter, in the year 1993, when I was introduced to COBOL as part of the course I joined immediately after getting through 12th (or +2, as some remember it). I only had access to a PC at the institute, and that too just twice a week for one hour each. Imagine relying on books and nothing else. I was attracted towards many other programming languages. During that period, I picked up Fortran as well, and I loved it more than COBOL. When I got bored of Fortran, I went to Pascal. That too didn't last long before I moved to C. I stayed with C longer than any of the programming languages I had picked before. I remember when I moved to Mumbai for a better opportunity in the year 1997, I hardly got any offers, so I took a part-time job building a desktop application for a small factory using FoxPRO. It was during this period that I got the offer to join a startup looking for freshers willing to learn Perl. It was my first proper job as a programmer. In fact, I was given two choices, Perl or Java. I chose Perl as it was easier to move to from C. And here I am today, in the year 2023, working as a Perl developer.

The reason I am sharing my story is that last week I came across some posts on social media talking (again) about Perl being dead or not moving fast enough to keep up with the younger generations. I don't want to repeat what was said for or against Perl, but it pains me to see that not many come forward to stop the lies. I remember when Perl v5.36 was released, it made a big impact and everyone started talking about it. I even did a daily series on the new features of the latest release. I have kept it recorded in a GitHub repository for anyone to have a look. In a few months' time, there is going to be another big release that every Perl fan is looking forward to. So to all the ignorant and the haters: Perl is not going anywhere and will remain as strong as ever.

Just for fun, I asked ChatGPT for its views on Perl. I must say it didn't disappoint me. It tried to be polite and diplomatic in its response. I liked it. Please don't get me wrong, I don't live in a fantasy world. I know for a fact that we don't have many new projects being developed in Perl. So what? That doesn't make it any lesser than others. You can try others while still in love with Perl. I am not suggesting you do, but if you do then there is nothing wrong in it. I know many experts in Perl who are equally good in many other languages. You can find them doing weekly challenges in so many different languages.

The moral of the story: Perl will remain my first love no matter what. How about you, Perl fans?

Last but not least, the Enlightened Perl Organisation (EPO) is closing down; Mark Keating made the sad news public. I would like to take this opportunity to thank Mark Keating and his entire team for all the hard work and support, especially for the London Perl Workshop.

Please enjoy the rest of the newsletter.

Improvements to Planet Perl and Perlanet

Perl Hacks

Published by Dave Cross on Friday 03 March 2023 10:16

This is a story of one of those nice incidents where something starts off simple, then spirals out of control for a while but, in the end, everyone wins.

On Reddit, a few days ago, someone asked ‘Is there a “Planet Perl” with an RSS feed?’ and a few people replied, pointing out the existence of Planet Perl (which is the first Google result for “Planet Perl”). I’m obviously not marketing that site very well as every time I mention it, I get people (pleasantly) surprised that it exists.

On this occasion, it was Elvin Aslanov who seemed to discover my site for the first time. And, very soon afterwards, he started sending pull requests to add feeds to the site. As a result, we now have three more feeds that are being pulled into the site.

  • Perl on Medium. I’m slightly embarrassed that I hadn’t thought of this myself. I did, after all, once try to start a Perl publication on Medium. I think I must have decided that there are better sites for technical blogging and blanked it from consideration. Medium’s not the busiest of places for Perl bloggers, but there are a few posts there and they’re mostly from people who are outside of the echo chamber – so getting more eyes on their posts is a good idea.
  • Perl questions on Stack Overflow. Another one that would have been obvious if I had thought for a second. I’ve been answering questions on SO for years. It’s a good way to get more perspective on how Perl is being used across the industry. Unfortunately, the feed only includes the titles of the posts – you’ll need to click the link to actually see the question.
  • Perl commits on GitHub. I’m interested in hearing how useful people think this is. I worry slightly that there will be times when the number of commits will overwhelm the other feeds. But maybe that’s a good idea. Perhaps it’s good for more people to see just how busy the Perl 5 Porters are. I’m a bit annoyed that the feed puts everything in a fixed-width font, but not (yet) annoyed enough to do anything about it.

You might know that Planet Perl is driven by Perlanet. So adding new feeds is just a case of adding a few lines to a configuration file. And the pull requests I got from Elvin showed a potential problem in the way the configuration was laid out. Each feed has three lines of YAML configuration: a title for the feed, a URL for a web page that displays the content of the feed, and the URL for the feed itself. They’re called “title”, “web” and “url”. And it’s that last name that’s slightly problematic – it’s just not clear enough. Elvin got “web” and “url” muddled up in one of his PRs and, when I pointed that out to him, he suggested that renaming “url” to “feed” would make things much clearer.

I agreed, and the next day I hacked away for a while before releasing version 3.0.0 of Perlanet. In this version, the “url” key is renamed to “feed”. It still accepts the old name (so older config files will still work), but you’ll get a warning if you use the old name in your config.
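For context, a feed entry in a Perlanet config file is a few lines of YAML along these lines (a sketch using the new key name; the surrounding structure and the values shown are illustrative, not taken from Dave's actual config):

```yaml
feeds:
  - title: Perl Hacks                  # display name of the feed
    web:   https://perlhacks.com/      # page shown to readers
    feed:  https://perlhacks.com/feed/ # the feed itself ("url" before 3.0.0)
```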

I didn’t stop there. Last year, I wrote a blog post about producing a docker image that already had Perlanet installed – so that it was quicker to rebuild my various planets every few hours. Since then I’ve been rebuilding that image every time I updated Perlanet. But it’s been rather a manual process. And because I’m old and decrepit, I can never remember the steps I go through to rebuild it, tag it correctly and push it to the Docker Hub. This means it always takes far longer than it’s supposed to. So this time, I wrote a script to do that for me. And because I now have the kind of mindset that sees GitHub Workflows everywhere I look, I wrote a Workflow definition that builds and publishes the image any time the Dockerfile changes. I guess the next step will be to write an action that automatically updates the Dockerfile (thereby triggering the rebuild) each time I release a new version of Perlanet.
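A workflow of that shape can be sketched in a few lines of YAML (everything here, names included, is illustrative rather than Dave's actual definition):

```yaml
name: Build Perlanet image
on:
  push:
    paths:
      - Dockerfile          # rebuild only when the Dockerfile changes
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t perlanet .
      # a real workflow would log in and push to Docker Hub here
```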

But that’s a problem for another day. For now, I’m happy with the improvements I’ve made to Planet Perl, Perlanet and the Perlanet Docker infrastructure.

The post Improvements to Planet Perl and Perlanet appeared first on Perl Hacks.

What is Perl?

Perl on Medium

Published by Nediir? on Thursday 02 March 2023 16:10

Perl is a general-purpose, high-level, interactive, and interpreted programming language created by Larry Wall in 1987, originally…

TPRC 2023 March Newsletter

Perl Foundation News

Published by Todd Rinaldo on Wednesday 01 March 2023 20:06


The Perl and Raku Conference 2023

The Perl and Raku Foundation is delighted to announce that the annual North American Perl and Raku Conference will be held in Toronto, Canada on July 11 to 13, 2023. The venue will be the DoubleTree (a Hilton Hotels property), located at 108 Chestnut St in downtown Toronto. That's directly north of Toronto's city hall, close to public transit and one of the country's biggest malls, the Toronto Eaton Centre.

Toronto is Canada's largest city, and the city and surrounding area offer many attractions for a vacation before or after your conference attendance. For our American visitors: you're in luck; three of your dollars are about the same as four of ours.

The Call For Papers is now open! You can submit your talk ideas at https://tprc.to/papers. The talk submission deadline is March 31, midnight UTC. Talks must be given live and in person. If you are looking for talk ideas, try the conference wiki.

Early bird pricing is available. Purchase your ticket now through April 15th for the low price of $275 USD. You can purchase your tickets now at https://tprc2023.sched.com/tickets.

Visit our website at https://tprc.to/. Information on special hotel rates can be found at https://tprc.to/location/. We've managed to arrange for a really great room rate for the conference days, and there are a few rooms for before and after the conference, in case you'd like to stay longer for a little vacation.

If you are flying, you can book your tickets to YYZ (Pearson, the main airport; many public and private transit options) or YTZ (Billy Bishop, on Toronto Island; 5 minute cab ride); you can also fly into BUF (Buffalo, NY) and drive (2 hours by car) across the border. Just remember to bring your passport.

Unique to this year / Be Advised

  • The CFP closes in only 30 days!
  • The hotel has fewer rooms than prior years, and our price is half of what you can find anywhere close-by. Book your room early!
  • The hotel’s rooms are almost all King rooms. Contact the hotel directly, and pay an extra $30CAD if you want a room with two Queen beds!

How to reach our organizers

  • Visit our website: https://tprc.to
  • Follow us on Twitter: @PerlConferences
  • Like us on Facebook: The Perl Foundation (@tpf.perl)
  • Subscribe to the mailing list: https://tprc.to/subscribe
  • Post a message to us at https://tprc.to/tprc-2023-tor/contact-us/.

An organizer will get back to you shortly!

Organizer: Alex Beamish and the Toronto Perl Mongers.

List of new CPAN distributions – Feb 2023

Perlancar

Published by perlancar on Wednesday 01 March 2023 03:02

dist author abstract date
ANSI-Palette LNATION The great new ANSI::Palette! 2023-02-19T08:11:12
Acme-Matrix LNATION Heavenly digital rain 2023-02-16T16:53:43
Acme-MetaSyntactic-newsradio PERLANCAR Characters from the sitcom Newsradio (1995-1999) 2023-02-18T10:26:03
Alien-CPython3 ZMUGHAL Find or build Python 2023-02-14T09:03:20
Alien-Deno ZMUGHAL Find or build Deno 2023-02-14T09:43:22
Alien-Go ZMUGHAL Find or build Go 2023-02-14T09:22:54
Alien-LLVM ZMUGHAL Find or build LLVM 2023-02-14T09:20:09
Alien-Nodejs ZMUGHAL Find or build Node.js 2023-02-14T09:41:20
Alien-OpenJDK ZMUGHAL Find or build OpenJDK 2023-02-14T09:06:06
Alien-Rust ZMUGHAL Find or build Rust 2023-02-14T09:39:53
Alien-Zig ZMUGHAL Find or build Zig 2023-02-14T09:21:35
Alien-innoextract ZMUGHAL Find or build innoextract 2023-02-16T05:03:53
App-APA RWP Access APA News via RSS 2023-02-16T11:39:42
App-AdditivesUtils PERLANCAR Utilities related to food additives 2023-02-05T09:40:33
App-BPOMUtils-Additives PERLANCAR Utilities related to food additives in BPOM 2023-02-05T00:05:16
App-BPOMUtils-RPO-Checker PERLANCAR Various checker utilities to help with Processed Food Registration (RPO – Registrasi Pangan Olahan) at BPOM 2023-02-11T04:07:22
App-BPOMUtils-Table PERLANCAR Some utilities providing table data related to BPOM 2023-02-05T00:37:21
App-BPOMUtils-Table-FoodAdditive PERLANCAR List additives in BPOM processed food division 2023-02-13T13:26:53
App-BPOMUtils-Table-FoodCategory PERLANCAR List food categories in BPOM processed food division 2023-02-13T13:27:04
App-BPOMUtils-Table-FoodIngredient PERLANCAR List ingredients in BPOM processed food division 2023-02-13T13:27:15
App-BPOMUtils-Table-FoodType PERLANCAR List food types in BPOM processed food division 2023-02-13T13:27:37
App-BPOMUtils-Table-MicrobeInput PERLANCAR List of microbe specification in BPOM processed food division 2023-02-13T13:27:48
App-BPOMUtils-Table-RegCodePrefix PERLANCAR List known alphabetical prefixes in BPOM registered product codes 2023-02-13T13:27:59
App-ChemistryUtils PERLANCAR Utilities related to chemistry 2023-02-19T00:05:51
App-FoodAdditivesUtils PERLANCAR Utilities related to food additives 2023-02-05T09:42:10
App-Greple-tee UTASHIRO module to replace matched text by the external command result 2023-02-19T11:20:41
App-HTMLUtils-Grep PERLANCAR Command-line utilities related to grepping HTML files 2023-02-08T14:24:26
App-KBLIUtils PERLANCAR Utilities related to chemistry 2023-02-14T13:21:56
App-Toot BLAINEM post a status to Mastodon 2023-02-05T15:50:14
App-financeta-new VIKAS App::financeta is a Technical Analysis GUI 2023-02-28T03:05:45
App-lastaccess TULAMILI Exhaustively list the files under the current directory, ordered by last access time. 2023-02-27T05:05:45
App-madeafter TULAMILI Show, in table form, how many seconds ago each file's three timestamps are relative to now. 2023-02-27T07:47:16
App-samesize TULAMILI List files in the current directory that have exactly the same byte size, with an option to use MD5 hash values. 2023-02-27T06:09:10
App-sponge PERLANCAR Soak up standard input and write to a file 2023-02-14T08:46:33
Bencher-Scenario-Regexp-Assemble PERLANCAR Benchmark Regexp::Assemble 2023-02-05T00:05:27
Bencher-Scenario-Regexp-IPv4 PERLANCAR Benchmark Regexp::IPv4 2023-02-12T00:05:47
Bencher-Scenario-Regexp-IPv6 PERLANCAR Benchmark Regexp::IPv6 2023-02-19T00:06:02
Bencher-Scenario-Regexp-Pattern-Git PERLANCAR Benchmark patterns in Regexp::Pattern::Git 2023-02-26T00:05:41
Blockchain-Contract-Solidity-ABI REFECO Contract ABI utilities 2023-02-02T17:17:21
Chrome-Policy RWP Chrome Policy class 2023-02-04T01:26:12
DBIx-Class-CryptColumn LEONT Automatically hash password/passphrase columns 2023-02-08T00:42:46
DBIx-WithClass GEEKRUTH DBIx::Class, with more class! 2023-02-12T22:01:18
Data-Resolver POLETTIX resolve keys to data 2023-02-05T09:58:58
Dist-Zilla-PluginBundle-RWP RWP Add [@RWP] plugin bundle into dist.ini 2023-02-04T19:09:32
Env-Assert MIKKOI Ensure that the environment variables match what you need, or abort. 2023-02-15T13:52:04
Env-Dot MIKKOI Read environment variables from .env file 2023-02-19T11:46:01
File-Copy-clonefile SKAJI call clonefile system call 2023-02-16T17:05:15
File-Find-IncludesTimeRange VVELOX Takes a array of time stamped items(largely meant for use with files) returns ones that include the specified time range. 2023-02-27T05:35:02
File-Meta-Cache DRCLAW Cache open file descriptors and stat meta data 2023-02-27T08:03:49
File-Text-CSV JV Easy access to CSV data files 2023-02-23T13:44:17
Future-AsyncAwait-Hooks PEVANS scoped hook blocks that run extra code around await expressions 2023-02-12T00:37:10
GetoptLongWrapper NGABRIEL A wrapper for the Getopt::Long module 2023-02-05T06:47:44
Graph-Geometric MERKYS create and work with geometric graphs 2023-02-08T13:27:29
Image-Magick JCRISTY objected-oriented Perl interface to ImageMagick. Use it to create, edit, compose, or convert bitmap images from within a Perl script. 2023-02-04T03:49:48
Insight-Scriptures LNATION The great new Insight::Scriptures! 2023-02-10T23:04:34
LWP-UserAgent-msgraph ESTRELOW 2023-02-16T21:21:09
Manipulator LNATION manipulate data structs via codeblocks 2023-02-11T17:15:13
Math-Formula MARKOV Expressions on steriods 2023-02-24T14:21:13
Mockery LNATION Mock objects for testing purposes 2023-02-11T01:17:45
Mojolicious-Plugin-Passphrase LEONT Securely hash and validate your passwords. 2023-02-12T22:20:19
Mojolicious-Plugin-PrometheusTiny JAVIER Export metrics using Prometheus::Tiny::Shared 2023-02-05T08:03:15
MooX-Keyword-Chain LNATION Subroutine chains 2023-02-06T01:48:54
MooX-Keyword-Random LNATION return a random result! 2023-02-07T17:26:06
Music-Chord-Progression-T GENE Generate transposed chord progressions 2023-02-02T04:08:04
Music-Chord-Progression-Transform GENE Generate transformed chord progressions 2023-02-02T12:36:42
Music-CreatingRhythms GENE Perl from the C code of the book 2023-02-26T16:55:23
Netkit RHYSADAMS create netkit labs with code. 2023-02-26T20:21:37
NetworkManager-Strict RWP NetworkManager strict settings class 2023-02-12T13:56:23
PDF-Collage POLETTIX PDF manipulation with scissors and glue 2023-02-12T18:53:40
RT-Extension-7zDownload LENDL RT-Extension-7zDownload Extension 2023-02-21T17:19:52
Redis-Cluster-Fast PLAIN A fast perl binding for Redis Cluster 2023-02-02T19:14:20
Regexp-N_Queens ABIGAIL Abstract 2023-02-17T18:46:26
Reply-Plugin-Prompt FREED reply plugin for powerlevel10k style prompt 2023-02-26T13:37:11
SMS-Send-IN-Textlocal INDRADG Regional context SMS::Send driver to send messages via Textlocal's India service ( https://api.textlocal.in/send/ ) 2023-02-14T16:07:26
SPVM-File-Copy KIMOTO Copying and Moving Files 2023-02-09T05:35:31
SPVM-File-Find KIMOTO Short Description 2023-02-23T02:12:56
SPVM-File-Glob KIMOTO The glob Function 2023-02-16T01:35:58
SPVM-File-Path KIMOTO Creating and Removing Multi-Level Path 2023-02-15T01:07:08
SPVM-File-Spec KIMOTO foo 2023-02-03T22:32:02
SPVM-File-Temp KIMOTO Short Description 2023-02-21T05:37:07
SPVM-FindBin KIMOTO Locate Directory of Original Program 2023-02-07T01:12:58
SPVM-Resource-RE2 KIMOTO Google/RE2 Resources 2023-02-28T00:56:16
Sah-Schemas-Array PERLANCAR Sah schemas related to array type 2023-02-12T00:05:58
Spreadsheet-Edit JIMAVERA Slice and dice spreadsheets, optionally using tied variables. 2023-02-08T02:01:32
Sub-Middler DRCLAW Middleware subroutine chaining 2023-02-27T08:32:56
Sys-GetRandom MAUKE Perl interface to getrandom(2) 2023-02-10T00:59:26
TableData-Chemistry-Element PERLANCAR Chemical elements 2023-02-05T09:40:44
TableDataBundle-Business-ID-BBIA PERLANCAR Collection of TableData:: modules related to BBIA (Balai Besar Industri Agro) 2023-02-20T12:01:10
TableDataBundle-Test PERLANCAR Collection of more TableData::Test::* modules for testing 2023-02-11T04:09:32
TableDataRole-Spec-TableDef PERLANCAR Role to require methods related to TableDef 2023-02-25T01:28:01
Task-BeLike-RWP RWP Perl modules bundle to install from CPAN 2023-02-12T18:05:20
Terse-Es LNATION Terse Elasticsearch. 2023-02-02T21:50:50
Terse-Plugin-CHI LNATION Terse response cache 2023-02-03T19:28:57
Terse-Plugin-Glitch LNATION The great new Terse::Plugin::Glitch! 2023-02-12T06:02:32
Terse-Plugin-UA LNATION Terse LWP::UserAgent plugin. 2023-02-01T17:48:15
Terse-Runtime LNATION Terse Runtime 2023-02-17T22:52:09
Test-Builder-SubtestSelection SVW Select subtests by name or by number 2023-02-23T17:20:32
Test-NoTty NWCLARK 2023-02-26T10:37:32
Test-cpan-versioning KARASIK Testing how CPAN uploader treats non-standard version 2023-02-10T08:51:01
Text-Wrap-NoStrip PERLANCAR Line wrapping without stripping the whitespace 2023-02-18T10:27:29
Thread-GoChannel LEONT Fast thread queues with go-like semantics 2023-02-24T16:34:05
Tie-Indirect JIMAVERA tie variables to access data located at run-time. 2023-02-07T23:45:20
Tk-PopList HANJE Popping a selection list relative to a widget 2023-02-28T17:37:58
Weather-Meteo NHORNE Interface to https://open-meteo.com for historical weather data 2023-02-28T16:13:50
Weather-WeatherKit DKECHAG Apple WeatherKit REST API client 2023-02-22T00:05:38
WebService-IdoitAPI MAMAWE a library to access the i-doit JSON RPC API 2023-02-17T14:37:36
Workflow-Inotify BIGFOOT Linux::Inotify2 wrapper 2023-02-09T19:50:39
Wrapper-GetoptLong NGABRIEL A wrapper for the Getopt::Long module 2023-02-08T04:50:45
XS-Parse-Keyword-FromPerl PEVANS drive XS::Parse::Keyword directly from Perl 2023-02-16T23:33:53

Stats

Number of new CPAN distributions this period: 109

Number of authors releasing new CPAN distributions this period: 42

Authors by number of new CPAN distributions this period:

No Author Distributions
1 PERLANCAR 26
2 LNATION 12
3 ZMUGHAL 9
4 KIMOTO 8
5 RWP 5
6 GENE 3
7 LEONT 3
8 TULAMILI 3
9 JIMAVERA 2
10 POLETTIX 2
11 NGABRIEL 2
12 DRCLAW 2
13 PEVANS 2
14 MIKKOI 2
15 NHORNE 1
16 MAMAWE 1
17 MAUKE 1
18 LENDL 1
19 MERKYS 1
20 REFECO 1
21 JCRISTY 1
22 JAVIER 1
23 DKECHAG 1
24 VIKAS 1
25 GEEKRUTH 1
26 BIGFOOT 1
27 VVELOX 1
28 SVW 1
29 FREED 1
30 PLAIN 1
31 NWCLARK 1
32 HANJE 1
33 ESTRELOW 1
34 ABIGAIL 1
35 UTASHIRO 1
36 JV 1
37 MARKOV 1
38 SKAJI 1
39 RHYSADAMS 1
40 INDRADG 1
41 BLAINEM 1
42 KARASIK 1

Perl Programming Language Developer Road Map

Perl on Medium

Published by Azmi ŞAHİN on Tuesday 28 February 2023 11:33

Perl Syntax: Master the Perl syntax. Get a basic understanding of Perl’s language features, data types, conditional expressions, loops…
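The features the article lists (data types, conditional expressions, loops) can be illustrated with a minimal sketch; the variable names here are just examples, not taken from the article:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Perl's three main data types: scalar, array, hash
my $name    = 'Perl';
my @numbers = (1, 2, 3, 4, 5);
my %age_of  = (alice => 30, bob => 25);

# Conditional expression: an array in boolean/numeric context
# evaluates to its element count
if (@numbers > 3) {
    print "more than three numbers\n";
}

# Loop over the array, summing its elements
my $sum = 0;
for my $n (@numbers) {
    $sum += $n;
}
print "sum = $sum\n";
```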

#605 - Trying to save a disappearing language

Perl Weekly

Published on Monday 27 February 2023 10:00

Hi there!

In 1492 the Jews were expelled from Spain, and 4 years later also from Portugal. Many of those Jews settled in the Ottoman Empire and kept speaking and evolving their language. This is what created the language we call 'Judeo-Espanyol', or by its modern name, 'Ladino'. For 500 years these people kept speaking and improving their language, which is now a mix of medieval Spanish, Portuguese, Catalan, Turkish, Greek, Italian, Hebrew, Arabic, and probably a few other languages. These Jews are usually referred to as the 'Sephardic' Jews because 'Sepharad' means Spain in Hebrew. The language was originally written using old Hebrew scripts called Rashi and Solitreo, but in the last 100 years or so writing has switched to Latin letters.

Unfortunately, due to various reasons (the modernization of Turkey by Atatürk, the Holocaust, emigration to the USA, Israel, and elsewhere), the majority of remaining Ladino speakers switched to the local language, and today there are only a few tens of thousands of native Ladino speakers, most of them over 60.

There are various grassroots projects to revitalize the language. One of them is called Kantoniko. It is a multilingual dictionary with sample sentences, audio, posters, etc. There are also experiments to use the material for educational purposes, for people who would like to learn Ladino. The project has some open-source code written in Python and JavaScript. There is also a lot of data under various Creative Commons licenses.

I know Python is not your cup of tea, but the whole project might be interesting to some of you, and you might want to take a look at it. I recorded a series of videos in English about the site and how to contribute to it.

Enjoy your week!