Perl Scripts

This script is used to protect pages that are being framed inside someone else's (usually a search engine's) web pages. It is used together with the Apache module mod_rewrite to check whether the HTTP_REFERER field contains a URL within another. [Source code]

Put this in a .htaccess file:

RewriteEngine on

# allow for Google cache etc.
RewriteCond %{HTTP_REFERER} ^http://www\.google\..*/.*$ [OR]
RewriteCond %{HTTP_REFERER} ^http://216\.239\.[0-9]{1,3}\.[0-9]{1,3}/(search|custom|translate).*$
RewriteRule ^.*$ - [L]

# Redirect to the framing script
RewriteCond %{HTTP_REFERER} ^http://.*\.com/.*yourdomain\.here(%2F|/).*$
RewriteRule ^.*\.html$ /cgi/ [R,L]

This script gets the number of HTTP requests from the Apache access_log file and uses RRDtool to save the data in a Round Robin Database and to display it graphically. The results are shown in two images: one for the last 24 hours and the other for the current month. The images are updated every 10 minutes. [Source code]

Create the RRD file with the following command:

rrdtool create httprequests.rrd --step 600 DS:http:COUNTER:1800:0:U RRA:AVERAGE:0.5:1:360 RRA:AVERAGE:0.5:24:420

And set a cron job to update it every 10 minutes:

*/10 * * * * /path/to/
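A minimal sketch of what such an update script could do, in Perl; the access_log path and the RRD file name here are assumptions, not the original script's:

```perl
#!/usr/bin/perl
# Rough sketch of the 10-minute update: count the requests logged so far
# and feed the running total to the COUNTER data source created above.
# The log path and RRD file name are assumptions.
use strict;
use warnings;

sub count_lines {
    my ($fh) = @_;
    my $n = 0;
    $n++ while <$fh>;
    return $n;
}

if (open my $log, '<', '/var/log/apache/access_log') {
    my $count = count_lines($log);
    # Feed the running total; RRDtool's COUNTER type turns successive
    # totals into a requests-per-second rate ("N" means "now").
    system('rrdtool', 'update', 'httprequests.rrd', "N:$count");
}
```

The COUNTER data source only needs the ever-growing total, so the script never has to remember the previous count itself.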

SWISH++ is a fast file indexing and searching engine. This script is a web interface for the search engine. It is based on the example provided with the Debian package (/usr/share/doc/swish++/examples/www_example/search.cgi.gz) by an anonymous author. You can test the script on my search page. [Source code]

This script reads the system uptime in seconds from /proc/uptime and converts it into textual form. The script is quite simple and probably not very accurate, since it uses a rough average to calculate the days in a month. The script's output is only an HTML fragment that can be inserted into a web page with an SSI tag:

<!--#include virtual="/path/to/" -->

This produces the following output:

Uptime Thursday 29 June, 2017 at [17:21:38]: 3 days, 14 hours, 48 minutes and 42 seconds.

[Source code]
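The core of the conversion can be sketched as follows; this stops at days (it skips the month estimate mentioned above), and the formatting is an assumption based on the example output:

```perl
#!/usr/bin/perl
# Sketch: read the uptime seconds from /proc/uptime and render them as
# text. Stops at days; the real script also estimates months.
use strict;
use warnings;

sub uptime_text {
    my ($secs) = @_;
    my $days  = int($secs / 86400);
    my $hours = int(($secs % 86400) / 3600);
    my $mins  = int(($secs % 3600) / 60);
    my $rest  = $secs % 60;
    return "$days days, $hours hours, $mins minutes and $rest seconds";
}

# /proc/uptime's first field is the uptime in seconds (with a fraction).
if (open my $fh, '<', '/proc/uptime') {
    my ($uptime) = split ' ', scalar <$fh>;
    print uptime_text(int($uptime)), "\n";
}
```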

This script displays on a web page the songs playing on your XMMS. It parses the information produced by the XMMS InfoPipe plugin (by Urpo Lankinen). The script is roughly based on the "fifolog" example (Example 16.10) from the O'Reilly Perl Cookbook. It displays the currently playing song, the previous 3 songs, and the streaming radio station (if you're listening to one). Here's an example output:

  • bailout_still_waiting [5:59]
Previous 3:
  • bailout_still_waiting [5:59]
  • Sounds From The Ground - Pearl (Space Station Soma: Tune in, turn on, space out. Ambient and mid-tempo electronica. [SomaFM]) [0:0]
  • Pete Lawrence - Musical Box (Ulrich Schnauss R (Space Station Soma: Tune in, turn on, space out. Ambient and mid-tempo electronica. [SomaFM]) [0:0]

You can include the output on a web page; mine's on my About me page. [Source code only] [Source code & text files (ZIPped)]. You can either set up a cron job to update the information periodically or have the script check whether the currently playing song has changed. (It's light on both CPU and memory consumption.)
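The parsing core could look something like this sketch, assuming InfoPipe's plain "Key: value" output format; the pipe path /tmp/xmms-info and the field names Title and Time are assumptions:

```perl
#!/usr/bin/perl
# Sketch: read "Key: value" lines from the InfoPipe named pipe into a
# hash. Pipe path and field names are assumptions.
use strict;
use warnings;

sub parse_infopipe {
    my ($fh) = @_;
    my %info;
    while (my $line = <$fh>) {
        chomp $line;
        $info{$1} = $2 if $line =~ /^([^:]+):\s*(.*)$/;
    }
    return \%info;
}

# The pipe only exists while XMMS and the plugin are running.
if (open my $pipe, '<', '/tmp/xmms-info') {
    my $info = parse_infopipe($pipe);
    print "$info->{Title} [$info->{Time}]\n" if defined $info->{Title};
}
```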

This script is used to display and update Top 3 high scores in different Shockwave and Java games played by me and my friends. The output can be seen on the Humppa.CS game results page (in Finnish). You also need to create text files for each of the games and make them readable and writable for the script. [Source code]

This script parses the NetHack record file and displays the high scores as an HTML page. The high scores can be listed according to player name, character name, profession, race, gender and alignment. The script uses tiles from the RLTiles project (formerly known as Mitsuhiro Itakura's NetHack tiles) to display the different professions, races and genders as small graphics tiles. The results can be seen in NetHack Top100 @untamo. [Source code]

You also need to generate a text file, nh_players.txt, that maps the user IDs to user names. The file is of the format "ID name\n", each player on its own line. You can get the user IDs from /etc/passwd (for example). Here's an example:

1000 Player1
1001 Player2
1002 Player3
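Rather than typing the file by hand, lines of that format could be generated from /etc/passwd with a sketch like this; the UID cutoff of 1000 (where regular accounts often start) is an assumption:

```perl
#!/usr/bin/perl
# Sketch: emit "ID name" lines for nh_players.txt from passwd-format
# input. The UID cutoff of 1000 is an assumption -- adjust to taste.
use strict;
use warnings;

sub players_from_passwd {
    my ($fh, $min_uid) = @_;
    my @players;
    while (my $line = <$fh>) {
        my ($name, undef, $uid) = split /:/, $line;
        push @players, "$uid $name"
            if defined $uid && $uid =~ /^\d+$/ && $uid >= $min_uid;
    }
    return @players;
}

if (open my $pw, '<', '/etc/passwd') {
    print "$_\n" for players_from_passwd($pw, 1000);
}
```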

Wow. It looks like someone is actually using this script. A word of advice, though. Save the images to your own server! My friend does not like you hogging his bandwidth. Thank you.

urllist.cgi (by Juha Huttunen) is a script that catches URLs from IRC channels and stores them in text files. This script is an add-on to urllist.cgi for searching among the URLs with different criteria. It was originally written as a home assignment for the LUT course 010577001 Internet Programming in summer 2001, and has been in use for nearly 4 years. Recently, however, urllist.cgi has been converted from text files to SQL. Therefore, I'm handing over to Juha the task of adapting this script to match the changes, and ceasing any further updates on it.

The textfile-based script is still available for download. [Source code] In addition to the script itself, you also need 8 text files (for each channel!) that contain the URLs: files #channel.1 to #channel.7 (for the last seven days) and #channel.old (for URLs older than one week). See how inefficient this is! The files are of the following format:

#channel [Tue Jul 3 12:40:14 2001] <nick>
#channel [Wed Jul 4 13:00:00 2001] <foo>
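A line of that format could be parsed with a sketch like this; the trailing URL field is an assumption, since the example lines above show only the metadata:

```perl
#!/usr/bin/perl
# Sketch: split one urllist line into channel, date, nick and URL.
# The trailing URL field is an assumption; it may also be empty.
use strict;
use warnings;

sub parse_line {
    my ($line) = @_;
    my ($chan, $date, $nick, $url) =
        $line =~ /^(\S+) \[([^\]]+)\] <([^>]+)>\s*(\S*)$/
        or return;
    return { channel => $chan, date => $date, nick => $nick, url => $url };
}

my $entry = parse_line(
    '#channel [Tue Jul 3 12:40:14 2001] <nick> http://example.com/');
print "$entry->{nick} posted $entry->{url} on $entry->{date}\n" if $entry;
```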

This is another add-on to urllist.cgi. The script automatically redirects to the last URL caught from the channel given as a parameter. Quite handy as a button in your browser's personal toolbar or similar. [Source code]

Commissioned Work
