Perl Benchmarks


In computer programming, benchmarks are various measures of performance obtained during a test run of software that provide an estimation of what can be expected under real-world conditions.

Many benchmarking modules are available for download from CPAN. The Benchmark module comes with Perl, though one of the slides used in a 2006 talk by Brian D Foy, editor of The Perl Review, says it "sux" (he later referred to that statement as "a throw away joke" on his blog). Another slide quotes a page that warns people about the limitations of benchmarks:

How can we benchmark a programming language?
We can't — we benchmark programming language implementations.
How can we benchmark language implementations?
We can't — we measure particular programs.

However, differences between the system or software used in benchmarking and the real-world conditions the benchmark is intended to simulate are common; they reduce, but do not negate, the value of a benchmark. Furthermore, the results of many benchmarks measuring the performance of programs written in a particular language can be combined into a single average measurement, which is essentially a benchmark of that language's performance. The Number of tests won chart below is an example of this, and it is surely a better guide to a language's performance than the informal comparisons found in most programming-related internet discussion venues.
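The Benchmark module mentioned above is the usual starting point for this kind of measurement within a single Perl program. A minimal sketch, comparing two ways of building the same string (the task and the labels are illustrative, not taken from the benchmarks discussed here):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# cmpthese runs each sub for roughly 1 CPU second (the negative
# count means "seconds" rather than "iterations") and prints a
# table of iteration rates with relative percentage differences.
cmpthese(-1, {
    concat => sub { my $s = ''; $s .= 'x' for 1 .. 1_000; },
    join   => sub { my $s = join '', ('x') x 1_000; },
});
```

The output is itself subject to the caveats quoted above: it measures these two snippets on one machine under one perl build, nothing more.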


The following data comes from the Debian : AMD™ Sempron™ benchmarks of May 7, 2006 and the Gentoo : Intel® Pentium® 4 benchmarks of May 10, 2006. The Debian and Gentoo tests used equivalent benchmarks, but on Gentoo some benchmarks had a higher workload, most language implementations were built from source, and the Size tests measured gzipped bytes instead of lines of code.

Since this data doesn't record which particular tests were passed and failed, it should be used as a general indication of speed, memory, and size, and not assumed accurate for all applications. More complete, test-specific data can be found on the Debian and Gentoo pages linked above. See also this page about flawed benchmarks and comparisons.

Number of tests won (Debian : AMD™ Sempron™ / Gentoo : Intel® Pentium® 4)

         Perl     C (gcc)
Speed    1/1      12/15
Memory   0/1      13/15
Size     11/14    2/2

         Perl     C++ (g++)
Speed    0/2      14/12
Memory   0/0      14/14
Size     10/14    4/0

         Perl     Java JDK Server
Speed    3/3      13/13
Memory   12/12    4/4
Size     13/16    2/0

         Perl     PHP
Speed    9/8      4/6
Memory   10/10    3/5
Size     10/11    3/4

         Perl     Python
Speed    5/7      11/9
Memory   8/8      8/8
Size     6/3      9/13

         Perl     Ruby
Speed    14/14    2/2
Memory   10/9     6/7
Size     8/2      6/14

Example: in comparing speed between Perl and C (gcc), Perl loses 12 to 1 under Debian and 15 to 1 under Gentoo, but Perl wins most speed benchmarks against PHP under both distributions.
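A "tests won" tally like the one in the chart can be sketched as follows. The test names and timings below are made up for illustration; they are not the measured Debian or Gentoo results:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical per-test run times in seconds (lower is faster).
my %times = (
    'test-a' => { perl => 12.3, c => 1.1 },
    'test-b' => { perl =>  4.0, c => 0.9 },
    'test-c' => { perl =>  2.5, c => 3.0 },
);

# Award each test to whichever implementation ran it fastest.
my %wins;
for my $test (keys %times) {
    my ($fastest) = sort { $times{$test}{$a} <=> $times{$test}{$b} }
                    keys %{ $times{$test} };
    $wins{$fastest}++;
}

printf "%s won %d of %d speed tests\n", $_, $wins{$_}, scalar keys %times
    for sort keys %wins;
# prints:
#   c won 2 of 3 speed tests
#   perl won 1 of 3 speed tests
```

Whether a tally over a fixed test suite fairly represents a language is, of course, exactly the question the quoted warning raises.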

Other Considerations

Some comparisons of computer languages can be more subjective. Quality of documentation, backward-compatibility of new versions of the language, and availability of programmers are often important considerations when choosing a programming language.

Material used to teach a programming language should preferably be written by qualified technical writers. If the documentation is written by a developer of the language, it should be closely reviewed by an experienced programmer with well-defined needs before that programmer decides the language is suitable.

Identifying the writer of documentation isn't always easy. For example, much of Perl's documentation was written by unnamed volunteers with unknown qualifications, though it's reprinted in books credited to qualified writers. Bad experiences with Perl's documentation have led me to seek out secondary sources of instruction, such as internet forums; such secondary sources shouldn't be necessary.

Note: All references to Perl refer to Perl 5.