DBD::SQLite2 - Self Contained RDBMS in a DBI Driver (sqlite 2.x)
use DBI;
my $dbh = DBI->connect("dbi:SQLite2:dbname=dbfile","","");
SQLite is a public domain relational database engine that you can find at http://www.hwaci.com/sw/sqlite/.
Rather than ask you to install SQLite first, because SQLite is public domain, DBD::SQLite2 includes the entire thing in the distribution. So in order to get a fast, transaction-capable RDBMS working for your Perl project you simply have to install this module, and nothing else.
SQLite supports the following features:
- A large subset of SQL92 (see http://www.hwaci.com/sw/sqlite/lang.html for details).
- A complete database stored in a single disk file, making it easier to move things around than with DBD::CSV.
- Atomic commit and rollback: DBD::SQLite2 is small and light, but it supports full transactions!
- User-defined aggregate and regular functions, which can be registered with the SQL parser.
There's lots more to it, so please refer to the docs on the SQLite web page, listed above, for SQL details. Also refer to the DBI manpage for details on how to use DBI itself.
The API works like every DBI module does. Please see the DBI manpage for more details about core features.
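As a brief, hedged sketch of ordinary DBI usage against this driver (the file and table names below are purely illustrative):
use DBI;
my $dbh = DBI->connect("dbi:SQLite2:dbname=example.db", "", "", { RaiseError => 1 });
$dbh->do("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)");
my $sth = $dbh->prepare("INSERT INTO notes (body) VALUES (?)");
$sth->execute("hello from DBD::SQLite2");
my $rows = $dbh->selectall_arrayref("SELECT id, body FROM notes");
print "$_->[0]: $_->[1]\n" for @$rows;
$dbh->disconnect;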
Currently many statement attributes are not implemented or are limited by the typeless nature of the SQLite database.
Returns the version of the SQLite library which DBD::SQLite2 is using, e.g., "2.8.0".
Returns either "UTF-8" or "iso8859" to indicate how the SQLite library was compiled.
Set this attribute to 1 to transparently handle binary nulls in quoted and returned data.
NOTE: This will cause all backslash characters (\) to be doubled up in all columns, regardless of whether or not they contain binary data. This may break your database if you use it from another application. This does not use the built-in sqlite_encode_binary and sqlite_decode_binary functions, which may be considered a bug.
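A hedged illustration only; the attribute name below is an assumption, so check it against the driver attribute documented for your installed DBD::SQLite2:
# Assumption: binary-null handling is enabled via a database-handle attribute.
$dbh->{sqlite_handle_binary_nulls} = 1;   # backslashes are doubled from here on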
func('last_insert_rowid')
This method returns the last inserted rowid. If you specify an INTEGER PRIMARY KEY as the first column in your table, that is the column that is returned. Otherwise, it is the hidden ROWID column. See the sqlite docs for details.
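A minimal sketch of its use (the users table here is purely illustrative):
$dbh->do("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)");
$dbh->do("INSERT INTO users (name) VALUES (?)", undef, "alice");
my $id = $dbh->func('last_insert_rowid');   # id of the row just inserted
print "inserted row id: $id\n";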
func( $name, $argc, $code_ref, 'create_function' )
This method registers a new function which will then be usable in SQL queries. The method's parameters are:
The name of the function. This is the name of the function as it will be used from SQL.
The number of arguments taken by the function. If this number is -1, the function can take any number of arguments.
This should be a reference to the function's implementation.
For example, here is how to define a now() function which returns the current number of seconds since the epoch:
$dbh->func( 'now', 0, sub { return time }, 'create_function' );
After this, it can be used from SQL as:
INSERT INTO mytable VALUES ( now() );
func( $name, $argc, $pkg, 'create_aggregate' )
This method registers a new aggregate function which can then be used from SQL. The method's parameters are:
The name of the aggregate function; this is the name under which the function will be available from SQL.
This is an integer which tells the SQL parser how many arguments the function takes. If that number is -1, the function can take any number of arguments.
This is the package which implements the aggregator interface.
The aggregator interface consists of defining three methods:
new()
This method will be called once to create an object which should be used to aggregate the rows in a particular group. The step() and finalize() methods will be called on the reference returned by this method.
step(@_)
This method will be called once for each row in the aggregate.
finalize()
This method will be called once all rows in the aggregate have been processed, and it should return the aggregate function's result. When there are no rows in the aggregate, finalize() will be called right after new().
Here is a simple aggregate function which returns the variance (example adapted from pysqlite):
package variance;

sub new { bless [], shift }

sub step {
    my ( $self, $value ) = @_;
    push @$self, $value;
}
sub finalize {
    my $self = $_[0];
    my $n = @$self;
    # Variance is NULL unless there is more than one row
    return undef unless $n > 1;
    my $mu = 0;
    $mu += $_ for @$self;                    # mean
    $mu /= $n;
    my $sigma = 0;
    $sigma += ( $_ - $mu )**2 for @$self;    # sum of squared deviations
    return $sigma / ( $n - 1 );
}
$dbh->func( "variance", 1, 'variance', "create_aggregate" );
The aggregate function can then be used as:
SELECT group_name, variance(score) FROM results GROUP BY group_name;
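A minimal sketch of running that query through DBI (the results table and its sample rows are illustrative):
$dbh->do("CREATE TABLE results (group_name TEXT, score REAL)");
my $ins = $dbh->prepare("INSERT INTO results (group_name, score) VALUES (?, ?)");
$ins->execute(@$_) for [ 'a', 10 ], [ 'a', 14 ], [ 'b', 3 ], [ 'b', 5 ];
my $report = $dbh->selectall_arrayref(
    "SELECT group_name, variance(score) FROM results GROUP BY group_name"
);
printf "%s: %s\n", @$_ for @$report;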
To access the database from the command line, try using dbish which comes with the DBI module. Just type:
dbish dbi:SQLite2:foo.db
at the command line to access the file foo.db.
Alternatively you can install SQLite from the link above without conflicting with DBD::SQLite2 and use the supplied sqlite command-line tool.
SQLite is fast, very fast. I recently processed my 72MB log file with it, inserting the data (400,000+ rows) by using transactions and only committing every 1000 rows (otherwise the insertion is quite slow), and then performing queries on the data.
Queries like count(*) and avg(bytes) took fractions of a second to return, but what surprised me most of all was:
SELECT url, count(*) as count FROM access_log GROUP BY url ORDER BY count desc LIMIT 20
This query finds the top 20 hit URLs on the site (http://axkit.org), and it returned within 2 seconds. I'm seriously considering switching my log analysis code to use this little speed demon!
Oh yeah, and that was with no indexes on the table, on a 400MHz PIII.
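For reference, here is a hedged sketch of the batched-commit insertion pattern described above (the parse_log_line helper, the $log_fh filehandle, and the access_log layout are made up for illustration):
$dbh->{AutoCommit} = 0;                             # batch inserts into transactions
my $sth = $dbh->prepare("INSERT INTO access_log (url, bytes) VALUES (?, ?)");
my $count = 0;
while ( my $line = <$log_fh> ) {                    # $log_fh: an already-open log filehandle
    my ( $url, $bytes ) = parse_log_line($line);    # hypothetical parser
    $sth->execute( $url, $bytes );
    $dbh->commit if ++$count % 1000 == 0;           # commit every 1000 rows
}
$dbh->commit;                                       # commit the final partial batch
$dbh->{AutoCommit} = 1;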
For best performance be sure to tune your hdparm settings if you are using Linux. Also you might want to set:
PRAGMA default_synchronous = OFF
which will prevent sqlite from doing fsync's when writing (which slows down non-transactional writes significantly), at the expense of some peace of mind. Also try playing with the cache_size pragma.
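As a hedged sketch, both pragmas can be issued through the database handle right after connecting (the cache size value below is just an example):
$dbh->do("PRAGMA default_synchronous = OFF");   # skip fsync on writes
$dbh->do("PRAGMA cache_size = 10000");          # example value; tune to taste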
Likely to be many, please use http://rt.cpan.org/ for reporting bugs.
Matt Sergeant, matt@sergeant.org
Perl extension functions contributed by Francis J. Lacoste <flacoste@logreport.org> and Wolfgang Sourdeau <wolfgang@logreport.org>