DBD::SQLite - Self Contained RDBMS in a DBI Driver
    use DBI;
    my $dbh = DBI->connect("dbi:SQLite:dbname=dbfile","","");
SQLite is a public domain RDBMS engine that you can find at http://www.hwaci.com/sw/sqlite/.
Rather than ask you to install SQLite first, DBD::SQLite includes the entire engine in the distribution, which is possible because SQLite is public domain. So in order to get a fast, transaction-capable RDBMS working for your Perl project you simply have to install this module, and nothing else.
SQLite supports the following features:
It implements a large subset of SQL; see http://www.hwaci.com/sw/sqlite/lang.html for details.
Everything for your database is stored in a single disk file, making it easier to move things around than with DBD::CSV.
Yes, DBD::SQLite is small and light, but it supports full transactions!
User-defined aggregate or regular functions can be registered with the SQL parser.
There's lots more to it, so please refer to the docs on the SQLite web page, listed above, for SQL details. Also refer to the DBI manpage for details on how to use DBI itself.
The API works like every DBI module does. Please see the DBI manpage for more details about core features.
Currently many statement attributes are not implemented or are limited by the typeless nature of the SQLite database.
Returns the version of the SQLite library which DBD::SQLite is using, e.g., ``2.8.0''. This attribute is read-only.
If set to a true value, DBD::SQLite will turn the UTF-8 flag on for all text strings coming out of the database. For more details on the UTF-8 flag see the perlunicode manpage. The default is for the UTF-8 flag to be turned off.
Also note that due to some strangeness in SQLite's type system (see http://www.sqlite.org/datatype3.html), if you want to retain blob-style behavior for some columns under $dbh->{unicode} = 1 (say, to store images in the database), you have to state so explicitly using the 3-argument form of bind_param (see the DBI manpage) when doing updates:
    use DBI qw(:sql_types);
    $dbh->{unicode} = 1;
    my $sth = $dbh->prepare("INSERT INTO mytable (blobcolumn) VALUES (?)");
    $sth->bind_param(1, $binary_data, SQL_BLOB); # $binary_data will be stored as-is
Defining the column type as BLOB in the DDL is not sufficient.
func('last_insert_rowid')
This method returns the last inserted rowid. If you specify an INTEGER PRIMARY KEY as the first column in your table, that is the column that is returned. Otherwise, it is the hidden ROWID column. See the sqlite docs for details.
Note: You can now use $dbh->last_insert_id() if you have a recent version of DBI.
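As a sketch (the table and column names here are illustrative), both calls can be used like this against an in-memory database:

```perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect( "dbi:SQLite:dbname=:memory:", "", "",
                        { RaiseError => 1 } );

$dbh->do("CREATE TABLE mytable ( id INTEGER PRIMARY KEY, name TEXT )");
$dbh->do("INSERT INTO mytable ( name ) VALUES ( 'foo' )");

# Driver-specific call:
my $rowid = $dbh->func('last_insert_rowid');

# Portable DBI call, with recent DBI versions:
my $id = $dbh->last_insert_id( undef, undef, 'mytable', 'id' );
# Both return the rowid of the row just inserted.
```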
func('busy_timeout') retrieves the current busy timeout; func($ms, 'busy_timeout') sets it. The timeout is in milliseconds.
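For example (a sketch against an in-memory database; the two calling forms follow the getter/setter description above):

```perl
use DBI;

my $dbh = DBI->connect( "dbi:SQLite:dbname=:memory:", "", "",
                        { RaiseError => 1 } );

# Wait up to 5 seconds for a competing lock before failing with
# a "database is locked" error:
$dbh->func( 5000, 'busy_timeout' );

# Read the current setting back (in milliseconds):
my $timeout = $dbh->func('busy_timeout');
```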
This method will register a new function which will then be usable in SQL queries. The method's parameters are:
The name of the function. This is the name of the function as it will be used from SQL.
The number of arguments taken by the function. If this number is -1, the function can take any number of arguments.
This should be a reference to the function's implementation.
For example, here is how to define a now() function which returns the current number of seconds since the epoch:
$dbh->func( 'now', 0, sub { return time }, 'create_function' );
After this, it can be used from SQL as:

    INSERT INTO mytable VALUES ( now() );
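Functions that take arguments work the same way. Here is a sketch (the function name perl_uc is just an illustration) that exposes Perl's uc to SQL:

```perl
use DBI;

my $dbh = DBI->connect( "dbi:SQLite:dbname=:memory:", "", "",
                        { RaiseError => 1 } );

# A one-argument function: SQL's perl_uc(x) calls Perl's uc($x).
$dbh->func( 'perl_uc', 1, sub { uc $_[0] }, 'create_function' );

my ($up) = $dbh->selectrow_array("SELECT perl_uc('hello')");
# $up is now 'HELLO'
```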
This method will register a new aggregate function which can then be used from SQL. The method's parameters are:
The name of the aggregate function; this is the name under which the function will be available from SQL.
This is an integer which tells the SQL parser how many arguments the function takes. If that number is -1, the function can take any number of arguments.
This is the package which implements the aggregator interface.
The aggregator interface consists of defining three methods:
new()

This method will be called once to create an object which should be used to aggregate the rows in a particular group. The step() and finalize() methods will be called on the reference returned by this method.
step(@_)

This method will be called once for each row in the aggregate.
finalize()

This method will be called once all rows in the aggregate have been processed, and it should return the aggregate function's result. When there are no rows in the aggregate, finalize() will be called right after new().
Here is a simple aggregate function which returns the variance (example adapted from pysqlite):
    package variance;

    sub new { bless [], shift; }

    sub step {
        my ( $self, $value ) = @_;
        push @$self, $value;
    }

    sub finalize {
        my $self = $_[0];
        my $n = @$self;

        # Variance is NULL unless there is more than one row
        return undef unless $n > 1;

        my $mu = 0;
        foreach my $v (@$self) { $mu += $v; }
        $mu /= $n;

        my $sigma = 0;
        foreach my $v (@$self) { $sigma += ( $v - $mu )**2; }
        $sigma = $sigma / ( $n - 1 );

        return $sigma;
    }

    $dbh->func( "variance", 1, 'variance', "create_aggregate" );
The aggregate function can then be used as:
SELECT group_name, variance(score) FROM results GROUP BY group_name;
As of version 1.11, blobs should ``just work'' in SQLite as text columns. However, this causes the data to be treated as a string, so SQL functions such as length(x) will return the length of the column as a NUL-terminated string, rather than the size of the blob in bytes. In order to store natively as a BLOB, use the following code:
    use DBI qw(:sql_types);
    my $dbh = DBI->connect("dbi:SQLite:/path/to/db","","");
    my $blob = `cat foo.jpg`;
    my $sth = $dbh->prepare("INSERT INTO mytable VALUES (1, ?)");
    $sth->bind_param(1, $blob, SQL_BLOB);
    $sth->execute();
And then retrieval just works:
    $sth = $dbh->prepare("SELECT * FROM mytable WHERE id = 1");
    $sth->execute();
    my $row = $sth->fetch;
    my $blobo = $row->[1];
    # now $blobo == $blob
To access the database from the command line, try using dbish which comes with the DBI module. Just type:
dbish dbi:SQLite:foo.db
on the command line to access the file foo.db.
Alternatively you can install SQLite from the link above without conflicting with DBD::SQLite and use the supplied sqlite command line tool.
SQLite is fast, very fast. I recently processed my 72MB log file with it, inserting the data (400,000+ rows) by using transactions and only committing every 1000 rows (otherwise the insertion is quite slow), and then performing queries on the data.
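That insert loop can be sketched as follows (the table layout and the @rows data are illustrative stand-ins for parsed log lines; the point is committing in batches rather than per row):

```perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect( "dbi:SQLite:dbname=:memory:", "", "",
                        { RaiseError => 1, AutoCommit => 0 } );

$dbh->do("CREATE TABLE access_log ( url TEXT, bytes INTEGER )");

# Dummy stand-in for parsed log lines:
my @rows = map { [ "/page$_", $_ * 10 ] } 1 .. 2500;

my $sth = $dbh->prepare("INSERT INTO access_log ( url, bytes ) VALUES ( ?, ? )");

my $count = 0;
for my $row (@rows) {
    $sth->execute(@$row);
    # Commit every 1000 rows; committing per row makes insertion very slow.
    $dbh->commit if ++$count % 1000 == 0;
}
$dbh->commit;    # commit the final partial batch
```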
Queries like count(*) and avg(bytes) took fractions of a second to return, but what surprised me most of all was:
SELECT url, count(*) as count FROM access_log GROUP BY url ORDER BY count desc LIMIT 20
This query discovers the top 20 hit URLs on the site (http://axkit.org), and it returned within 2 seconds. I'm seriously considering switching my log analysis code to use this little speed demon!
Oh yeah, and that was with no indexes on the table, on a 400MHz PIII.
For best performance be sure to tune your hdparm settings if you are using Linux. Also you might want to set:
PRAGMA default_synchronous = OFF
which prevents sqlite from doing fsyncs when writing (these slow down non-transactional writes significantly) at the expense of some peace of mind. Also try playing with the cache_size pragma.
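Pragmas can be issued through an ordinary do() call; here is a sketch (the values are illustrative):

```perl
use DBI;

my $dbh = DBI->connect( "dbi:SQLite:dbname=:memory:", "", "",
                        { RaiseError => 1 } );

# Trade durability for speed: skip fsync on writes.
$dbh->do("PRAGMA default_synchronous = OFF");

# Enlarge the page cache (the value is a number of pages).
$dbh->do("PRAGMA cache_size = 10000");
```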
Likely to be many, please use http://rt.cpan.org/ for reporting bugs.
Matt Sergeant, matt@sergeant.org
Perl extension functions contributed by Francis J. Lacoste <flacoste@logreport.org> and Wolfgang Sourdeau <wolfgang@logreport.org>