This is a fork of the original sqlx library by @launchbadge. sqlx v0.7 broke backwards compatibility with v0.6 in a way that makes it difficult to upgrade. sqlx v0.7 also removed support for Microsoft SQL Server (to sell it as a separate commercial product). This fork is intended to be a drop-in replacement for sqlx v0.6, with the following changes:
- Added support for ODBC.
- Updated to use the latest versions of dependencies, including:
  - All missing security updates
  - The latest SQLite version
- Improved support for Microsoft SQL Server, including:
  - Support for reading and writing `binary` and `varbinary` data
  - Support for reading and writing `numeric`, `decimal`, `date`, `datetime`, and `datetimeoffset` data (using the `chrono` feature), and `uuid` data (using the `uuid` feature)
  - Multiple bug fixes around string handling, including better support for long strings
  - Support for packet chunking, which fixes a bug where large bound parameters or large queries would fail
  - Support for TLS encrypted connections
The main use case driving the development of sqlx-oldapi is the SQLPage SQL-only rapid application building tool.
Have a question? Be sure to check the FAQ first!
SQLx is an async, pure Rust† SQL crate featuring compile-time checked queries without a DSL.
- Truly Asynchronous. Built from the ground up using async/await for maximum concurrency.
- Compile-time checked queries (if you want). See SQLx is not an ORM.
- Database Agnostic. Support for PostgreSQL, MySQL, SQLite, MSSQL, and ODBC.
- Pure Rust. The Postgres and MySQL/MariaDB drivers are written in pure Rust using zero unsafe†† code.
- Runtime Agnostic. Works on different runtimes (`async-std` / `tokio` / `actix`) and TLS backends (`native-tls`, `rustls`).
† The SQLite driver uses the libsqlite3 C library as SQLite is an embedded database (the only way we could be pure Rust for SQLite is by porting all of SQLite to Rust).
†† SQLx uses `#![forbid(unsafe_code)]` unless the `sqlite` feature is enabled. As the SQLite driver interacts with C, those interactions are `unsafe`.
- Cross-platform. Being native Rust, SQLx will compile anywhere Rust is supported.
- Built-in connection pooling with `sqlx::Pool`.
- Row streaming. Data is read asynchronously from the database and decoded on demand.
- Automatic statement preparation and caching. When using the high-level query API (`sqlx::query`), statements are prepared and cached per connection.
- Simple (unprepared) query execution including fetching results into the same `Row` types used by the high-level API. Supports batch execution and returning results from all statements.
- Transport Layer Security (TLS) where supported (MySQL and PostgreSQL).
- Asynchronous notifications using `LISTEN` and `NOTIFY` for PostgreSQL.
- Nested transactions with support for save points (see the sketch after this list).
- `Any` database driver for changing the database driver at runtime. An `AnyPool` connects to the driver indicated by the URL scheme.
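Nested transactions map onto savepoints. Here is a minimal sketch, assuming this fork keeps sqlx 0.6's `Acquire` and `Transaction` APIs; the `accounts` table and the `transfer` function are purely illustrative:

```rust
use sqlx_oldapi::{Acquire, PgPool};

async fn transfer(pool: &PgPool) -> Result<(), sqlx_oldapi::Error> {
    // Outer transaction, taken from the pool
    let mut tx = pool.begin().await?;

    sqlx_oldapi::query("UPDATE accounts SET balance = balance - 10 WHERE id = 1")
        .execute(&mut tx)
        .await?;

    // Nested transaction: issued as a SAVEPOINT on the same connection
    let mut nested = tx.begin().await?;
    sqlx_oldapi::query("UPDATE accounts SET balance = balance + 10 WHERE id = 2")
        .execute(&mut nested)
        .await?;
    nested.commit().await?; // releases the savepoint

    tx.commit().await?; // commits the outer transaction
    Ok(())
}
```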
SQLx is compatible with the async-std, tokio and actix runtimes, and the native-tls and rustls TLS backends. When adding the dependency, you must choose a runtime feature that combines a runtime and a TLS backend.
```toml
# Cargo.toml
[dependencies]
# tokio + rustls
sqlx-oldapi = { version = "0.6", features = [ "runtime-tokio-rustls" ] }
# async-std + native-tls
sqlx-oldapi = { version = "0.6", features = [ "runtime-async-std-native-tls" ] }
```
The runtime and TLS backend not being separate feature sets to select is a workaround for a Cargo issue.
- `runtime-async-std-native-tls`: Use the `async-std` runtime and `native-tls` TLS backend.
- `runtime-async-std-rustls`: Use the `async-std` runtime and `rustls` TLS backend.
- `runtime-tokio-native-tls`: Use the `tokio` runtime and `native-tls` TLS backend.
- `runtime-tokio-rustls`: Use the `tokio` runtime and `rustls` TLS backend.
- `runtime-actix-native-tls`: Use the `actix` runtime and `native-tls` TLS backend.
- `runtime-actix-rustls`: Use the `actix` runtime and `rustls` TLS backend.
- `postgres`: Add support for the Postgres database server.
- `mysql`: Add support for the MySQL/MariaDB database server.
- `mssql`: Add support for the MSSQL database server.
- `sqlite`: Add support for the self-contained SQLite database engine.
- `odbc`: Add support for ODBC database connections.
- `any`: Add support for the `Any` database driver, which can proxy to a database driver at runtime.
- `macros`: Add support for the `query*!` macros, which allow compile-time checked queries.
- `migrate`: Add support for migration management and the `migrate!` macro, which allows compile-time embedded migrations.
- `uuid`: Add support for UUID (in Postgres).
- `chrono`: Add support for date and time types from `chrono`.
- `time`: Add support for date and time types from the `time` crate (an alternative to `chrono`; preferred by the `query!` macro if both are enabled).
- `bstr`: Add support for `bstr::BString`.
- `git2`: Add support for `git2::Oid`.
- `bigdecimal`: Add support for `NUMERIC` using the `bigdecimal` crate.
- `decimal`: Add support for `NUMERIC` using the `rust_decimal` crate.
- `ipnetwork`: Add support for `INET` and `CIDR` (in Postgres) using the `ipnetwork` crate.
- `json`: Add support for `JSON` and `JSONB` (in Postgres) using the `serde_json` crate.
- `tls`: Add support for TLS connections.
- `offline`: Enables building the macros in offline mode when a live database is not available (such as in CI). Requires `sqlx-cli` installed to use; see sqlx-cli/README.md.
SQLx supports compile-time checked queries. It does not, however, do this by providing a Rust API or DSL (domain-specific language) for building queries. Instead, it provides macros that take regular SQL as an input and ensure that it is valid for your database. The way this works is that SQLx connects to your development DB at compile time to have the database itself verify (and return some info on) your SQL queries. This has some potentially surprising implications:
- Since SQLx never has to parse the SQL string itself, any syntax that the development DB accepts can be used (including things added by database extensions)
- Due to the different amount of information databases let you retrieve about queries, the extent of SQL verification you get from the query macros depends on the database
If you are looking for an (asynchronous) ORM, you can check out ormx or SeaORM, both of which are built on top of SQLx.
See the examples/ folder for more in-depth usage.
```toml
[dependencies]
# PICK ONE:

# Async-std:
sqlx-oldapi = { version = "0.6", features = [ "runtime-async-std-native-tls", "postgres" ] }
async-std = { version = "1", features = [ "attributes" ] }

# Tokio:
sqlx-oldapi = { version = "0.6", features = [ "runtime-tokio-native-tls", "postgres" ] }
tokio = { version = "1", features = ["full"] }

# Actix-web:
sqlx-oldapi = { version = "0.6", features = [ "runtime-actix-native-tls", "postgres" ] }
actix-web = "4"
```
```rust
use sqlx_oldapi::postgres::PgPoolOptions;
use sqlx_oldapi::query_as;
// use sqlx_oldapi::mysql::MySqlPoolOptions;
// etc.

#[async_std::main] // or #[tokio::main] or #[actix_web::main]
async fn main() -> Result<(), sqlx_oldapi::Error> {
    // Create a connection pool
    //  for MySQL, use MySqlPoolOptions::new()
    //  for SQLite, use SqlitePoolOptions::new()
    //  for ODBC, use OdbcPoolOptions::new()
    //  etc.
    let pool = PgPoolOptions::new()
        .max_connections(5)
        .connect("postgres://postgres:password@localhost/test").await?;

    // Make a simple query to return the given parameter
    // (use a question mark `?` instead of `$1` for MySQL)
    let row: (i64,) = query_as("SELECT $1")
        .bind(150_i64)
        .fetch_one(&pool).await?;

    assert_eq!(row.0, 150);

    Ok(())
}
```
A single connection can be established using any of the database connection types and calling `connect()`.
```rust
use sqlx_oldapi::{Connection, SqliteConnection};

let conn = SqliteConnection::connect("sqlite::memory:").await?;
```
Generally, you will want to instead create a connection pool (`sqlx_oldapi::Pool`) in order for your application to regulate how many server-side connections it's using.
```rust
use sqlx_oldapi::MySqlPool;

let pool = MySqlPool::connect("mysql://user:pass@host/database").await?;
```
In SQL, queries can be separated into prepared (parameterized) or unprepared (simple). Prepared queries have their
query plan cached, use a binary mode of communication (lower bandwidth and faster decoding), and utilize parameters
to avoid SQL injection. Unprepared queries are simple and intended only for use cases where a prepared statement
will not work, such as various database commands (e.g., PRAGMA or SET or BEGIN).
SQLx supports all operations with both types of queries. In SQLx, a `&str` is treated as an unprepared query,
and a `Query` or `QueryAs` struct is treated as a prepared query.
```rust
// low-level, Executor trait
conn.execute("BEGIN").await?;                    // unprepared, simple query
conn.execute(query("DELETE FROM table")).await?; // prepared, cached query
```
We should prefer to use the high-level `query` interface whenever possible. To make this easier, there are finalizers
on the type to avoid the need to wrap with an executor.
query("DELETE FROM table").execute(&mut conn).await?; query("DELETE FROM table").execute(&pool).await?;
The `execute` query finalizer returns the number of affected rows, if any, and drops all received results.
In addition, there are `fetch`, `fetch_one`, `fetch_optional`, and `fetch_all` to receive results.
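For instance (a minimal sketch; the `users` table and its columns are illustrative), `execute` reports how many rows were affected, while `fetch_all` collects every returned row:

```rust
// Number of rows deleted, reported by the database
let deleted = sqlx_oldapi::query("DELETE FROM users WHERE inactive = true")
    .execute(&pool)
    .await?
    .rows_affected();

// All rows of the result set, collected into a Vec
let rows = sqlx_oldapi::query("SELECT id, name FROM users")
    .fetch_all(&pool)
    .await?;
```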
The `Query` type returned from `query` will return `Row<'conn>` from the database. Column values can be accessed
by ordinal or by name with `row.get()`. As the `Row` retains an immutable borrow on the connection, only one
`Row` may exist at a time.
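A minimal sketch of both access styles (the `users` table is illustrative); note that `get()` panics on a decoding error, while `try_get()` returns a `Result`:

```rust
use sqlx_oldapi::Row; // brings `get` / `try_get` into scope

let row = sqlx_oldapi::query("SELECT id, name FROM users LIMIT 1")
    .fetch_one(&pool)
    .await?;

let id: i64 = row.get(0);                 // by ordinal
let name: String = row.try_get("name")?;  // by name, with error handling
```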
The `fetch` query finalizer returns a stream-like type that iterates through the rows in the result sets.
```rust
// provides `try_next`
use futures::TryStreamExt;

let mut rows = query("SELECT * FROM users WHERE email = ?")
    .bind(email)
    .fetch(&mut conn);

while let Some(row) = rows.try_next().await? {
    // map the row into a user-defined domain type
    let email: &str = row.try_get("email")?;
}
```
To assist with mapping the row into a domain type, there are two idioms that may be used:
```rust
let mut stream = query("SELECT * FROM users")
    .map(|row: PgRow| {
        // map the row into a user-defined domain type
    })
    .fetch(&mut conn);
```
```rust
#[derive(sqlx_oldapi::FromRow)]
struct User { name: String, id: i64 }

let mut stream = query_as::<_, User>("SELECT * FROM users WHERE email = ? OR name = ?")
    .bind(user_email)
    .bind(user_name)
    .fetch(&mut conn);
```
Instead of a stream of results, we can use fetch_one or fetch_optional to request one required or optional result
from the database.
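A minimal sketch, reusing the `User` type from the previous example (`user_id` and `email` are illustrative bind values): `fetch_one` returns an error when no row matches, while `fetch_optional` returns `None` instead.

```rust
let user: User = query_as::<_, User>("SELECT * FROM users WHERE id = ?")
    .bind(user_id)
    .fetch_one(&pool)
    .await?; // errors with `RowNotFound` if nothing matches

let maybe_user: Option<User> = query_as::<_, User>("SELECT * FROM users WHERE email = ?")
    .bind(email)
    .fetch_optional(&pool)
    .await?; // `None` if nothing matches
```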
ODBC support requires the odbc feature and an ODBC driver manager with appropriate drivers installed.
- Install unixODBC and ODBC drivers:
```bash
# Ubuntu/Debian
sudo apt-get update
sudo apt-get install -y unixodbc odbcinst unixodbc-common libodbcinst2

# Install database-specific drivers
sudo apt-get install -y odbc-postgresql  # For PostgreSQL
sudo apt-get install -y libsqliteodbc    # For SQLite
```
ODBC drivers are available for most popular databases:
- Oracle: https://www.oracle.com/uk/database/technologies/releasenote-odbc-ic.html
- Snowflake: https://docs.snowflake.com/en/developer-guide/odbc/odbc
- BigQuery: https://cloud.google.com/bigquery/docs/reference/odbc-jdbc-drivers
- DB2: https://www.ibm.com/support/pages/db2-odbc-cli-driver-download-and-installation-information
- Configure your ODBC connection using a DSN (Data Source Name):
```ini
# ~/.odbc.ini
[MyDatabase]
Description=SnowflakeDB for SQLx Testing
Driver=SnowflakeDSIIDriver
Locale=en-US
SERVER=abcxyz-hh12345.snowflakecomputing.com
Database=testdb
PORT=443
SSL=on
ACCOUNT=abcxyz-hh12345
uid=test
pwd=ab_XY.1234567=-
schema=public
```
- Test your connection:
echo "SELECT 1;" | isql -v MyDatabase
- Install ODBC drivers for your database (usually included with database client software)
- Configure DSN through ODBC Data Source Administrator
- Use the DSN name in your connection string
```rust
use sqlx_oldapi::odbc::OdbcPoolOptions;

// Using a DSN
let pool = OdbcPoolOptions::new()
    .max_connections(5)
    .connect("DSN=MyDatabase").await?;

// Using a connection string
let pool = OdbcPoolOptions::new()
    .max_connections(5)
    .connect("Driver={PostgreSQL Unicode};Server=localhost;Port=5432;Database=mydb;UID=myuser;PWD=mypassword").await?;

// Using the Any driver (automatically detects ODBC)
use sqlx_oldapi::any::AnyPoolOptions;
let pool = AnyPoolOptions::new()
    .max_connections(5)
    .connect("DSN=MyDatabase").await?;
```
We can use the macro `sqlx_oldapi::query!` to achieve compile-time syntactic and semantic verification of the SQL, with
the output mapped to an anonymous record type where each SQL column is a Rust field (using raw identifiers where needed).
```rust
let countries = query!(
        "
SELECT country, COUNT(*) as count
FROM users
WHERE organization = ?
GROUP BY country
        ",
        organization
    )
    .fetch_all(&pool) // -> Vec<{ country: String, count: i64 }>
    .await?;

// countries[0].country
// countries[0].count
```
Differences from `query()`:
- The input (or bind) parameters must be given all at once (and they are compile-time validated to be the right number and the right type).

- The output type is an anonymous record. In the above example the type would be similar to:

      { country: String, count: i64 }

- The `DATABASE_URL` environment variable must be set at build time to a database which it can prepare queries against; the database does not have to contain any data but must be the same kind (MySQL, Postgres, etc.) and have the same schema as the database you will be connecting to at runtime.

  For convenience, you can use a `.env` file [1] to set `DATABASE_URL` so that you don't have to pass it every time:

      DATABASE_URL=mysql://localhost/my_database
The biggest downside to `query!()` is that the output type cannot be named (due to Rust not
officially supporting anonymous records). To address that, there is a `query_as!()` macro that is
mostly identical except that you can name the output type.
```rust
// no traits are needed
struct Country { country: String, count: i64 }

let countries = query_as!(Country,
        "
SELECT country, COUNT(*) as count
FROM users
WHERE organization = ?
GROUP BY country
        ",
        organization
    )
    .fetch_all(&pool) // -> Vec<Country>
    .await?;

// countries[0].country
// countries[0].count
```
To avoid the need to have a development database around to compile the project even when no
modifications (to the database-accessing parts of the code) are done, you can enable "offline mode"
to cache the results of the SQL query analysis using the sqlx command-line tool. See
sqlx-cli/README.md.
Compile-time verified queries do quite a bit of work at compile time. Incremental actions like
cargo check and cargo build can be significantly faster when the macros are built with optimizations, by
putting the following in your Cargo.toml (more information in the
Profiles section of The Cargo Book):
```toml
[profile.dev.package.sqlx-macros]
opt-level = 3
```
[1] The `dotenv` crate itself appears abandoned as of December 2021, so we now use the `dotenvy` crate instead. The file format is the same.
This crate uses `#![forbid(unsafe_code)]` to ensure everything is implemented in 100% Safe Rust.
If the `sqlite` feature is enabled, this is downgraded to `#![deny(unsafe_code)]` with `#![allow(unsafe_code)]` on the
`sqlx::sqlite` module. There are several places where we interact with the C SQLite API. We try to document each call for the invariants we're assuming. We absolutely welcome auditing of, and feedback on, our unsafe code usage.
Licensed under either of
- Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT)
at your option.
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.