What’s New in SQLAlchemy 2.1?¶
About this Document
This document describes changes between SQLAlchemy version 2.0 and version 2.1.
Introduction¶
This guide introduces what’s new in SQLAlchemy version 2.1 and also documents changes which affect users migrating their applications from the 2.0 series of SQLAlchemy to 2.1.
Please carefully review the sections on behavioral changes for potentially backwards-incompatible changes in behavior.
General¶
Asyncio "greenlet" dependency no longer installs by default¶
SQLAlchemy 1.4 and 2.0 used a complex expression to determine if the greenlet dependency, needed by the asyncio extension, could be installed from pypi using a pre-built wheel instead of having to build from source. This was because the source build of greenlet is not always trivial on some platforms.
Disadvantages to this approach included that SQLAlchemy needed to track exactly which versions of greenlet were published as wheels on pypi; the setup expression led to problems with some package management tools such as poetry; and it was not possible to install SQLAlchemy without greenlet being installed, even though this is completely feasible if the asyncio extension is not used.
These problems are all solved by keeping greenlet entirely within the [asyncio] target. The only downside is that users of the asyncio extension need to be aware of this extra installation dependency.
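Installing the library along with its asyncio dependencies is done via the extras target:

pip install sqlalchemy[asyncio]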
New Features and Improvements - ORM¶
Session autoflush behavior simplified to be unconditional¶
Session autoflush behavior has been simplified to unconditionally flush the session each time an execution takes place, regardless of whether an ORM statement or Core statement is being executed. This change eliminates the previous conditional logic that only flushed when ORM-related statements were detected.
Previously, the session would only autoflush when executing ORM queries:
# 2.0 behavior - autoflush only occurred for ORM statements
session.add(User(name="new user"))

# This would trigger autoflush
users = session.execute(select(User)).scalars().all()

# This would NOT trigger autoflush
result = session.execute(text("SELECT * FROM users"))
In 2.1, autoflush occurs for all statement executions:
# 2.1 behavior - autoflush occurs for all executions
session.add(User(name="new user"))

# Both of these now trigger autoflush
users = session.execute(select(User)).scalars().all()
result = session.execute(text("SELECT * FROM users"))
This change provides more consistent and predictable session behavior across all types of SQL execution.
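For code that relied on Core statements not flushing, the flush can still be suppressed explicitly; a minimal sketch using the long-standing Session.no_autoflush context manager, assuming an existing session and the users table from the example above:

from sqlalchemy import text

# temporarily opt out of the new unconditional autoflush for this block
with session.no_autoflush:
    result = session.execute(text("SELECT * FROM users"))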
ORM Relationship allows callable for back_populates¶
To help produce code that is more amenable to IDE-level linting and type checking, the relationship.back_populates parameter now accepts both direct references to a class-bound attribute as well as lambdas which do the same:
class A(Base):
    __tablename__ = "a"

    id: Mapped[int] = mapped_column(primary_key=True)

    # use a lambda: to link to B.a directly when it exists
    bs: Mapped[list[B]] = relationship(back_populates=lambda: B.a)


class B(Base):
    __tablename__ = "b"

    id: Mapped[int] = mapped_column(primary_key=True)
    a_id: Mapped[int] = mapped_column(ForeignKey("a.id"))

    # A.bs already exists, so can link directly
    a: Mapped[A] = relationship(back_populates=A.bs)
ORM Mapped Dataclasses no longer populate implicit default, collection-based default_factory in __dict__¶
This behavioral change addresses a widely reported issue with SQLAlchemy’s Declarative Dataclass Mapping feature that was introduced in 2.0. SQLAlchemy ORM has always featured a behavior where a particular attribute on an ORM mapped class will behave differently depending on whether it has an actively set value, including if that value is None, versus if the attribute is not set at all. When Declarative Dataclass Mapping was introduced, the mapped_column.default parameter introduced a new capability, which is to set up a dataclass-level default to be present in the generated __init__ method. This had the unfortunate side effect of breaking various popular workflows, the most prominent of which is creating an ORM object with the foreign key value in lieu of a many-to-one reference:
class Base(MappedAsDataclass, DeclarativeBase):
    pass


class Parent(Base):
    __tablename__ = "parent"

    id: Mapped[int] = mapped_column(primary_key=True, init=False)

    related_id: Mapped[int | None] = mapped_column(ForeignKey("child.id"), default=None)
    related: Mapped[Child | None] = relationship(default=None)


class Child(Base):
    __tablename__ = "child"

    id: Mapped[int] = mapped_column(primary_key=True, init=False)
In the above mapping, the __init__ method generated for Parent would in Python code look like this:
def __init__(self, related_id=None, related=None): ...
This means that creating a new Parent with related_id only would populate both related_id and related in __dict__:
# 2.0 behavior; will INSERT NULL for related_id due to the presence
# of related=None
>>> p1 = Parent(related_id=5)
>>> p1.__dict__
{'related_id': 5, 'related': None, '_sa_instance_state': ...}
The None value for 'related' means that SQLAlchemy favors the non-present related Child over the present value for 'related_id', which would be discarded, and NULL would be inserted for 'related_id' instead.
In the new behavior, the __init__ method instead looks like the example below, using a special constant DONT_SET indicating a non-present value for 'related' should be ignored. This allows the class to behave more closely to how SQLAlchemy ORM mapped classes traditionally operate:
def __init__(self, related_id=DONT_SET, related=DONT_SET): ...
We then get a __dict__ setup that will follow the expected behavior of omitting related from __dict__ and later running an INSERT with related_id=5:
# 2.1 behavior; will INSERT 5 for related_id
>>> p1 = Parent(related_id=5)
>>> p1.__dict__
{'related_id': 5, '_sa_instance_state': ...}
Dataclass defaults are delivered via descriptor instead of __dict__¶
The above behavior goes a step further: in order to honor default values that are something other than None, the value of the dataclass-level default (i.e. set using any of the mapped_column.default, column_property.default, or deferred.default parameters) is directed to be delivered at the Python descriptor level, using mechanisms in SQLAlchemy’s attribute system that normally return None for un-populated columns, so that even though the default is not populated into __dict__, it’s still delivered when the attribute is accessed. This behavior is based on what Python dataclasses itself does when a default is indicated for a field that also includes init=False.
In the example below, an immutable default "default_status" is applied to a column called status:
class Base(MappedAsDataclass, DeclarativeBase):
    pass


class SomeObject(Base):
    __tablename__ = "parent"

    id: Mapped[int] = mapped_column(primary_key=True, init=False)

    status: Mapped[str] = mapped_column(default="default_status")
In the above mapping, constructing SomeObject with no parameters will deliver no values inside of __dict__, but will deliver the default value via descriptor:
# object is constructed with no value for ``status``
>>> s1 = SomeObject()

# the default value is not placed in ``__dict__``
>>> s1.__dict__
{'_sa_instance_state': ...}

# but the default value is delivered at the object level via descriptor
>>> s1.status
'default_status'

# the value still remains unpopulated in ``__dict__``
>>> s1.__dict__
{'_sa_instance_state': ...}
The value passed as mapped_column.default is also assigned, as was the case before, to the Column.default parameter of the underlying Column, where it takes place as a Python-level default for INSERT statements. So while __dict__ is never populated with the default value on the object, the INSERT still includes the value in the parameter set. This essentially modifies the Declarative Dataclass Mapping system to work more like traditional ORM mapped classes, where a "default" means just that, a column-level default.
Dataclass defaults are accessible on objects even without init¶
As the new behavior makes use of descriptors in a similar way as Python dataclasses do themselves when init=False, the new feature implements this behavior as well. This is an all-new behavior where an ORM mapped class can deliver a default value for fields even if they are not part of the __init__() method at all. In the mapping below, the status field is configured with init=False, meaning it’s not part of the constructor at all:
class Base(MappedAsDataclass, DeclarativeBase):
    pass


class SomeObject(Base):
    __tablename__ = "parent"

    id: Mapped[int] = mapped_column(primary_key=True, init=False)

    status: Mapped[str] = mapped_column(default="default_status", init=False)
When we construct SomeObject() with no arguments, the default is accessible on the instance, delivered via descriptor:
>>> so = SomeObject()
>>> so.status
'default_status'
default_factory for collection-based relationships internally uses DONT_SET¶
A late addition to the behavioral change brings equivalent behavior to the use of the relationship.default_factory parameter with collection-based relationships. This attribute is documented as being limited to exactly the collection class that’s stated on the left side of the annotation, which is now enforced at mapper configuration time:
class Parent(Base):
    __tablename__ = "parents"

    id: Mapped[int] = mapped_column(primary_key=True, init=False)
    name: Mapped[str]
    children: Mapped[list["Child"]] = relationship(default_factory=list)
With the above mapping, the actual relationship.default_factory parameter is replaced internally to instead use the same DONT_SET constant that’s applied to relationship.default for many-to-one relationships. SQLAlchemy’s existing collection-on-attribute access behavior occurs as always on access:
>>> p1 = Parent(name="p1")
>>> p1.children
[]
This change to relationship.default_factory accommodates a similar merge-based condition, where an empty collection would otherwise be forced into a new object that in fact wants a merged collection to arrive.
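Conversely, a default_factory that does not produce the collection class stated in the annotation is now rejected when the mapper is configured. A sketch of the failing case (the exact exception raised and its wording are assumptions here):

class BrokenParent(Base):
    __tablename__ = "broken_parents"

    id: Mapped[int] = mapped_column(primary_key=True, init=False)
    name: Mapped[str]

    # annotated as list, but the factory produces a set; under 2.1 this
    # fails at mapper configuration time rather than misbehaving later
    children: Mapped[list["Child"]] = relationship(default_factory=set)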
New rules for None-return for ORM Composites¶
ORM composite attributes configured using composite() can now specify whether or not they should return None, using a new parameter composite.return_none_on. By default, a composite attribute now returns a non-None object in all cases, whereas previously under 2.0, a None value would be returned for a pending object with None values for all composite columns.
Given a composite mapping:
import dataclasses


@dataclasses.dataclass
class Point:
    x: int | None
    y: int | None


class Base(DeclarativeBase):
    pass


class Vertex(Base):
    __tablename__ = "vertices"

    id: Mapped[int] = mapped_column(primary_key=True)

    start: Mapped[Point] = composite(mapped_column("x1"), mapped_column("y1"))
    end: Mapped[Point] = composite(mapped_column("x2"), mapped_column("y2"))
When constructing a pending Vertex object, the initial value of the x1, y1, x2, y2 columns is None. Under version 2.0, accessing the composite at this stage would automatically return None:
>>> v1 = Vertex()
>>> v1.start
None
Under 2.1, the default behavior is to return the composite class with attributes set to None:
>>> v1 = Vertex()
>>> v1.start
Point(x=None, y=None)
This behavior is now consistent with other forms of access, such as accessing the attribute from a persistent object as well as querying for the attribute directly. It is also consistent with the mapped annotation Mapped[Point].
The behavior can be further controlled by applying the composite.return_none_on parameter, which accepts a callable that returns True if the composite should be returned as None, given the arguments that would normally be passed to the composite class. The typical callable here would return True (i.e. the value should be None) for the case where all columns are None:
class Vertex(Base):
    __tablename__ = "vertices"

    id: Mapped[int] = mapped_column(primary_key=True)

    start: Mapped[Point] = composite(
        mapped_column("x1"),
        mapped_column("y1"),
        return_none_on=lambda x, y: x is None and y is None,
    )
    end: Mapped[Point] = composite(
        mapped_column("x2"),
        mapped_column("y2"),
        return_none_on=lambda x, y: x is None and y is None,
    )
For the above class, any Vertex instance, whether pending or persistent, will return None for start and end if both composite columns for the attribute are None:
>>> v1 = Vertex()
>>> v1.start
None
The composite.return_none_on parameter is also set automatically, if not otherwise set explicitly, when using ORM Annotated Declarative; setting the left hand side to Optional or | None will assign the above None-handling callable:
class Vertex(Base):
    __tablename__ = "vertices"

    id: Mapped[int] = mapped_column(primary_key=True)

    # will apply return_none_on=lambda *args: all(arg is None for arg in args)
    start: Mapped[Point | None] = composite(mapped_column("x1"), mapped_column("y1"))
    end: Mapped[Point | None] = composite(mapped_column("x2"), mapped_column("y2"))
The above object will return None for start and end automatically if the columns are also None:
>>> session.scalars(
...     select(Vertex.start).where(Vertex.x1 == None, Vertex.y1 == None)
... ).first()
None
If composite.return_none_on is set explicitly, that value will supersede the choice made by ORM Annotated Declarative. This includes that the parameter may be explicitly set to None, which will disable the ORM Annotated Declarative setting from taking place.
New RegistryEvents System for ORM Mapping Customization¶
SQLAlchemy 2.1 introduces RegistryEvents, providing event hooks that are specific to a registry. These events include RegistryEvents.before_configured() and RegistryEvents.after_configured() to complement the same-named events that can be established on a Mapper, as well as RegistryEvents.resolve_type_annotation(), which allows programmatic access to the ORM Annotated Declarative type resolution process. Examples are provided illustrating how to define resolution schemes for any kind of type hierarchy in an automated fashion, including PEP 695 type aliases.
E.g.:
from typing import Any

from sqlalchemy import event
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import registry as RegistryType
from sqlalchemy.orm import TypeResolve
from sqlalchemy.types import TypeEngine


class Base(DeclarativeBase):
    pass


@event.listens_for(Base, "resolve_type_annotation")
def resolve_custom_type(resolve_type: TypeResolve) -> TypeEngine[Any] | None:
    if resolve_type.resolved_type is MyCustomType:
        return MyCustomSQLType()
    else:
        return None


@event.listens_for(Base, "after_configured")
def after_base_configured(registry: RegistryType) -> None:
    print(f"Registry {registry} fully configured")
See also
Resolving Types Programmatically with Events - Complete documentation on using the RegistryEvents.resolve_type_annotation() event
RegistryEvents - Complete API reference for all registry events
New Features and Improvements - Core¶
Row now represents individual column types directly without Tuple¶
SQLAlchemy 2.0 implemented a broad array of PEP 484 typing throughout all components, including a new ability for row-returning statements such as select() to keep track of individual column types, which were then passed through the execution phase onto the Result object and then to the individual Row objects. Described at SQL Expression / Statement / Result Set Typing, this approach solved several issues with statement / row typing, but some remained unsolvable. In 2.1, one of those issues, that the individual column types needed to be packaged into a typing.Tuple, is now resolved using new PEP 646 integration, which allows for tuple-like types that are not actually typed as Tuple.
In SQLAlchemy 2.0, a statement such as:
stmt = select(column("x", Integer), column("y", String))
Would be typed as:
Select[Tuple[int, str]]
In 2.1, it’s now typed as:
Select[int, str]
When executing stmt, the Result and Row objects will be typed as Result[int, str] and Row[int, str], respectively.
The prior workaround of using Row._t to type rows as a real Tuple is no longer needed, and projects can migrate off this pattern. Mypy users will need to make use of Mypy 1.7 or greater for PEP 646 integration to be available.
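As a sketch of what the new typing looks like in practice (reveal_type() is a type-checker construct, and an already-configured session is assumed):

from sqlalchemy import Integer, String, column, select

stmt = select(column("x", Integer), column("y", String))

row = session.execute(stmt).one()
reveal_type(row)  # type checker reports: Row[int, str]

x, y = row  # x is inferred as int, y as str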
Limitations¶
Not yet solved by PEP 646 or any other pep is the ability for an arbitrary number of expressions within Select and others to be mapped to row objects, without stating each argument position explicitly within typing annotations. To work around this issue, SQLAlchemy makes use of automated "stub generation" tools to generate hardcoded mappings of different numbers of positional arguments to constructs like select() to resolve to individual Unpack[] expressions (in SQLAlchemy 2.0, this generation produced Tuple[] annotations instead). This means that there are arbitrary limits on how many specific column expressions will be typed within the Row object, without resorting to Any for remaining expressions; for select(), it’s currently ten expressions, and for DML expressions like insert() that use Insert.returning(), it’s eight. If and when a new pep that provides a Map operator to PEP 646 is proposed, this limitation can be lifted. [1] Originally, it was mistakenly assumed that this limitation prevented PEP 646 from being usable at all; however, the Unpack construct does in fact replace everything that was done using Tuple in 2.0.
An additional limitation, for which there is no proposed solution, is that there’s no way for the name-based attributes on Row to be automatically typed, so these continue to be typed as Any (e.g. row.x and row.y for the above example). With current language features, this could only be fixed by having an explicit class-based construct that allows one to compose an explicit Row with explicit fields up front, which would be verbose and not automatic.
URL stringify and parse now supports URL escaping for the "database" portion¶
A URL that includes URL-escaped characters in the database portion will now parse with conversion of those escaped characters:
>>> from sqlalchemy import make_url
>>> u = make_url("driver://user:pass@host/database%3Fname")
>>> u.database
'database?name'
Previously, such characters would not be unescaped:
>>> # pre-2.1 behavior
>>> from sqlalchemy import make_url
>>> u = make_url("driver://user:pass@host/database%3Fname")
>>> u.database
'database%3Fname'
This change also applies to the stringify side; most special characters in the database name will be URL escaped, omitting a few such as plus signs and slashes:
>>> from sqlalchemy import URL
>>> u = URL.create("driver", database="a?b=c")
>>> str(u)
'driver:///a%3Fb%3Dc'
Where the above URL correctly round-trips to itself:
>>> make_url(str(u))
driver:///a%3Fb%3Dc
>>> make_url(str(u)).database == u.database
True
Whereas previously, special characters applied programmatically would not be escaped in the result, leading to a URL that does not represent the original database portion. Below, b=c is part of the query string and not the database portion:
>>> from sqlalchemy import URL
>>> u = URL.create("driver", database="a?b=c")
>>> str(u)
'driver:///a?b=c'
Potential breaking change to odbc_connect= handling for mssql+pyodbc¶
Fixed an mssql+pyodbc issue where valid plus signs in an already-unquoted odbc_connect= (raw DBAPI) connection string were replaced with spaces. Previously, the pyodbc connector would always pass the odbc_connect value to unquote_plus(), even if it was not required. So, if the (unquoted) odbc_connect value contained PWD=pass+word, that would get changed to PWD=pass word, and the login would fail. One workaround was to quote just the plus sign (PWD=pass%2Bword), which would then get unquoted to PWD=pass+word.
Implementations using the above workaround with URL.create() to specify a plus sign in the PWD= argument of an odbc_connect string will have to remove the workaround and just pass the PWD= value as it would appear in a valid ODBC connection string (i.e., the same as would be required if using the connection string directly with pyodbc.connect()).
New Hybrid DML hook features¶
To complement the existing hybrid_property.update_expression() decorator, a new decorator hybrid_property.bulk_dml() is added, which works specifically with parameter dictionaries passed to Session.execute() when dealing with ORM-enabled insert() or update():
from typing import Any, MutableMapping
from dataclasses import dataclass


@dataclass
class Point:
    x: int
    y: int


class Location(Base):
    __tablename__ = "location"

    id: Mapped[int] = mapped_column(primary_key=True)
    x: Mapped[int]
    y: Mapped[int]

    @hybrid_property
    def coordinates(self) -> Point:
        return Point(self.x, self.y)

    @coordinates.inplace.bulk_dml
    @classmethod
    def _coordinates_bulk_dml(
        cls, mapping: MutableMapping[str, Any], value: Point
    ) -> None:
        mapping["x"] = value.x
        mapping["y"] = value.y
Additionally, a new helper from_dml_column() is added, which may be used with the hybrid_property.update_expression() hook to indicate re-use of a column expression from elsewhere in the UPDATE statement’s SET clause:
from typing import Any, List, Tuple

from sqlalchemy import from_dml_column


class Product(Base):
    __tablename__ = "product"

    id: Mapped[int] = mapped_column(primary_key=True)
    price: Mapped[float]
    tax_rate: Mapped[float]

    @hybrid_property
    def total_price(self) -> float:
        return self.price * (1 + self.tax_rate)

    @total_price.inplace.update_expression
    @classmethod
    def _total_price_update_expression(cls, value: Any) -> List[Tuple[Any, Any]]:
        return [(cls.price, value / (1 + from_dml_column(cls.tax_rate)))]
In the above example, if the tax_rate column is also indicated in the SET clause of the UPDATE, that expression will be used for the total_price expression rather than making use of the previous value of the tax_rate column:
>>> from sqlalchemy import update
>>> print(update(Product).values({Product.tax_rate: 0.08, Product.total_price: 125.00}))
UPDATE product SET tax_rate=:tax_rate, price=(:param_1 / (:tax_rate + :param_2))
When the target column is omitted, from_dml_column() falls back to using the original column expression:
>>> from sqlalchemy import update
>>> print(update(Product).values({Product.total_price: 125.00}))
UPDATE product SET price=(:param_1 / (tax_rate + :param_2))
Addition of BitString subclass for handling postgresql BIT columns¶
Values of BIT columns in the PostgreSQL dialect are returned as instances of a new str subclass, BitString. Previously, the value of BIT columns was driver dependent, with most drivers returning str instances except asyncpg, which used asyncpg.BitString.
With this change, for the psycopg, psycopg2, and pg8000 drivers, the new BitString type is mostly compatible with str, but adds methods for bit manipulation and supports bitwise operators.
As BitString is a string subclass, hashability as well as equality tests continue to work against plain strings. This also leaves ordering operators intact.
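A brief sketch of that compatibility, assuming BitString is importable from the sqlalchemy.dialects.postgresql namespace and accepts a bit string literal:

from sqlalchemy.dialects.postgresql import BitString

bits = BitString("1010")

# equality, hashing and ordering behave as they do for plain strings
assert bits == "1010"
assert bits in {"1010"}

# bitwise operators are also supported (element-wise AND is assumed here)
masked = bits & BitString("0110")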
For implementations using the asyncpg driver, the new type is incompatible with the existing asyncpg.BitString type.
Operator classes added to validate operator usage with datatypes¶
SQLAlchemy 2.1 introduces a new "operator classes" system that provides validation when SQL operators are used with specific datatypes. This feature helps catch usage of operators that are not appropriate for a given datatype during the initial construction of expression objects. A simple example is an integer or numeric column used with a "string match" operator. When an incompatible operation is used, a deprecation warning is emitted; in a future major release this will raise InvalidRequestError.
The initial motivation for this new system is to revise the use of the ColumnOperators.contains() method when used with JSON columns. The ColumnOperators.contains() method in the case of the JSON datatype makes use of the string-oriented version of the method, which assumes string data and uses LIKE to match substrings. This is not compatible with the same-named method defined by the PostgreSQL JSONB type, which uses PostgreSQL’s native JSONB containment operators. Because JSON data is normally stored as a plain string, ColumnOperators.contains() would "work", and even in trivial cases behave similarly to that of JSONB. However, since the two operations are not actually compatible at all, this misuse can easily lead to unexpected inconsistencies.
Code that uses ColumnOperators.contains() with JSON columns will now emit a deprecation warning:
from sqlalchemy import JSON, select, Column
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class Base(DeclarativeBase):
    pass


class MyTable(Base):
    __tablename__ = "my_table"

    id: Mapped[int] = mapped_column(primary_key=True)
    json_column: Mapped[dict] = mapped_column(JSON)


# This will now emit a deprecation warning
select(MyTable).filter(MyTable.json_column.contains("some_value"))
Above, using ColumnOperators.contains() with JSON columns is considered to be inappropriate, since ColumnOperators.contains() works as a simple string search without any awareness of JSON structuring. To explicitly indicate that the JSON data should be searched as a string using LIKE, the column should first be cast (using either cast() for a full CAST, or type_coerce() for a Python-side cast) to String:
from sqlalchemy import type_coerce, String

# Explicit string-based matching
select(MyTable).filter(type_coerce(MyTable.json_column, String).contains("some_value"))
This change forces code to distinguish between using string-based "contains" with a JSON column and using PostgreSQL’s JSONB containment operator with JSONB columns as separate, explicitly-stated operations.
The operator class system involves a mapping of SQLAlchemy operators listed out in sqlalchemy.sql.operators to operator class combinations that come from the OperatorClass enumeration, which are reconciled at expression construction time with datatypes using the TypeEngine.operator_classes attribute. A custom user-defined type may want to set this attribute to indicate the kinds of operators that make sense:
from sqlalchemy.types import UserDefinedType
from sqlalchemy.sql.sqltypes import OperatorClass


class ComplexNumber(UserDefinedType):
    operator_classes = OperatorClass.MATH
The above ComplexNumber datatype would then validate that operators used are included in the "math" operator class. User-defined types made with UserDefinedType are left open to accept all operators by default, whereas classes defined with TypeDecorator will make use of the operator classes declared by the "impl" type.
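Custom operators created with Operators.op() may also declare an operator class via the Operators.op.operator_class parameter referenced below; a sketch, where the custom "//" operator itself is illustrative:

from sqlalchemy import Integer, column
from sqlalchemy.sql.sqltypes import OperatorClass

# declare a custom operator as a "math" operation so that datatypes can
# validate its use at expression construction time
price = column("price", Integer)
expr = price.op("//", operator_class=OperatorClass.MATH)(10)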
See also
Operators.op.operator_class - define an operator class when creating custom operators
PostgreSQL¶
Changes to Named Type Handling in PostgreSQL¶
Named types such as ENUM, DOMAIN, and the dialect-agnostic Enum have undergone behavioral changes in SQLAlchemy 2.1 to better align with how a distinct type object that may be shared among tables works in practice.
Named Types are Now Associated with MetaData¶
Named types are now more strongly associated with the MetaData at the top of the table hierarchy and are de-associated from any particular Table they may be a part of. This better represents how PostgreSQL named types exist independently of any particular table, and how they may be used across many tables simultaneously.
Enum and DOMAIN now have their SchemaType.metadata attribute set as soon as they are associated with a table, and no longer refer to the Table or tables they are within (a table of course still refers to the named types that it uses).
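A sketch of the new association, where a single Enum is shared among two tables (imports from sqlalchemy are assumed):

metadata = MetaData()

status_enum = Enum("active", "inactive", name="status_enum")

t1 = Table("t1", metadata, Column("status", status_enum))
t2 = Table("t2", metadata, Column("status", status_enum))

# the type is associated with the MetaData, not with either table
assert status_enum.metadata is metadata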
Schema Inheritance from MetaData¶
Named types will now "inherit" the schema of the MetaData by default. For example, MetaData(schema="myschema") will cause all Enum and DOMAIN types to use the schema "myschema":
metadata = MetaData(schema="myschema")

table = Table(
    "mytable",
    metadata,
    Column("status", Enum("active", "inactive", name="status_enum")),
)

# The enum will be created as "myschema.status_enum"
To have named types use the schema name of an immediate Table that they are associated with, set the SchemaType.schema parameter of the type to that same schema name:
table = Table(
    "mytable",
    metadata,
    Column(
        "status", Enum("active", "inactive", name="status_enum", schema="tableschema")
    ),
    schema="tableschema",
)
The SchemaType.inherit_schema parameter remains available for this release but is deprecated for eventual removal, and will emit a deprecation warning when used.
Modified Create and Drop Behavior¶
The rules by which named types are created and dropped are also modified to flow more in terms of a MetaData, as listed below and sketched in the example that follows:
- MetaData.create_all() and Table.create() will create any named types needed
- Table.drop() will not drop any named types
- MetaData.drop_all() will drop named types after all tables are dropped
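In engine-level terms, the rules above play out as in the following sketch, assuming an engine plus the table and enum from the previous examples:

# creates the table along with any named types it needs
metadata.create_all(engine)

# drops only the table; the named type remains in the database
table.drop(engine)

# drops all remaining tables, then drops named types afterwards
metadata.drop_all(engine)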
Refined CheckFirst Behavior¶
There is also newly refined "checkfirst" behavior. A new enumeration CheckFirst is introduced which allows fine-grained control within MetaData.create_all(), MetaData.drop_all(), Table.create(), and Table.drop() as to which "check" queries are emitted, allowing tests for types, sequences, etc. to be included or not:
from sqlalchemy import CheckFirst

# Only check for table existence, skip type checks
metadata.create_all(engine, checkfirst=CheckFirst.TABLES)

# Check for both tables and types
metadata.create_all(engine, checkfirst=CheckFirst.TABLES | CheckFirst.TYPES)
inherit_schema is Deprecated¶
Because named types now inherit the schema of the MetaData automatically and remain agnostic of which Table objects refer to them, the Enum.inherit_schema parameter is deprecated. For 2.1 it still works the old way by associating the type with the parent Table; however, as this binds the type to a single Table even though the type can be used against any number of tables, it’s preferred to set Enum.schema directly when the schema used by the MetaData is not the one that’s wanted.