I have a C# application that needs to connect to either an Oracle database or a SQLite database. The databases can be considered "identical" - same schema - but users have the ability to "load" their own SQLite file for faster local access. I'm trying to design a data access layer for this need.
My problem is that the abstract `DbCommand` and `DbParameter` classes don't have a very useful interface. For example, `DbCommand.Parameters.Add(object)` is not well typed (and has unclear semantics in general). I am aware that `DbCommand.CreateDbParameter()` exists and creates a `DbParameter` that can be populated with data. In contrast, `SQLiteCommand.Parameters.Add` is overloaded with several parameters that specify what the parameter name and value are.
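For concreteness, the provider-agnostic route the base classes do offer looks roughly like this (a sketch; the parameter name and helper are illustrative, not a framework API):

```csharp
using System.Data;
using System.Data.Common;

// The pattern the abstract classes support: CreateParameter() returns a
// provider-specific DbParameter, which is then populated property by
// property. Verbose, but it compiles against DbCommand alone and works
// for both OracleCommand and SQLiteCommand.
static void AddNameFilter(DbCommand command, string name)
{
    DbParameter p = command.CreateParameter();
    p.ParameterName = "@name";   // caveat: Oracle expects ':name' in the SQL text
    p.DbType = DbType.String;
    p.Value = name;
    command.Parameters.Add(p);
}

// Compare the SQLite-specific shorthand, which is lost at the DbCommand level:
// sqliteCommand.Parameters.AddWithValue("@name", name);
```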
I see two unsatisfactory options -
Create an interface and make two classes which implement it. There is potential for some shared data - perhaps query strings, but even then differing parameter syntax may disallow this. This duplicates code but best utilizes the database provider APIs.
Create a single class that relies on the generic implementation. Have the constructor/methods accept a `DbConnection`, and execute the same code on different connections. It's unclear whether this is even possible due to the syntax issues from point 1. If possible, this would create a single, flexible class, which may be clunky internally.
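If the single-class route is pursued, the missing typed `Add` can be restored once with a small extension method (a sketch; `AddParameter` is a made-up name, not part of ADO.NET):

```csharp
using System;
using System.Data;
using System.Data.Common;

// Hypothetical helper: restores a well typed Add(name, type, value) shape
// on top of the abstract base classes, so one DAL class can run the same
// code against any DbConnection (Oracle, SQLite, ...).
public static class DbCommandExtensions
{
    public static DbParameter AddParameter(
        this DbCommand command, string name, DbType type, object? value)
    {
        DbParameter p = command.CreateParameter();
        p.ParameterName = name;
        p.DbType = type;
        p.Value = value ?? DBNull.Value;
        command.Parameters.Add(p);
        return p;
    }
}

// Usage (the parameter-prefix syntax in the SQL text may still differ per provider):
// using DbCommand cmd = connection.CreateCommand();
// cmd.CommandText = "SELECT name FROM users WHERE id = @id";
// cmd.AddParameter("@id", DbType.Int32, 42);
```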
Are there any "native" C# ways to handle this case gracefully? I am surprised that the abstract base `DbXxx` classes are so useless. The ADO.NET docs are housed under a .NET Framework category, which seems to imply it's legacy technology. I wouldn't expect a language as popular as C# to lack a well typed way to add parameters to queries; it's not a vendor-specific feature. Go's built-in `database/sql` package, for example, handles all these cases.
I do not want to use Entity Framework - this application replaces legacy software which has some custom queries that are sloppy to translate into LINQ/EF.
1 Answer
Well, we don't know much about your application, its size, or its requirements regarding database access. But the standard approach I would recommend here is:
1. Start creating a DBMS-independent interface for your DAL which suits your needs, specifically when used from your legacy application. The crucial part here is to avoid dynamic SQL as parameters in this interface, because SQL dialects are most often too DBMS-specific to allow the usage of SQL as an abstraction layer.
2. Make two different implementations of that interface, one for each DBMS. Here, things like custom queries may need to be implemented twice, each with an implementation tailored to the specific DBMS.
3. Whenever you notice your two different implementations share some similar logic, refactor it into a common place, like a common base class placed between your interface and the implementation classes.
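The steps above can be sketched like this (all names are illustrative, and the SQLite implementation assumes the System.Data.SQLite provider):

```csharp
using System.Data.Common;
using System.Data.SQLite;

// Step 1: a DBMS-independent interface - no SQL leaks through it.
public interface IUserRepository
{
    User? FindById(int id);
}

// Step 3: shared logic lives in a base class between the interface
// and the provider-specific implementations.
public abstract class UserRepositoryBase : IUserRepository
{
    public User? FindById(int id)
    {
        using DbConnection conn = CreateConnection();
        conn.Open();
        using DbCommand cmd = conn.CreateCommand();
        cmd.CommandText = FindByIdSql;          // dialect-specific text from the subclass
        DbParameter p = cmd.CreateParameter();
        p.ParameterName = IdParameterName;      // prefix syntax differs per provider
        p.Value = id;
        cmd.Parameters.Add(p);
        using DbDataReader reader = cmd.ExecuteReader();
        return reader.Read()
            ? new User(reader.GetInt32(0), reader.GetString(1))
            : null;
    }

    protected abstract DbConnection CreateConnection();
    protected abstract string FindByIdSql { get; }
    protected abstract string IdParameterName { get; }
}

// Step 2: one thin implementation per DBMS.
public sealed class SqliteUserRepository : UserRepositoryBase
{
    private readonly string _path;
    public SqliteUserRepository(string path) => _path = path;

    protected override DbConnection CreateConnection()
        => new SQLiteConnection($"Data Source={_path}");
    protected override string FindByIdSql
        => "SELECT id, name FROM users WHERE id = @id";
    protected override string IdParameterName => "@id";
}

// An OracleUserRepository would override the same three members with an
// OracleConnection, ':id' parameter syntax, and any Oracle-specific SQL.

public record User(int Id, string Name);
```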
Of course, when you are able to choose an implementation technology for step 2 which allows different DBMSs to be accessed by exactly the same code, most of the code will end up in just one class in step 3. You got some suggestions in the comments, like using Dapper or EF without LINQ. But this is not necessarily an all-or-nothing decision; you may be able to generalize some parts and let others exist as specific implementations.
This is also a question of size: for a "small" data access layer, staying with a simple technology like ADO.NET and living with some duplicate query logic may be acceptable. For a larger DAL, the expected costs of introducing another abstraction layer like some ORM or micro ORM may be far below the costs of maintaining hundreds of duplicate queries.
TLDR: instead of trying to make the decision up-front, go a route which allows you to make it incrementally, as you go.