This question needs some setup.
I have some code that serializes C++ objects into JSON/YAML/BSON.
Code here, review here.
Simply put: you can serialize and de-serialize C++ objects to a std::iostream.
std::cout << ThorsAnvil::Serialize::JsonExporter(object); // prints JSON
std::cin >> ThorsAnvil::Serialize::JsonImporter(object); // reads JSON
In addition to that, I have an implementation of std::iostream that uses a UNIX (or Windows) socket as the underlying stream, so you can now send objects across a socket. Code here, with an early review here.
I came across MongoDB (at work). It has a distinctive wire API that is all about documents. And I can stream documents into a wire format (BSON) with my existing library. So let's see if we can make a simple C++ wrapper that makes using MongoDB trivial with C++.
So I am thinking the code should look like this:
class Address
{
std::string street;
std::string state;
int zip;
};
class Person
{
std::string name;
int age;
Address address;
};
struct AgeLessThan
{
ThorsAnvil::DB::Mongo::LE age;
};
struct AgeGreaterThan
{
ThorsAnvil::DB::Mongo::GT age;
};
struct FindByName
{
std::string name;
};
// For reference the LE class looks like this:
// A side effect of the Mongo library being document based
// is that every command becomes a structure that is then
// simply serialized onto the wire as BSON.
// struct LE
// {
// int $lte;
// };
int main()
{
std::vector<Person> people{{"Martin", 50, {"Seattle", "WA", 98121}} /* ..etc.. */};
using ThorsAnvil::DB::Mongo::DB;
using ThorsAnvil::DB::Mongo::Collection;
using ThorsAnvil::DB::Mongo::Query;
using ThorsAnvil::DB::Mongo::ReadConfigBuilder;
DB db(<host>, <port>, <username>, <password>, <databaseName>);
Collection collection(db, "People");
// Add "People" objects to the "collection"
collection.insert(people);
// Delete a "People" record
collection.del(std::tie(Query<AgeLessThan>{48}, Query<FindByName>{"Bob"}));
// Find people in the DB
collection.find<Person>(Query<AgeGreaterThan>{18}, ReadConfigBuilder{}.batchSize(5).build());
}
So the above is the goal (I have a working implementation). I am also interested in opinions on the interface.
Most of the functionality is specified in the Collection class.
You will see a lot of classes that are not included here (like HandShake, HandShakeReply, AuthInit, AuthCont, AuthReply, Inserter, Deleter, Finder). That is deliberate. These classes do not contain any code; they are simply objects that hold data, and they mirror the layout of the Mongo commands. The serialization library does all the work and automatically converts each C++ object into a BSON document on the stream that mirrors these types. There is nothing in the actual objects apart from the data being transferred.
Part 1:
All messages to Mongo are an Op_Msg. So we have a wrapper object that can send data to and receive data from Mongo. The class Op_MsgObj wraps a reference to a Section, which is the actual payload we want to send to Mongo (usually a Mongo command plus some data).
/*
* Send integer to/from stream in little endian form
* Handles 16/32/64 bit values.
*
* Used like:
*
* std::int32_t value = 5;
*
* // Write a little endian version of value to the file stream.
* file << make_LE(value);
* // Outputs 05 00 00 00
*
* // Read a little endian version of value from the file stream.
* file >> make_LE(value);
* // Input 05 00 00 00 will be read into value and be int{5}
*/
template<typename Type>
struct LittleEndian
{
using T = std::remove_cv_t<std::remove_reference_t<Type>>;
using UT = std::make_unsigned_t<T>;
using ST = std::make_signed_t<UT>;
T& value;
LittleEndian(Type&& value)
: value(value)
{}
friend std::ostream& operator<<(std::ostream& stream, LittleEndian<Type> const& data)
{
ST output = boost::endian::native_to_little(static_cast<ST>(static_cast<UT>(data.value)));
stream.write(reinterpret_cast<char const*>(&output), sizeof(output));
return stream;
};
friend std::istream& operator>>(std::istream& stream, LittleEndian<Type> const& data)
{
ST input;
stream.read(reinterpret_cast<char*>(&input), sizeof(input));
data.value = static_cast<T>(boost::endian::little_to_native(input));
return stream;
}
};
template<typename T>
LittleEndian<T> make_LE(T&& value) {return LittleEndian<T>(std::forward<T>(value));}
template<typename Section>
class Op_MsgObj
{
private:
static std::int32_t getNextMessageId()
{
static std::int32_t nextMessageId = 0;
return nextMessageId++;
}
Section& section;
protected:
mutable ThorsAnvil::Serialize::ParserInterface::ParserConfig config;
// Used for reading only
// Message Header
mutable std::int32_t messageLength;
mutable std::int32_t requestId;
mutable std::int32_t responseTo;
mutable std::int32_t opCode;
// Message Body
mutable OP_MsgFlag flags;
mutable char kind;
mutable std::int32_t checksum;
public:
Op_MsgObj(Section&& section)
: section(section)
{}
bool hasCheckSum() const {return false;}
std::int32_t getFlags() const {return 0;}
friend std::ostream& operator<<(std::ostream& stream, Op_MsgObj const& message)
{
std::int32_t messageLength = 0
// Size of Header
+ sizeof(std::int32_t) // MessageSize
+ sizeof(std::int32_t) // requestId
+ sizeof(std::int32_t) // responseTo
+ sizeof(std::int32_t) // OpCode
// Op_Msg Body
+ sizeof(std::int32_t) // Flags
// Op_Msg_Section
+ sizeof(char) // Kind Marker
+ ThorsAnvil::Serialize::bsonGetPrintSize(message.section)
// Op_Msg Body
+ (message.hasCheckSum() ? sizeof(std::int32_t) : 0);
stream // Message Header
<< make_LE(messageLength)
<< make_LE(getNextMessageId())
<< make_LE(0)
<< make_LE(OpCode{OpCode::OP_MSG})
// Message Body
<< make_LE(message.getFlags())
// Section Kind 0
<< static_cast<char>(0)
<< ThorsAnvil::Serialize::bsonExporter(message.section);
if (message.hasCheckSum())
{
std::int32_t checksum = 0;
stream << make_LE(checksum);
}
return stream << std::flush;
}
friend std::istream& operator>>(std::istream& stream, Op_MsgObj const& message)
{
stream // Message Header
>> make_LE(message.messageLength)
>> make_LE(message.requestId)
>> make_LE(message.responseTo)
>> make_LE(message.opCode)
// Message Body
>> make_LE(message.flags)
>> make_LE(message.kind)
>> ThorsAnvil::Serialize::bsonImporter(message.section, message.config);
if (message.flags & OP_MsgFlag::checksumPresent) {
stream >> make_LE(message.checksum);
}
return stream;
}
};
template<typename Section>
Op_MsgObj<Section> Op_Msg(Section&& section) {return Op_MsgObj<Section>(std::forward<Section>(section));}
So with this code we can send data to Mongo and receive the reply using:
mongoStream << Op_Msg(messageToSend);
mongoStream >> Op_Msg(replyToMessageObject);
Conveniently, for debugging we can also send the same message to std::cout or to a file, to make sure we are sending what we expect to send.
std::ofstream dump("Dump", std::ios::binary);
dump << Op_Msg(messageToSend);
Part 2:
The Connection.
This is a wrapper around the SocketStream. The main difference is that, once created, it performs a handshake with the Mongo server to authenticate the user. Apart from that it should act like a normal stream.
In Mongo.h
#ifndef THORSANVIL_DB_MONGO_MONGO_H
#define THORSANVIL_DB_MONGO_MONGO_H
#include "ThorsSocket/SocketStream.h"
class Connection
{
using SocketStream = ThorsAnvil::ThorsSocket::SocketStream<ThorsAnvil::ThorsSocket::SocketStreamBuffer>;
private:
SocketStream stream;
public:
Connection(std::string_view host, int port,
std::string_view username,
std::string_view password,
std::string_view database,
ThorsAnvil::DB::Access::Options const& options);
std::iostream& getStream() {return stream;}
};
In Mongo.cpp
Connection::Connection(
std::string_view host, int port,
std::string_view username,
std::string_view password,
std::string_view database,
ThorsAnvil::DB::Access::Options const& options)
: stream({host, port})
{
std::string userNameStr{std::begin(username), std::size(username)};
std::string passwordStr{std::begin(password), std::size(password)};
std::string databaseStr{std::begin(database), std::size(database)};
using std::string_literals::operator""s;
// Get AppName
auto findAppName = options.find("AppName");
std::string const& appName = findAppName == options.end() ? "ThorsAnvil::Mongo Lib v1.0" : findAppName->second;
// Get Compression
auto findCompression = options.find("compressors");
std::string const& compresType = findCompression == options.end() ? "" : findCompression->second;
// Send handshake
stream << Op_Msg(HandShake{userNameStr, databaseStr, appName, compresType});
HandShakeReply reply;
stream >> Op_Msg(reply);
if (reply.ok != 1)
{
ThorsLogAndThrowCritical("ThorsAnvil::DB::Mongo::MongoConnection",
"MongoConnection",
"Handshake Request Failed: ",
"Code: ", reply.codeName,
"Msg: ", reply.errmsg);
}
// Start Authorization
ThorsAnvil::Crypto::ScramClientSha256 client(userNameStr);
stream << Op_Msg(AuthInit{databaseStr, "SCRAM-SHA-256"s, client.getFirstMessage()});
AuthReply authInitReply;
stream >> Op_Msg(authInitReply);
if (authInitReply.ok != 1)
{
ThorsLogAndThrowCritical("ThorsAnvil::DB::Mongo::MongoConnection",
"MongoConnection",
"Handshake FirstMessage: ",
"Code: ", authInitReply.code,
"Name: ", authInitReply.codeName,
"Msg: ", authInitReply.errmsg);
}
stream << Op_Msg(AuthCont{authInitReply.conversationId, databaseStr, client.getProofMessage(passwordStr, authInitReply.payload.data)});
AuthReply authContReply;
stream >> Op_Msg(authContReply);
if (authContReply.ok != 1)
{
ThorsLogAndThrowCritical("ThorsAnvil::DB::Mongo::MongoConnection",
"MongoConnection",
"Handshake Proof: ",
"Code: ", authContReply.code,
"Name: ", authContReply.codeName,
"Msg: ", authContReply.errmsg);
}
// Send Auth Cont 2: Send the DB Info
stream << Op_Msg(AuthCont{authContReply.conversationId, databaseStr, ""s});
AuthReply authContReply2;
stream >> Op_Msg(authContReply2);
if (authContReply2.ok != 1)
{
ThorsLogAndThrowCritical("ThorsAnvil::DB::Mongo::MongoConnection",
"MongoConnection",
"Handshake DB Connect: ",
"Code: ", authContReply2.code,
"Name: ", authContReply2.codeName,
"Msg: ", authContReply2.errmsg);
}
if (!authContReply2.done)
{
ThorsLogAndThrowCritical("ThorsAnvil::DB::Mongo::MongoConnection",
"MongoConnection",
"Handshake DB Connect: ", "Expected handshake to be complete");
}
}
Part 3
The connection to the DB.
This represents a connection to a specific DB on the Mongo server. It has its own connection stream (a std::iostream that uses a TCP/IP socket). Nothing special here. On initial connection it does all the appropriate handshaking and authentication.
Note: It does not currently support compression. But I plan on adding that.
MongoDB.h
class DB
{
Connection connection;
std::string db;
public:
DB(std::string_view host, int port,
std::string_view username,
std::string_view password,
std::string_view database,
ThorsAnvil::DB::Access::Options const& options)
: connection(host, port, username, password, database, options)
, db(database)
{}
std::string const& getName() const {return db;}
std::iostream& getStream() {return connection.getStream();}
};
Part 4
The insert/del commands have a trivial interface. Unfortunately, the find command has a sea of optional parameters, so the interface for find() exploded a bit.
The WriteConfig and ReadConfig objects contain a set of potential parameters that can be sent to Mongo. If you don't explicitly set them in the config object, they are not sent to Mongo.
I could have put the "sort" and "projection" members into the ReadConfig object, which would have stopped the find() interface from exploding into so many overloads. Maybe that would have been better (not sure).
Note: insert() and del() {delete is a reserved word} both take callback functions to report the response to the actions. This is because, in the long term, I want all these actions to be able to happen in parallel: long-running commands to the DB could run on a separate thread, and the callback is then used to report to the main app that the operation has completed. I have not done that part yet (but my stream object supports that via co-routines, so it should be a simple addition).
class Collection
{
DB& db;
std::string collection;
WriteConfig writeConfig;
ReadConfig readConfig;
public:
Collection(DB& db, std::string const& collection, WriteConfig&& defaultWriteConfig = WriteConfig{}, ReadConfig&& defaultReadConfig = ReadConfig{})
: db(db)
, collection(collection)
, writeConfig(std::move(defaultWriteConfig))
, readConfig(std::move(defaultReadConfig))
{}
std::string const& getName() const {return collection;}
std::string const& getDBName() const {return db.getName();}
WriteConfig const& getWriteConfig() const {return writeConfig;}
ReadConfig const& getReadConfig() const {return readConfig;}
void setWriteConfig(WriteConfig&& c) {writeConfig = std::move(c);}
void setReadConfig(ReadConfig&& c) {readConfig = std::move(c);}
template<typename T>
void insert(T const& doc, std::function<void(WriteResponse const&)>&& action = [](WriteResponse const&){}) {insert(doc, getWriteConfig(), std::move(action));}
template<typename T>
void insert(T const& doc, WriteConfig const& config, std::function<void(WriteResponse const&)>&& action = [](WriteResponse const&){});
// Suggestions for T
// std::tuple<Doc1&, Doc2&>
// std::vector<Doc>
// std::array<Doc, N>
template<typename... Q>
void del(std::tuple<Query<Q>&...> const& doc, std::function<void(WriteResponse const&)>&& action = [](WriteResponse const&){}) {del(doc, getWriteConfig(), std::move(action));}
template<typename... Q>
void del(std::tuple<Query<Q>&...> const& doc, WriteConfig const& config, std::function<void(WriteResponse const&)>&& action = [](WriteResponse const&){});
template<typename T>
void find() {findAction<T, NoOp, NoOp, NoOp>(Query{NoOp{}}, NoOp{}, NoOp{}, getReadConfig());}
template<typename T>
void find(ReadConfig const& rConfig) {findAction<T, NoOp, NoOp, NoOp>(Query{NoOp{}}, NoOp{}, NoOp{}, rConfig);}
template<typename T, typename Q>
void find(Query<Q> const& filter) {findAction<T, Q, NoOp, NoOp>(filter, NoOp{}, NoOp{}, getReadConfig());}
template<typename T, typename Q>
void find(Query<Q> const& filter, ReadConfig const& rConfig) {findAction<T, Q, NoOp, NoOp>(filter, NoOp{}, NoOp{}, rConfig);}
template<typename T, typename Q, typename Proj>
void find(Query<Q> const& filter, Proj const& projection) {findAction<T, Q, Proj, NoOp>(filter, projection, NoOp{}, getReadConfig());}
template<typename T, typename Q, typename Proj>
void find(Query<Q> const& filter, Proj const& projection, ReadConfig const& rConfig){findAction<T, Q, Proj, NoOp>(filter, projection, NoOp{}, rConfig);}
template<typename T, typename Q, typename Sort>
void findSort(Query<Q> const& filter, Sort const& sort) {findAction<T, Q, NoOp, Sort>(filter, NoOp{}, sort, getReadConfig());}
template<typename T, typename Q, typename Sort>
void findSort(Query<Q> const& filter, Sort const& sort, ReadConfig const& rConfig) {findAction<T, Q, NoOp, Sort>(filter, NoOp{}, sort, rConfig);}
template<typename T, typename Q, typename Proj, typename Sort>
void findSort(Query<Q> const& filter, Proj const& projection, Sort const& sort) {findAction<T, Q, Proj, Sort>(filter, projection, sort, getReadConfig());}
template<typename T, typename Q, typename Proj, typename Sort>
void findSort(Query<Q> const& filter, Proj const& projection, Sort const& sort, ReadConfig const& rConfig)
{findAction<T, Q, Proj, Sort>(filter, projection, sort, rConfig);}
private:
template<typename T, typename Q, typename Proj, typename Sort>
void findAction(Query<Q> const& filter, Proj const& projection, Sort const& sort, ReadConfig const& readConfig);
};
But the implementation seems relatively trivial:
Part 4a: Insert
template<typename T>
void Collection::insert(T const& doc, WriteConfig const& config, std::function<void(WriteResponse const&)>&& action)
{
db.getStream() << Op_Msg(Inserter<T>{*this, doc, config});
WriteResponse response;
db.getStream() >> Op_Msg(response);
action(response);
}
Part 4b: Delete
template<typename... Q>
void Collection::del(std::tuple<Query<Q>&...> const& doc, WriteConfig const& config, std::function<void(WriteResponse const&)>&& action)
{
db.getStream() << Op_Msg(Deleter<std::tuple<Query<Q> const&...>>{*this, doc, config});
WriteResponse response;
db.getStream() >> Op_Msg(response);
action(response);
}
Part 4c: Find
template<typename T, typename Q, typename Proj, typename Sort>
void Collection::findAction(Query<Q> const& filter, Proj const& projection, Sort const& sort, ReadConfig const& config)
{
std::vector<T> result;
db.getStream() << Op_Msg(Finder<Q, Proj, Sort>{*this, filter, projection, sort, config});
FindResponse<T> response{result};
db.getStream() >> Op_MsgDebug(response);
std::int64_t cursor = response.cursor.id;
while (cursor != 0)
{
db.getStream() << Op_Msg(GetMore{*this, response, config});
GetMoreResponse<T> nextResponse{result};
db.getStream() >> Op_Msg(nextResponse);
cursor = nextResponse.cursor.id;
}
//action(result);
}
Part 5: Config objects
These objects contain a set of optional parameters that can be sent to Mongo. Each config object holds every value plus a member "filter".
The "filter" member is used by the serialization library to determine whether a field should be placed on the output stream, so marking an entry false means that field is not serialized.
I have an idea for using std::optional
in the serialization library but have not implemented that yet.
The Config and ConfigBuilder pair is something I stole from a Java pattern. It's a bit of an experiment so that I can construct an immutable Config object without having to have a billion different constructors.
Usage:
Config f = ConfigBuilder{}.option1(1).option2(2).option3(68).build();
Part 5a: Write Config
class WriteConfigBuilder;
class WriteConfig
{
bool ordered = true; // Optional Def: Stop on first failure
std::int32_t maxTimeMS = 0; // Optional Def: No timeout
WriteConcerns writeConcern; // Optional Note: Don't set in transactions
bool bypassDocumentValidation = false;// Optional Def: Validation done
std::string comment; // Optional comment add to logs
Filter filter;
public:
friend class WriteConfigBuilder;
WriteConfig()
{
filter["ordered"] = false;
filter["maxTimeMS"] = false;
filter["writeConcern"] = false;
filter["bypassDocumentValidation"] = false;
filter["collation"] = false;
filter["comment"] = false;
}
bool const& getOrdered() const {return ordered;}
std::int32_t const& getMaxTimeMS() const {return maxTimeMS;}
WriteConcerns const& getWriteConcern() const {return writeConcern;}
bool const& getBypassDocumentValidation() const {return bypassDocumentValidation;}
std::string const& getComment() const {return comment;}
Filter const& getFilter() const {return filter;}
};
class WriteConfigBuilder
{
WriteConfig result;
public:
WriteConfigBuilder()
{}
WriteConfigBuilder(WriteConfig&& config)
: result(std::move(config))
{}
WriteConfig build() {return result;}
WriteConfigBuilder& ordered(bool v) {result.ordered = v; result.filter["ordered"] = true; return *this;}
WriteConfigBuilder& maxTimeMS(std::int32_t v) {result.maxTimeMS = v; result.filter["maxTimeMS"] = true; return *this;}
WriteConfigBuilder& writeConcern(WriteConcerns const& v) {result.writeConcern = v; result.filter["writeConcern"] = true; return *this;}
WriteConfigBuilder& bypassDocumentValidation(bool v) {result.bypassDocumentValidation = v; result.filter["bypassDocumentValidation"] = true; return *this;}
WriteConfigBuilder& comment(std::string v) {result.comment = v; result.filter["comment"] = true; return *this;}
};
Part 5b: Read Config
class ReadConfigBuilder;
class ReadConfig
{
std::string hint;
std::int32_t skip;
std::int32_t limit;
std::int32_t batchSize;
bool singleBatch;
std::string comment;
std::int32_t maxTimeMS;
bool returnKey;
bool showRecordId;
bool tailable;
bool oplogReplay;
bool noCursorTimeout;
bool awaitData;
bool allowPartialResults;
bool allowDiskUse;
Filter filter;
public:
friend class ReadConfigBuilder;
ReadConfig()
{
filter["hint"] = false;
filter["skip"] = false;
filter["limit"] = false;
filter["batchSize"] = false;
filter["singleBatch"] = false;
filter["comment"] = false;
filter["maxTimeMS"] = false;
filter["readConcern"] = false;
filter["max"] = false;
filter["min"] = false;
filter["returnKey"] = false;
filter["showRecordId"] = false;
filter["tailable"] = false;
filter["oplogReplay"] = false;
filter["noCursorTimeout"] = false;
filter["awaitData"] = false;
filter["allowPartialResults"] = false;
filter["collation"] = false;
filter["allowDiskUse"] = false;
filter["let"] = false;
}
std::string const& getHint() const{return hint;}
std::int32_t const& getSkip() const{return skip;}
std::int32_t const& getLimit() const{return limit;}
std::int32_t const& getBatchSize() const{return batchSize;}
bool const& getSingleBatch() const{return singleBatch;}
std::string const& getComment() const{return comment;}
std::int32_t const& getMaxTimeMS() const{return maxTimeMS;}
bool const& getReturnKey() const{return returnKey;}
bool const& getShowRecordId() const{return showRecordId;}
bool const& getTailable() const{return tailable;}
bool const& getOplogReplay() const{return oplogReplay;}
bool const& getNoCursorTimeout() const{return noCursorTimeout;}
bool const& getAwaitData() const{return awaitData;}
bool const& getAllowPartialResults()const{return allowPartialResults;}
bool const& getAllowDiskUse() const{return allowDiskUse;}
Filter const& getFilter() const {return filter;}
};
class ReadConfigBuilder
{
ReadConfig result;
public:
ReadConfigBuilder()
{}
ReadConfigBuilder(ReadConfig&& config)
: result(std::move(config))
{}
ReadConfig build() {return result;}
ReadConfigBuilder& skip(std::int32_t v) {result.skip = v; result.filter["skip"] = true; return *this;}
ReadConfigBuilder& limit(std::int32_t v) {result.limit = v; result.filter["limit"] = true; return *this;}
ReadConfigBuilder& batchSize(std::int32_t v) {result.batchSize = v; result.filter["batchSize"] = true; return *this;}
ReadConfigBuilder& singleBatch(bool v) {result.singleBatch = v; result.filter["singleBatch"] = true; return *this;}
ReadConfigBuilder& comment(std::string v) {result.comment = v; result.filter["comment"] = true; return *this;}
ReadConfigBuilder& maxTimeMS(std::int32_t v) {result.maxTimeMS = v; result.filter["maxTimeMS"] = true; return *this;}
ReadConfigBuilder& returnKey(bool v) {result.returnKey = v; result.filter["returnKey"] = true; return *this;}
ReadConfigBuilder& showRecordId(bool v) {result.showRecordId = v; result.filter["showRecordId"] = true; return *this;}
ReadConfigBuilder& tailable(bool v) {result.tailable = v; result.filter["tailable"] = true; return *this;}
ReadConfigBuilder& oplogReplay(bool v) {result.oplogReplay = v; result.filter["oplogReplay"] = true; return *this;}
ReadConfigBuilder& noCursorTimeout(bool v) {result.noCursorTimeout = v; result.filter["noCursorTimeout"] = true; return *this;}
ReadConfigBuilder& awaitData(bool v) {result.awaitData = v; result.filter["awaitData"] = true; return *this;}
ReadConfigBuilder& allowPartialResults(bool v) {result.allowPartialResults = v;result.filter["allowPartialResults"] = true;return *this;}
ReadConfigBuilder& allowDiskUse(bool v) {result.allowDiskUse = v; result.filter["allowDiskUse"] = true; return *this;}
};
1 Answer
Remove support for big-endian systems
Unless you want your code to run natively on an IBM z/Architecture mainframe, there are basically no computers used anymore that run (let alone support) big-endian mode. I would recommend that you remove your code that supports big-endian machines, as it will not be used, thus rarely tested, and will only be a potential source of bugs and inefficiency.
Note that you are already assuming that bytes are 8 bits and that your code runs on machines that support 32-bit integers.
Don't make members mutable if not necessary
I don't see why the protected member variables of Op_MsgObj need to be mutable. You would only use that for some very specific use cases, like needing a mutex to guard access, or caching the results of expensive calculations.
In your case, remove mutable, then remove const from the operator>>() overload defined in Op_MsgObj.
class Connection can be replaced by a function
The class Connection just constructs a stream. That's it. It could be replaced by a single function:
SocketStream connectToDB(std::string_view host, int port,
std::string_view username,
std::string_view password,
std::string_view database,
ThorsAnvil::DB::Access::Options const& options) {
SocketStream stream({host, port});
...
return stream;
}
Another option would be to make Connection actually be a stream itself, perhaps by publicly inheriting from SocketStream. Or just fold the code into class DB.
Weird use of callback functions
Collection::insert() and Collection::del() both take an action parameter, which is a callback function that gets called with the response object. I don't know why it is designed like this. I would either just return the response from those functions, so the caller can do whatever it wants with it without having to provide a callback function, or remove the parameter altogether if the caller is never going to use it.
There is a default callback declared for each overload, and one of the overloads calls the other, moving the action along with it; wouldn't that create an infinite loop? It looks dodgy in any case. Why not just write:
template<typename T>
WriteResponse insert(T const& doc) {
    return insert(doc, getWriteConfig());
}
template<typename T>
WriteResponse insert(T const& doc, WriteConfig const& config) {
    db.getStream() << Op_Msg(Inserter<T>{*this, doc, config});
    WriteResponse response;
    db.getStream() >> Op_Msg(response);
    return response;
}
About the builder pattern
Using a builder interface is nice, but you are implementing it incorrectly. The goal of the builder is to construct an object with all the right parameters in one go. That constructor can then verify the combination of all those parameters is correct, and there will never be an object in a half-configured, potentially invalid state. So either fix this:
class WriteConfig {
bool ordered; // default values not needed in this case
std::int32_t maxTimeMS;
...
public:
// no friend declaration needed
WriteConfig(bool ordered, std::int32_t maxTimeMS, ...)
: ordered(ordered)
, maxTimeMS(maxTimeMS)
, ...
{}
...
};
class WriteConfigBuilder {
bool m_ordered = true;
std::int32_t m_maxTimeMS = 0;
...
public:
// rule of zero
WriteConfig build() const {
return WriteConfig(m_ordered, m_maxTimeMS, ...);
}
    WriteConfigBuilder& ordered(bool v) { m_ordered = v; return *this; }
    WriteConfigBuilder& maxTimeMS(std::int32_t v) { m_maxTimeMS = v; return *this; }
...
};
Or if you don't need that atomic construction, just move the setters from the builder directly into the type of object you want to create:
class WriteConfig {
bool m_ordered = true;
std::int32_t m_maxTimeMS = 0;
...
public:
// rule of zero
    WriteConfig& ordered(bool v) { m_ordered = v; return *this; }
    WriteConfig& maxTimeMS(std::int32_t v) { m_maxTimeMS = v; return *this; }
...
};
In the latter case, it even simplifies things a bit, in your example main()
for example you'd then write:
collection.find<Person>(Query<AgeGreaterThan>{18}, ReadConfig{}.batchSize(5));
Lack of error checking?
I am seeing very little error checking in your code. The only hope is that all the stream operators called in your program throw exceptions on errors. The streams from the standard library do not by default, however; you would have to opt in with exceptions().
Comments:
- It's really irritating that there is no standard way (just using the standard library) to detect endianness at compile time. I would like to do what you suggest and basically put a static_assert that generates an error on big-endian systems. (Loki Astari, Jul 1, 2024)
- Found it: #include <bit>, then static_assert(std::endian::little == std::endian::native); (Loki Astari, Jul 1, 2024)
- github.com/Loki-Astari/ThorsMongo/issues/1, github.com/Loki-Astari/ThorsMongo/issues/2, github.com/Loki-Astari/ThorsMongo/issues/3, github.com/Loki-Astari/ThorsMongo/issues/4, github.com/Loki-Astari/ThorsMongo/issues/5 (Loki Astari, Jul 8, 2024)
1\$\begingroup\$ github.com/Loki-Astari/ThorsMongo/issues/1 github.com/Loki-Astari/ThorsMongo/issues/2 github.com/Loki-Astari/ThorsMongo/issues/3 github.com/Loki-Astari/ThorsMongo/issues/4 github.com/Loki-Astari/ThorsMongo/issues/5 \$\endgroup\$Loki Astari– Loki Astari2024年07月08日 17:49:53 +00:00Commented Jul 8, 2024 at 17:49