
Say I have the following header

#ifndef WINDOW_HPP
#define WINDOW_HPP
// includes...
namespace window
{
    struct Window
    {
        GLFWwindow *handle = nullptr;
    };
    struct WindowCreateInfo
    {
    };
    std::unique_ptr<Window> windowCreate(const WindowCreateInfo &info);
    void windowDestroy(Window *window);
    bool windowShouldClose(const Window *window);
} // namespace window
#endif

that is implemented like so

#include "window.hpp"
namespace window
{
    std::unique_ptr<Window> windowCreate(const WindowCreateInfo &info)
    {
        std::cout << "Creating GLFW window...\n";
        if (!glfwInit())
        {
            CHECK_GLFW_ERROR();
            throw std::runtime_error("glfwInit() failed.");
        }
        std::cout << "Successfully initialized GLFW\n";
        glfwWindowHint(GLFW_RESIZABLE, GLFW_FALSE);
    }
    void windowDestroy(Window *window)
    {
        glfwDestroyWindow(window->handle);
    }
    bool windowShouldClose(const Window *window)
    {
        // should I check for nullptr?
        // if (!window) throw std::runtime_error("...");
        const bool res = glfwWindowShouldClose(window->handle);
#ifdef DEBUG
        CHECK_GLFW_ERROR();
#endif
        return res;
    }
} // namespace window

My specific dilemma is in functions like windowDestroy and windowShouldClose. It's possible for a user to pass a nullptr, which would lead to a crash when the pointer is dereferenced.

Here's the context:

  • I am the sole user of this API within my application.

  • I have full control over the codebase and I'm confident that I will never intentionally pass a nullptr to these functions.

Even with this control, I'm trying to establish a robust design principle. I see two main approaches:

  1. The Defensive Approach: Explicitly check for nullptr and throw an exception.

void windowDestroy(Window *window)
{
    if (!window) {
        throw std::invalid_argument("windowDestroy: 'window' cannot be null.");
    }
    glfwDestroyWindow(window->handle);
}

  2. The Minimalist/Performance Approach: Don't check, and let the dereference crash. The logic is that a crash from dereferencing nullptr is immediate and obvious, while adding checks adds visual clutter for a case that should never happen.

My reasoning so far:

  • The defensive approach feels "safer" and provides a clearer, more professional error message.

  • The minimalist approach feels "leaner." Since a dereference will crash anyway, the explicit check seems redundant for an internal, controlled API. It's essentially enforcing a precondition that I, as the user, have already agreed to follow. In the case that the precondition is not satisfied, well too bad... you crash!

My question is: In the context of a private, internal API where the developer is also the user, which approach is generally considered more reasonable and idiomatic?

Is there a strong consensus, or does it ultimately boil down to a trade-off between explicit safety checks and minimalism? Are there other factors I should consider, like long-term maintenance if the project grows or the API is exposed to other developers in the future?

asked Oct 16 at 13:10

3 Answers


Even with the "nullptr check", your code is not safe, because nothing prevents use of the Window *window after windowDestroy has been called. The style of code you show here is typical for C (where you don't have much choice), but not for C++.

The rule of thumb is: wrap unsafe code with safe access. You already use std::unique_ptr<Window> (even though windowCreate doesn't return for some reason) so why don't you do:

void windowDestroy(std::unique_ptr<Window> window)

Note that I pass it by value, meaning it is moved so the compiler won't allow you to use it after calling windowDestroy.


This of course doesn't solve the "nullptr check" problem. But if you look at it closely, it begs the question: why don't I use classes and methods instead of free functions? And this will help deal with both problems at the same time.

First of all: you have to check whether the pointer is valid or not somewhere. Typical C libraries (like glfw) will return null to indicate an error. This should not be ignored.

So the real question isn't "should I do the nullptr check?" but rather "where should I do the nullptr check?". You are using C++, so what I suggest is: change struct Window to class Window (so that fields are private by default), add a static method, something like createFromGlfw, and put the null check there. You could do that in the constructor, but then you would have to throw an exception. With a static method you can either throw an exception or return something like C++23's std::expected<T,E> to indicate an error. This way of thinking is typical of Rust (which does not have constructors), so I might be biased.

Then you can be sure that an instance of Window encapsulates a valid pointer (because that is the only way to create it). As long as you keep that invariant, you don't have to do nullptr checks anywhere else inside Window.

Just remember to delete the copy constructor and copy assignment operator. Or actually just store std::unique_ptr<GLFWwindow, Deleter> (with a custom Deleter) inside Window. Ahh, C++ and its many, many ways to do the same thing.


To combine both the first and second problem, you would turn windowDestroy function into the destructor of class Window. And then windowShouldClose becomes a class method, that doesn't take any parameters:

 bool windowShouldClose()
 {
     // No null check needed here:
     // the class invariant guarantees the handle is always valid.
     const bool res = glfwWindowShouldClose(this->handle);
#ifdef DEBUG
     CHECK_GLFW_ERROR();
#endif
     return res;
 }

With this encapsulation, if something goes wrong you can be sure that the only place to search for a window-related bug is the (hopefully tiny) Window class, not the hundreds of thousands of lines of code that could potentially (mis)use it.


I have full control over the codebase and I'm confident that I will never intentionally pass a nullptr to these functions.

I'm sorry, but that's naive. Or maybe it depends on what you mean by "intentionally". But remember: no one makes mistakes intentionally, right? Yet the number of bugs in the world is overwhelming, even in the most important projects like Linux or OpenSSL, which are developed by really skilled devs.

You cannot trust yourself more than you can trust the others. Do you think you will remember in 3 months how exactly this API is supposed to be used? The bigger the project is, the harder it is to remember all the nuanced relationships between pieces of code. And more importantly: why would you even want to remember such things, considering that you can properly express that in the language you are using?

The Minimalist/Performance Approach: Don't check, and let the dereference crash. The logic is that a crash from dereferencing nullptr is immediate and obvious

Is it though? There's a huge difference between throwing an exception and letting dereference crash. First of all, dereferencing nullptr is UB. Meaning it doesn't even have to crash, it can continue to work, but incorrectly. Secondly, even if it crashes, you will have a nasty traceback, hard to track, debug and fix. This might be simple in your case, but imagine when this source code becomes millions of lines long.

while adding checks adds visual clutter for a case that should never happen.

Visual clutter should never be an argument against doing something. I could understand if you said "well, runtime null check takes time" (even though in the given context it is a bad argument as well), but not "visual clutter". Never sacrifice safety because "it looks better that way".

Note that I do think readability is important. Very important, even. But it cannot be more important than the correctness of the code.

The defensive approach feels "safer" and provides a clearer, more professional error message.

This ^^^. Forget about "professional" (whatever that means). It is "safer" and "clearer" that matters.

It's essentially enforcing a precondition that I, as the user, have already agreed to follow.

Ahh, if we could only trust ourselves. Then there would be no bugs, and actually no language would be necessary, maybe except C (as a portable assembly).

In the case that the precondition is not satisfied, well too bad... you crash!

So you don't care about the code crashing? You should, you are the one writing it. :) And let me repeat: it is a lot easier to debug and fix an exception than a crash.


So the final big point of all of this is: use the tools that a given language/framework provides. These are here to help you, not to make your life harder.

answered Oct 23 at 9:18
  • Is dereferencing a null pointer potentially unsafe? In Java an exception would be thrown, which is not entirely unlike checking for a null value and throwing one yourself. If a null pointer is a rare enough occurrence, it is not uncommon to let it "fail fast" in Java, if anything with a giant try-catch umbrella further up the call tree to catch any potential issue, which would also include a null pointer exception. Commented Oct 23 at 13:49
  • @Neil Java is a safe language. In the sense that the behaviour of everything is well defined, including null pointer dereference. But that's not the case with C++. In C++ the concrete behaviour is not defined. First of all it won't throw an exception. Most of the time it will simply crash the entire process with segfault. But that is not guaranteed to happen. In fact, depending on certain circumstances it won't happen. What may or may not happen is not well defined in C++. Anything can happen, including dereferencing succeeding and reading/writing data. And that's why it is dangerous. Commented Oct 23 at 18:31
  • Btw, this means that Java's "pointer dereference operation" isn't just "read/write from the address". The JVM probably does the null pointer check for you under the hood and throws an exception. It could potentially be implemented differently (signals?), but regardless it will involve some hidden work. Unlike Java, C++ tries to avoid any hidden costs. That's one of the reasons why C++ beats Java hard in terms of raw performance. But then safety becomes the price. You have to be very careful in everything you do, often even the compiler won't help you (e.g. when you design your code as seen here). Commented Oct 23 at 18:49
  • @freakish Indeed, every JVM that I've looked into does use signals to handle null pointer checks. That's one of the reasons why it's somewhere between "really difficult and very fragile" and "impossible, it's never going to work" to try using a custom SIGSEGV handler when running a JVM in the same process. Yeah, I've tried. :-) Commented Oct 23 at 19:42

Background: I am guided in design decisions by DbC (Design by Contract).

The OP's two-line implementation is nice and almost enough:

 void windowDestroy(Window *window)
 {
     glfwDestroyWindow(window->handle);
 }

I substantially agree with your reasoning. And yes, we wish to minimize visual clutter within the source code.

I'm confident that I will never intentionally pass a nullptr

That is the point where we diverge a little. Often I am smart and have confidence in what I write. But not always. I can always use a little help. Such as peer review from colleagues during the Pull Request process. Or from linters and compiler warnings. My concern is that I or my colleagues might forget, or some software layer might be inserted which does not know all the assumptions. So if compiler flags or automated tooling can help me write in a more conservative, more "obviously correct" style, then I will spend a little effort to access that kind of help. I take care to ensure each git commit will "lint clean".

enforcing a precondition that I ... have already agreed to follow

Yes, that is good as far as it goes. But alas the source code doesn't quite reflect that, as the signature admits of nullptr. You know the details, but you didn't write them all down. Here is the three-line implementation that I would typically write:

 void windowDestroy(Window *window)
 {
     assert(window != nullptr);
     glfwDestroyWindow(window->handle);
 }

Now the first two lines capture both the type constraints and the precondition. We have made explicit the assumptions you had in your head, in a way that is visible to static analysis tools. And we've done it with a minimum of ceremony.

Usually I would add that line in response to some warning which "breaks the build", and I'm essentially trying to make the linter happy. Going from a line number in a stack trace to the source code is pretty self explanatory for a "cannot happen" situation. We don't need a custom message here.

If I have seen that callers sometimes mess up the contract, then I might throw a std::invalid_argument to add a bit of debug info up front. If, e.g., a parameter mass must be positive, I might include the received value as part of the diagnostic, to save a maintenance engineer the trouble of reproducing the error.


Here is a "defensive" approach that I would not put in a commit:

 void windowDestroy(Window *window)
 {
     if (window) {
         glfwDestroyWindow(window->handle);
     }
 }

Does it pass muster with static analysis? Sure.

But the trouble is that it suggests "nullptr is OK"; in contrast an assert says a given situation "cannot happen".

answered Oct 16 at 14:53
  • assert(window != nullptr);? So crashing the entire application is always the answer? Without context for the code's use, it's flat-out wrong to assume crashing the entire process is acceptable. Never mind that if that code is ever compiled with NDEBUG defined, the check disappears. Commented Oct 23 at 19:37
  • You didn't grasp the meaning of "cannot happen". By hypothesis, all the caller sites are in the same codebase where we defined the library routine, and are available to static analysis. An if (window) ... lets a buggy call site off the hook, gives it a pass, doesn't break the build. OTOH assert tells static analysis loud and clear that a buggy caller which might pass in nullptr should definitely break the build, so we don't ship clearly broken code. I want no ambiguity about whether caller might be good or might be bad. If it could pass a nullptr, then we die in CI/CD, not in production. Commented Oct 23 at 22:00
  • LOL. You've never seen a bug from something that "cannot happen", I guess. I don't care what any API guarantees seem to say, if you're getting a pointer value from an uncontrolled caller there's no such thing as "cannot happen". And what's restricted today to a limited number of controlled callers that "ensure" things that "cannot happen" can all too easily change in the future. Avoiding reality by saying it "cannot happen"? Reality is not limited by what you can think of. Commented Oct 23 at 23:32
  • (cont) And if your CI/CD isn't complete enough to catch a bug, it's not complete enough to catch a bug no matter how the bug manifests itself. Tossing an error exception or returning an error code on a nullptr can be tested for just as easily as an assertion failure. Commented Oct 23 at 23:38

Another option is to re-design the API so that the error doesn't occur.

void windowDestroy(Window &window)
{
    glfwDestroyWindow(window.handle);
}
bool windowShouldClose(const Window &window)
{
    const bool res = glfwWindowShouldClose(window.handle);
#ifdef DEBUG
    CHECK_GLFW_ERROR();
#endif
    return res;
}

A downside is that the interface may be less familiar, but if you have the freedom to design the API, it's an option.

freakish
answered Oct 23 at 8:54
