Why did I put this message in the OOP department? Well, using reference semantics is something OOP languages do; some may even say it's something compiled OOPLs have to do -- but I still find it ugly.
We work so hard to establish abstraction boundaries, only to have programmers return references to internal arrays and such by mistake.
Well, anyway, that's life.
Beginning programmers should use functional languages or functional subsets of languages in order to avoid altogether the idea of call by reference and side effects. Eventually they should learn about these concepts, and regrettably they should learn about C and pointer arithmetic too. More interesting, and a separate topic, is the idea of appropriate use of "exceptional values" even in functional languages.
Any interesting application in any language is going to exhibit a need for "exceptional data" sooner or later. For these situations, the language's equivalent of "null" should not be used.
The simplest example of this could be the Maybe type in Haskell:
data Maybe a = Just a | Nothing
but this is really another kind of "null".
Better, but more complex, examples are described in Ward Cunningham's Checks pattern language, which is really language-independent.
?String x = ...;
if (x != null) /* here x has type String */
The good thing about it is that it is quite straightforward compared to monads. It looks like mere case analysis.
You can also define higher-level operators: x || "default" evaluates to the value of x unless it is null, in which case it evaluates to "default". This expression, of course, has type String.
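In Haskell the same idioms are direct: case analysis on Maybe plays the role of the null test, and Data.Maybe's fromMaybe is exactly the "x || default" operator. A minimal sketch (the describe and greet functions are invented for illustration):

```haskell
import Data.Maybe (fromMaybe)

-- Case analysis: the Haskell analogue of "if (x != null)".
describe :: Maybe String -> String
describe mx = case mx of
  Just x  -> x           -- here x has type String
  Nothing -> "no value"

-- fromMaybe is the "x || default" operator: it yields the wrapped
-- value, or the default when the value is Nothing.
greet :: Maybe String -> String
greet name = "Hello, " ++ fromMaybe "stranger" name

-- greet (Just "Ada") == "Hello, Ada"
-- greet Nothing      == "Hello, stranger"
```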
This seems so obvious one has to ask why so few mainstream languages provide dedicated constructs for dealing with this issue.
The simplest example of this could be the Maybe type in Haskell:
data Maybe a = Just a | Nothing
but this is really another kind of "null".
Not quite.
First, Maybe a is distinct from a, so the programmer is forced to check it.
Second, Maybe (Maybe a) is distinct from Maybe a, so there can be multiple exceptional values.
Third, Maybe forms a monad, so error-checking code can be modularized and separated from applications which require it.
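A minimal sketch of that third point: Prelude's lookup returns Nothing for an absent key, and do-notation over Maybe short-circuits on the first Nothing, so the application logic carries no explicit error checks. (The ages table and ageDifference function are invented for illustration.)

```haskell
ages :: [(String, Int)]
ages = [("alice", 30), ("bob", 17)]

-- The Maybe monad threads the "is it there?" check invisibly:
-- a Nothing from either lookup aborts the whole computation.
ageDifference :: String -> String -> Maybe Int
ageDifference a b = do
  x <- lookup a ages
  y <- lookup b ages
  return (abs (x - y))

-- ageDifference "alice" "bob"   == Just 13
-- ageDifference "alice" "carol" == Nothing
```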
"Maybe a" is distinct from "a" - yes, another good point.
"We started with pointers but we moved on to something much more interesting." - and a very good point. I am also surprised how little attention this gets in the mainstream.
This was a minute detail in the language design, but it was crucial. The language (well, the first version anyway) didn't have if statements and such, so without the automatic special handling of missing and invalid data, the language would have been seriously crippled.
Even with if statements, this feature made programming much easier.
Ehud, what's the Haskell paper on exceptions?
In that context you don't need exceptions (nor does the language support them). Let me give an example. Suppose the age of some of the town residents in your database is unknown (it doesn't matter why this is the situation; assume it is out of your hands). You want a report listing all residents who will turn 18 this year.
All you need is that the language, while doing calculations with birth dates etc., propagates the fact that the information is missing.
The reason you can dispense with exceptions is that the handling of missing and invalid data is uniform, and embodied in the behaviour of the various operators.
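That uniform propagation can be sketched in Haskell with Maybe and fmap: unknown birth years flow through the arithmetic as Nothing, and the comparison never raises an exception. (The resident data and the year 2005 are invented for illustration.)

```haskell
residents :: [(String, Maybe Int)]   -- name, birth year (Nothing = unknown)
residents = [("ann", Just 1987), ("ben", Nothing), ("cal", Just 1990)]

-- The missing-data check lives once, inside fmap, not in the report logic.
turnsEighteenIn :: Int -> Maybe Int -> Maybe Bool
turnsEighteenIn year birthYear = fmap (\y -> year - y == 18) birthYear

report :: Int -> [String]
report year = [ name | (name, by) <- residents
                     , turnsEighteenIn year by == Just True ]

-- report 2005 == ["ann"]   (ben's unknown birth year propagates to
-- Nothing, so he is silently excluded rather than raising an exception)
```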
Naturally, a general purpose language cannot choose this approach.