Default values can make programming easier. On the other hand, they can also confuse us. In C and C++, default values take considerable work: we need to know when the language (or our abstractions) provides them and when it doesn't. In some languages, things are clearer: numerics are always initialized to some flavor of zero.
Many people consider it good practice never to rely on defaults, but as we raise the level of abstraction in programming, taking advantage of them buys us considerable concision. If we need to look one element past our data, a pad of zeros at the end of a range can be useful, provided zero is the right value for our computation. It isn't always.
The other day, I had an idea. What if programming languages had a literal that simply represents the identity element under any operation that supports one? Under multiplication, it would take the value 1. Under addition, it would take the value 0. Under string concatenation, it would be the empty string.
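No mainstream language has this literal, but the idea can be sketched in a few lines. Here is a hypothetical `Identity` value in Python (the class and everything about it are my invention for illustration, not an existing feature) that absorbs into whichever operation touches it:

```python
class Identity:
    """Hypothetical identity literal: takes on the identity element of
    whatever operation it participates in (a sketch, not a real feature)."""
    def __add__(self, other): return other   # acts as 0 (or "" for strings)
    def __radd__(self, other): return other
    def __mul__(self, other): return other   # acts as 1
    def __rmul__(self, other): return other

e = Identity()
print(e + 5)        # 5   (e read as 0)
print(3 * e)        # 3   (e read as 1)
print(e + "abc")    # abc (e read as "")
```

Python's reflected methods (`__radd__`, `__rmul__`) are what let one value play 0, 1, and "" depending on context: when the left operand doesn't know how to combine with `Identity`, dispatch falls through to the sketch's methods.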
My immediate thought is that this would essentially "break" mathematics: two factorings of the same expression could have different values. For example, if e is that literal, x * (y + e) evaluates to x * y (the e reads as 0), while the distributed form x * y + x * e evaluates to x * y + x (that e reads as 1). On the other hand, maybe there are ways around that. Maybe the concept I'm describing is best seen as a cast rather than a value?
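The factoring problem is easy to see concretely with a hypothetical `Identity` value in Python (again my invention, not a real feature): distribute a product over a sum and the literal changes meaning mid-expression.

```python
class Identity:
    # Minimal sketch of the hypothetical identity literal.
    def __add__(self, other): return other
    def __radd__(self, other): return other
    def __mul__(self, other): return other
    def __rmul__(self, other): return other

e = Identity()
x, y = 3, 4
print(x * (y + e))    # 12: here e is read as 0
print(x * y + x * e)  # 15: after distributing, e is read as 1
```

Distributivity fails because `x * e` forces the multiplicative reading while `y + e` forces the additive one, so algebraically equal expressions no longer evaluate equally.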
At this point, I don't know. But I think there might be something to this. What would the ramifications be?
Join the conversation on Hacker News: https://news.ycombinator.com/item?id=6935134