One of the tips for the jslint tool is:
++ and -- The ++ (increment) and -- (decrement) operators have been known to contribute to bad code by encouraging excessive trickiness. They are second only to faulty architecture in enabling viruses and other security menaces. There is a plusplus option that prohibits the use of these operators.
I know that PHP constructs like $foo[$bar++]
may easily result in off-by-one errors, but I couldn't figure out a better way to control the loop than a:
while (a < 10) { /* foo */ a++; }
or
for (var i=0; i<10; i++) { /* foo */ }
Is jslint highlighting them because some similar languages lack the "++" and "--" syntax or handle it differently, or are there other rationales for avoiding "++" and "--" that I might be missing?
++ doesn't cause bugs. Using ++ in "tricky" ways can lead to bugs, especially if more than one person is maintaining the codebase, but that's not a problem with the operator, it's a problem with the programmer. I didn't learn JS at university (because it didn't exist yet), but so what? I did do C, which of course had ++ first, but that also gets a "so what?" I didn't go to university to learn a specific language, I went to learn good programming practices that I can apply to any language.
My view is to always use ++ and -- by themselves on a single line, as in:
i++;
array[i] = foo;
instead of
array[++i] = foo;
Anything beyond that can be confusing to some programmers and is just not worth it in my view. For loops are an exception, as the use of the increment operator is idiomatic and thus always clear.
I'm frankly confused by that advice. Part of me wonders if it has more to do with a lack of experience (perceived or actual) with javascript coders.
I can see how someone just "hacking" away at some sample code could make an innocent mistake with ++ and --, but I don't see why an experienced professional would avoid them.
The problem is using ++ in more convoluted expressions, in which ++x is different from x++, resulting in something that is not easy to read. Crockford's idea is not about "can I do it?"; it's about "how can I avoid errors?"
There is a history in C of doing things like:
while (*a++ = *b++);
to copy a string; perhaps this is the source of the excessive trickery he is referring to.
And there's always the question of what
++i = i++;
or
i = i++ + ++i;
actually do. It's defined in some languages, and in others there's no guarantee what will happen.
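For what it's worth, JavaScript pins this down: the first line isn't even a valid assignment target (it's rejected with an error), and the second has a defined, if still unreadable, result. A quick sketch:
var i = 1;
i = i++ + ++i;  // i++ yields 1 (i becomes 2), then ++i yields 3 (i becomes 3)
console.log(i); // 4: well-defined in JavaScript, but hardly something you want to reason about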
Those examples aside, I don't think there's anything more idiomatic than a for loop that uses ++ to increment. In some cases you could get away with a foreach loop, or a while loop that checks a different condition. But contorting your code to try to avoid using incrementing is ridiculous.
x = a+++b
--> x = (a++) + b
--> x = a + b; a++
The tokeniser is greedy.
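A small illustration of that greediness (plain JavaScript, nothing else assumed):
var a = 1, b = 2;
var x = a+++b;        // tokenised as (a++) + b, not a + (++b)
console.log(x, a, b); // 3 2 2: a was post-incremented, b is untouched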
If you read JavaScript The Good Parts, you'll see that Crockford's replacement for i++ in a for loop is i+=1 (not i=i+1). That's pretty clean and readable, and is less likely to morph into something "tricky."
Crockford made disallowing autoincrement and autodecrement an option in jsLint. You choose whether to follow the advice or not.
My own personal rule is not to combine anything else with autoincrement or autodecrement.
I've learned from years of experience in C that I don't get buffer overruns (or array index out of bounds) if I keep its use simple. But I've discovered that I do get buffer overruns if I fall into the "excessively tricky" practice of doing other things in the same statement.
So, for my own rules, the use of i++ as the increment in a for loop is fine.
In a loop it's harmless, but in an assignment statement it can lead to unexpected results:
var x = 5;
var y = x++; // y is now 5 and x is 6
var z = ++x; // z is now 7 and x is 7
Whitespace between the variable and the operator can lead to unexpected results as well:
a = b = c = 1; a ++ ; b -- ; c; console.log('a:', a, 'b:', b, 'c:', c)
In a closure, unexpected results can be an issue as well:
var foobar = function(i){var count = count || i; return function(){return count++;}}
baz = foobar(1);
baz(); //1
baz(); //2
var alphabeta = function(i){var count = count || i; return function(){return ++count;}}
omega = alphabeta(1);
omega(); //2
omega(); //3
And it triggers automatic semicolon insertion after a newline:
var foo = 1, bar = 2, baz = 3, alpha = 4, beta = 5, delta = alpha
++beta; //delta is 4, alpha is 4, beta is 6
Pre-increment/post-increment confusion can produce off-by-one errors that are extremely difficult to diagnose. Fortunately, they are also completely unnecessary. There are better ways to add 1 to a variable.
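For instance, the i += 1 form mentioned elsewhere in this thread adds one without any pre/post ambiguity; a minimal sketch:
var x = 5;
x += 1;    // x is 6
var y = x; // y is 6; increment first, then use the value on its own line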
References
JSLint Help: Increment and Decrement Operators
That's like saying you shouldn't use == because you don't understand the difference between == and ===.
The whitespace example prints exactly what you'd expect: a: 2 b: 0 c: 1. I don't see anything weird or unexpected in the first example ("assignment statement") either.
Consider the following code
int a[10];
a[0] = 0;
a[1] = 0;
a[2] = 0;
a[3] = 0;
int i = 0;
a[i++] = i++;
a[i++] = i++;
a[i++] = i++;
Since i++ appears (and gets evaluated) twice in each statement, the output is (strictly speaking the behaviour is undefined in C; this is what the VS2005 debugger shows):
[0] 0 int
[1] 0 int
[2] 2 int
[3] 0 int
[4] 4 int
Now consider the following code:
int a[10];
a[0] = 0;
a[1] = 0;
a[2] = 0;
a[3] = 0;
int i = 0;
a[++i] = ++i;
a[++i] = ++i;
a[++i] = ++i;
Notice that the output is the same. Now you might think that ++i and i++ are the same. They are not:
[0] 0 int
[1] 0 int
[2] 2 int
[3] 0 int
[4] 4 int
Finally consider this code
int a[10];
a[0] = 0;
a[1] = 0;
a[2] = 0;
a[3] = 0;
int i = 0;
a[++i] = i++;
a[++i] = i++;
a[++i] = i++;
The output is now:
[0] 0 int
[1] 1 int
[2] 0 int
[3] 3 int
[4] 0 int
[5] 5 int
So they are not the same, and mixing both results in unintuitive behavior. I think that for loops are OK with ++, but watch out when you have multiple ++ operators on the same line or in the same statement.
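For comparison, a JavaScript sketch of the mixed form: JavaScript defines the evaluation order (the index expression is evaluated before the right-hand side), so the result is at least deterministic, though no easier to read:
var a = [0, 0, 0, 0, 0, 0];
var i = 0;
a[i++] = i++;   // a[0] = 1
a[i++] = i++;   // a[2] = 3
a[i++] = i++;   // a[4] = 5
console.log(a); // [1, 0, 3, 0, 5, 0]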
The "pre" and "post" nature of increment and decrement operators can tend to be confusing for those who are not familiar with them; that's one way in which they can be tricky.
In my view, "explicit is always better than implicit", because at some point you may get confused by an increment statement like y += x++ + ++y. A good programmer always makes his or her code more readable.
I've been watching Douglas Crockford's video on this, and his explanation for not using increment and decrement is that it has been used in the past in other languages to break the bounds of arrays and cause all manner of badness, and that it is more confusing, and inexperienced JS developers don't know exactly what it does.
Firstly, arrays in JavaScript are dynamically sized, so, forgive me if I'm wrong, it is not possible to break the bounds of an array and access data that shouldn't be accessed using these operators in JavaScript.
Secondly, should we really avoid things just because they are complicated? The problem is not that we have this facility; the problem is that there are developers out there who claim to do JavaScript but don't know how these operators work. It is simple enough: value++ gives me the current value and adds one to it after the expression; ++value increments the value before giving it to me.
Expressions like a ++ + ++ b are simple to work out if you just remember the above.
var a = 1, b = 1, c;
c = a ++ + ++ b;
// c = 1 + 2 = 3;
// a = 2 (equals two after the expression is finished);
// b = 2;
I suppose you've just got to remember who has to read through the code: if you have a team that knows JS inside out, then you don't need to worry. If not, then comment it, write it differently, etc. Do what you've got to do. I don't think increment and decrement are inherently bad or bug-generating or vulnerability-creating; maybe they're just less readable depending on your audience.
Btw, I think Douglas Crockford is a legend anyway, but I think he's caused a lot of scare over an operator that didn't deserve it.
I live to be proven wrong though...
The most important rationale for avoiding ++ or -- is that the operators return values and cause side effects at the same time, making it harder to reason about the code.
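A one-line sketch of that dual role:
var i = 0;
var j = i++; // a single expression with two effects: j gets 0 and i becomes 1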
For efficiency's sake, I prefer:
++i when not using the return value (no temporary)
i++ when using the return value (no pipeline stall)
I am a fan of Mr. Crockford, but in this case I have to disagree. ++i is 25% less text to parse than i+=1 and arguably clearer.
Another example, simpler than some of the others, simply returning the incremented value:
function testIncrement1(x) {
return x++;
}
function testIncrement2(x) {
return ++x;
}
function testIncrement3(x) {
return x += 1;
}
console.log(testIncrement1(0)); // 0
console.log(testIncrement2(0)); // 1
console.log(testIncrement3(0)); // 1
As you can see, post-increment/decrement should not be used in a return statement if you want the operator to influence the returned value. But the return doesn't cancel the post-increment: the variable is still incremented; only the old value is returned:
function closureIncrementTest() {
  var x = 0;

  function postIncrementX() {
    return x++; // returns the old value of x, then increments it
  }

  var y = postIncrementX(); // y is 0
  console.log(x); // 1; x was still incremented
}
I think programmers should be competent in the language they are using, use it clearly, and use it well. I don't think they should artificially cripple the language they are using. I speak from experience. I once worked literally next door to a COBOL shop where they didn't use ELSE "because it was too complicated". Reductio ad absurdum.
In my experience, ++i or i++ has never caused confusion other than when first learning how the operator works. It is essential for the most basic for loops and while loops taught in any high school or college course that uses a language with these operators. I personally find something like the line below easier to look at and read than a version with a++ on a separate line.
while (a < 10) { array[a++] = val; }
In the end it is a style preference and nothing more. What is more important is that when you do this in your code you stay consistent, so that others working on the same code can follow it and don't have to process the same functionality in different ways.
Also, Crockford seems to use i -= 1, which I find harder to read than --i or i--.
As mentioned in some of the existing answers (which annoyingly I'm unable to comment on), the problem is that x++ and ++x evaluate to different values (before vs. after the increment), which is not obvious and can be very confusing if that value is used. cdmckay suggests quite wisely to allow use of the increment operator, but only in a way where the returned value is not used, e.g. on its own line. I would also include the standard use within a for loop (but only in the third statement, whose return value is not used). I can't think of another example. Having been "burnt" myself, I would recommend the same guideline for other languages as well.
I disagree with the claim that this over-strictness is due to a lot of JS programmers being inexperienced. This is the exact kind of writing typical of "overly-clever" programmers, and I'm sure it's much more common in more traditional languages and with JS developers who have a background in such languages.
My 2 cents is that they should be avoided in two cases:
1) When you have a variable that is used across many lines and you increment/decrement it in the first statement that uses it (or the last, or, even worse, somewhere in the middle):
// It's Java, but applies to JS too
vi = list.get ( ++i );
vi1 = list.get ( i + 1 );
out.println ( "Processing values: " + vi + ", " + vi1 );
if ( i < list.size () - 1 ) ...
In examples like this, you can easily miss that the variable is auto-incremented/decremented, or even accidentally remove the first statement. In other words, use it only in very short blocks, or where the variable appears in just a couple of nearby statements.
2) When there are multiple ++ and -- operations on the same variable in the same statement. It's very hard to remember what happens in cases like this:
result = ( ++x - --x ) * x++;
Exams and professional tests ask about examples like the one above, and indeed I stumbled upon this question while looking for documentation about one of them, but in real life one shouldn't be forced to think so hard about a single line of code.
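To show how much bookkeeping that one line demands, here is a worked evaluation in JavaScript (taking x = 1 as a starting value; the operands are evaluated left to right):
var x = 1;
var result = ( ++x - --x ) * x++;
// ++x yields 2 (x is now 2), --x yields 1 (x is back to 1), so the bracket is 1
// x++ then yields 1 (x ends at 2), so result is 1 * 1 = 1
console.log(result, x); // 1 2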
Is Fortran a C-like language? It has neither ++ nor --. Here is how you write a loop:
integer i, n, sum
sum = 0
do 10 i = 1, n
sum = sum + i
write(*,*) 'i =', i
write(*,*) 'sum =', sum
10 continue
The index variable i is incremented by the language rules each time through the loop. If you want to increment by something other than 1, counting backwards by two for instance, the syntax is:
integer i
do 20 i = 10, 1, -2
write(*,*) 'i =', i
20 continue
Is Python C-like? It uses range and list comprehensions and other syntaxes to bypass the need for incrementing an index:
print range(10,1,-2) # prints [10, 8, 6, 4, 2]
[x*x for x in range(1,10)] # returns [1,4,9,16 ... ]
So based on this rudimentary exploration of exactly two alternatives, language designers may avoid ++ and -- by anticipating use cases and providing an alternate syntax.
Are Fortran and Python notably less of a bug magnet than procedural languages which have ++ and --? I have no evidence.
I claim that Fortran and Python are C-like because I have never met someone fluent in C who could not with 90% accuracy guess correctly the intent of non-obfuscated Fortran or Python.
The operators mean different things when used as prefixes versus suffixes, which can cause hard-to-find bugs. Consider the following example, using bubbleSort:
// swap(array, i, j) is assumed here to be a helper that exchanges two elements in place
function bubbleSort(array) {
  if (array.length === 1) return array;
  let end = array.length - 2;
  do {
    for (let i = 0; i < array.length; i += 1) {
      if (array[i] > array[i + 1]) {
        swap(array, i, i + 1);
      }
    }
  } while (end--);
}
bubbleSort([6,5]);
Let's imagine that in the course of running our program we pass a two-item array into our sort function. The code runs fine as-is: the "do/while" loop executes once before reaching the condition. At that point end is 0; end-- evaluates to that old value, which is falsy, so the loop exits (the decrement itself happens after the check).
Now consider the following code, where the -- operator is used as a prefix rather than a suffix. This code will enter an infinite loop:
function bubbleSort(array) {
  if (array.length === 1) return array;
  let end = array.length - 2;
  do {
    for (let i = 0; i < array.length; i += 1) {
      if (array[i] > array[i + 1]) {
        swap(array, i, i + 1);
      }
    }
  } while (--end);
}
bubbleSort([6,5]);
Now when we hit the while condition, we decrement end before checking it. --end evaluates to -1, which in JavaScript is a truthy value, so the loop never terminates.
I don't have a strong opinion on their use one way or the other, but I just wanted to show how they can cause real bugs when used carelessly.
An argument for ++ is that it is not applicable to strings and will cause a TypeError, which is almost always preferable.
This only matters if the i variable later becomes a composite class, in which case you'd be needlessly creating a temporary object. I find the postfix operator more aesthetically pleasing, though.