The following code produces the output "Hello World!" (no really, try it).
public static void main(String... args) {
// The comment below is not a typo.
// \u000d System.out.println("Hello World!");
}
The reason for this is that the Java compiler parses the Unicode escape \u000d
as a new line, so the code is transformed into:
public static void main(String... args) {
// The comment below is not a typo.
//
System.out.println("Hello World!");
}
Thus, a comment ends up being "executed".
Since this can be used to "hide" malicious code or whatever an evil programmer can conceive, why is it allowed in comments?
Why is this allowed by the Java specification?
Unicode decoding takes place before any other lexical translation. The key benefit of this is that it makes it trivial to go back and forth between ASCII and any other encoding. You don't even need to figure out where comments begin and end!
As stated in JLS Section 3.3, this allows any ASCII-based tool to process the source files:
[...] The Java programming language specifies a standard way of transforming a program written in Unicode into ASCII that changes a program into a form that can be processed by ASCII-based tools. [...]
This gives a fundamental guarantee for platform independence (independence of supported character sets) which has always been a key goal for the Java platform.
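Because the decoding happens before tokenization, escapes work even in identifiers, not just in literals and comments. A small sketch of my own (class name hypothetical):
public class EscapedIdentifier {
    public static void main(String[] args) {
        // \u0061 is decoded to 'a' before tokenization, so this
        // declares an ordinary variable named 'a'.
        int \u0061 = 42;
        System.out.println(a); // prints 42
    }
}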
Being able to write any Unicode character anywhere in the file is a neat feature, and especially important in comments, when documenting code in non-Latin languages. The fact that it can interfere with the semantics in such subtle ways is just an (unfortunate) side effect.
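As an illustration of my own: a comment written in Japanese survives conversion to an ASCII-only form, because a tool can escape it without understanding Java syntax, and the compiler treats both forms identically:
// Original comment, as written by the author:
// こんにちは世界
// The same comment after an ASCII-izing round trip:
// \u3053\u3093\u306b\u3061\u306f\u4e16\u754c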
There are many gotchas on this theme and Java Puzzlers by Joshua Bloch and Neal Gafter included the following variant:
Is this a legal Java program? If so, what does it print?
\u0070\u0075\u0062\u006c\u0069\u0063\u0020\u0020\u0020\u0020
\u0063\u006c\u0061\u0073\u0073\u0020\u0055\u0067\u006c\u0079
\u007b\u0070\u0075\u0062\u006c\u0069\u0063\u0020\u0020\u0020
\u0020\u0020\u0020\u0020\u0073\u0074\u0061\u0074\u0069\u0063
\u0076\u006f\u0069\u0064\u0020\u006d\u0061\u0069\u006e\u0028
\u0053\u0074\u0072\u0069\u006e\u0067\u005b\u005d\u0020\u0020
\u0020\u0020\u0020\u0020\u0061\u0072\u0067\u0073\u0029\u007b
\u0053\u0079\u0073\u0074\u0065\u006d\u002e\u006f\u0075\u0074
\u002e\u0070\u0072\u0069\u006e\u0074\u006c\u006e\u0028\u0020
\u0022\u0048\u0065\u006c\u006c\u006f\u0020\u0077\u0022\u002b
\u0022\u006f\u0072\u006c\u0064\u0022\u0029\u003b\u007d\u007d
(This program turns out to be a plain "Hello World" program.)
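Decoding the escapes by hand (my own working, whitespace normalized) shows why the class is called Ugly:
public class Ugly{public static void main(String[] args){System.out.println("Hello w"+"orld");}}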
In the solution to the puzzler, they point out the following:
More seriously, this puzzle serves to reinforce the lessons of the previous three: Unicode escapes are essential when you need to insert characters that can’t be represented in any other way into your program. Avoid them in all other cases.
Source: Java: Executing code in comments?!
Since this hasn't been addressed yet, here is an explanation of why the translation of Unicode escapes happens before any other source code processing:
The idea behind it was that it allows lossless translations of Java source code between different character encodings. Today, there is widespread Unicode support, and this doesn’t look like a problem, but back then it wasn’t easy for a developer from a western country to receive some source code from his Asian colleague containing Asian characters, make some changes (including compiling and testing it) and sending the result back, all without damaging something.
So, Java source code can be written in any encoding and allows a wide range of characters within identifiers, character and String literals, and comments. Then, in order to transfer it losslessly, all characters not supported by the target encoding are replaced by their Unicode escapes.
This is a reversible process, and the interesting point is that the translation can be done by a tool which doesn't need to know anything about the Java source code syntax, as the translation rule doesn't depend on it. This works because the translation to the actual Unicode characters inside the compiler happens independently of the Java source code syntax as well. It implies that you can perform an arbitrary number of translation steps in both directions without ever changing the meaning of the source code.
This is the reason for another weird feature which hasn't even been mentioned: the \uu…xxxx syntax.
When a translation tool is escaping characters and encounters a sequence that is already an escape sequence, it should insert an additional u into the sequence, converting \ucafe to \uucafe. The meaning doesn't change, but when converting in the other direction, the tool should just remove one u and replace only sequences containing a single u with their Unicode characters. That way, even Unicode escapes are retained in their original form when converting back and forth. I guess no-one ever used that feature…
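For what it's worth, javac does accept the extra u. A minimal sketch of my own (class name hypothetical):
public class UuDemo {
    public static void main(String[] args) {
        // JLS §3.3 allows any number of u's after the backslash, so both
        // escapes below denote 'H' and both lines print "Hello".
        System.out.println("\u0048ello");
        System.out.println("\uu0048ello");
    }
}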
native2ascii doesn't seem to use the \uu…xxxx syntax, though. native2ascii was intended to help prepare resource bundles by converting them to ISO Latin-1, as Properties.load was fixed to read Latin-1 only. And there, the rules are different: there is no \uu… syntax and no early processing stage. In property files, property=multi\u000aline is indeed the same as property=multi\nline. (This contradicts the phrase "using Unicode escapes as defined in section 3.3 of The Java™ Language Specification" in the documentation.)
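That difference is easy to check. A small sketch of my own (note the doubled backslashes, which keep the escapes out of reach of the compiler's own pre-processing):
import java.io.StringReader;
import java.util.Properties;

public class PropsEscapeDemo {
    public static void main(String[] args) throws Exception {
        Properties p = new Properties();
        // Properties.load decodes \u000a and \n itself while parsing values,
        // not in an early pre-processing pass like the Java compiler does.
        p.load(new StringReader("a=multi\\u000aline\nb=multi\\nline"));
        System.out.println(p.getProperty("a").equals(p.getProperty("b"))); // true
    }
}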
The real problem is allowing \u escapes to generate characters in the U+0000–007F range. (All such characters can be represented natively by all the national encodings that were relevant in the 1990s—well, maybe except some of the control characters, but you don't need those to write Java anyway.)
I'm going to completely ineffectually add the point, just because I can't help myself and I haven't seen it made yet, that the question is invalid since it contains a hidden premise which is wrong, namely that the code is in a comment!
In Java source code, \u000d is equivalent in every way to an ASCII CR character. It is a line ending, plain and simple, wherever it occurs. The formatting in the question is misleading: what that sequence of characters actually corresponds to, syntactically, is:
public static void main(String... args) {
// The comment below is not a typo.
//
System.out.println("Hello World!");
}
IMHO the most correct answer is therefore: the code executes because it isn't in a comment; it's on the next line. "Executing code in comments" is not allowed in Java, just like you would expect.
Much of the confusion stems from the fact that syntax highlighters and IDEs aren't sophisticated enough to take this situation into account. They either don't process the Unicode escapes at all, or they do it after parsing the code instead of before, as javac does.
The \u000d escape terminates a comment because \u escapes are uniformly converted to the corresponding Unicode characters before the program is tokenized. You could equally use \u002F\u002F instead of // to begin a comment.
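A compilable sketch of my own showing exactly that:
public class EscapedComment {
    public static void main(String[] args) {
        \u002F\u002F this line turns into a "//" comment before tokenization
        System.out.println("done");
    }
}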
This is a bug in your IDE, which should syntax-highlight the line to make it clear that the \u000d ends the comment.
This is also a design error in the language. It can't be corrected now, because that would break programs that depend on it. \u escapes should either be converted to the corresponding Unicode character by the compiler only in contexts where that "makes sense" (string literals and identifiers, and probably nowhere else), or they should have been forbidden to generate characters in the U+0000–007F range, or both. Either of those semantics would have prevented the comment from being terminated by the \u000d escape, without interfering with the cases where \u escapes are useful—note that that includes use of \u escapes inside comments as a way to encode comments in a non-Latin script, because the text editor could take a broader view of where \u escapes are significant than the compiler does. (I am not aware of any editor or IDE that will display \u escapes as the corresponding characters in any context, though.)
There is a similar design error in the C family,¹ where backslash-newline is processed before comment boundaries are determined, so e.g.
// this is a comment \
this is still in the comment!
I bring this up to illustrate that it happens to be easy to make this particular design error, and not realize that it's an error until it is too late to correct it, if you are used to thinking about tokenization and parsing the way compiler programmers think about tokenization and parsing. Basically, if you have already defined your formal grammar and then someone comes up with a syntactic special case — trigraphs, backslash-newline, encoding arbitrary Unicode characters in source files limited to ASCII, whatever — that needs to be wedged in, it's easier to add a transformation pass before the tokenizer than it is to redefine the tokenizer to pay attention to where it makes sense to use that special case.
¹ For pedants: I am aware that this aspect of C was 100% intentional, with the rationale — I am not making this up — that it would allow you to mechanically force-fit code with arbitrarily long lines onto punched cards. It was still an incorrect design decision.
The decision to process \u escapes early was arguably less absurd than the decision to follow C's lead in using leading zeroes for octal notation. While octal notation is sometimes useful, I've yet to hear anyone articulate an argument why a leading zero is a good way of indicating it.
There would be nothing wrong with \u as a pre-tokenization transformation if it were forbidden to produce characters in the U+0000..U+007F range. It's the combination of "this works everywhere" and "this aliases ASCII characters with syntactic significance" that demotes it from awkward to flat-out wrong.
Note that when backslash-newline was introduced in C, the // single-line comment didn't exist. And since C has a statement terminator that is not a new line, it would mostly be used for long strings, except that, as far as I can determine, "string literal concatenation" was there from K&R.
This was an intentional design choice that goes all the way back to the original design of Java.
To those folks who ask "who wants Unicode escapes in comments?", I presume they are folks whose native language uses the Latin character set. In other words, it is inherent in the original design of Java that folks could use arbitrary Unicode characters wherever legal in a Java program, most typically in comments and strings.
It is arguably a shortcoming in programs (like IDEs) used to view the source text that such programs cannot interpret the Unicode escapes and display the corresponding glyph.
I agree with @zwol that this is a design mistake; but I'm even more critical of it.
The \u escape is useful in string and char literals, and that's the only place it should exist. It should be handled the same way as other escapes like \n; and "\u000A" should mean exactly "\n".
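Notably, under the current rules you cannot even write that escape directly inside a literal: it becomes a real line terminator before tokenization and leaves the string unterminated. A sketch of my own showing the status quo (with doubled backslashes so the file stays valid):
public class EscapeStatusQuo {
    public static void main(String[] args) {
        // "\ u000A" written without the space would not compile: the prescan
        // turns the escape into a real line terminator inside the literal.
        String raw = "\\u000A"; // six characters: \ u 0 0 0 A
        String ok = "\n";       // the conventional escape, one character
        System.out.println(raw.length()); // 6
        System.out.println(ok.length());  // 1
    }
}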
There is absolutely no point in having \uxxxx in comments; nobody can read that.
Similarly, there's no point in using \uxxxx in other parts of the program. The only exception is probably in public APIs that are coerced to contain some non-ASCII chars; when was the last time we saw that?
The designers had their reasons in 1995, but 20 years later, this appears to be a wrong choice.
(Question to readers: why does this question keep getting new votes? Is it linked from somewhere popular?)
Nobody writes int \u5431 when they can write int 整. But you couldn't write int 整 when the source file was stored in a non-Unicode encoding (remember, there was no widespread UTF-8 support in 1995). You just have to call one method and don't want to install the Asian language support pack of your operating system (remember, the nineties) for that single method…
The only people who can answer why Unicode escapes were implemented as they were are the people who wrote the specification.
A plausible reason for this is that there was the desire to allow the entire BMP as possible characters of Java source code. This presents a problem though:
You want to be able to use any BMP character.
You want to be able to input any BMP character reasonably easily. A way to do this is with Unicode escapes.
You want to keep the lexical specification easy for humans to read and write, and reasonably easy to implement as well.
This is incredibly difficult when Unicode escapes enter the fray: it creates a whole load of new lexer rules.
The easy way out is to do the lexing in two steps: first search and replace all Unicode escapes with the characters they represent, and then parse the resulting document as if Unicode escapes didn't exist.
The upside to this is that it's easy to specify, so it makes the specification simpler, and it's easy to implement.
The downside is, well, your example.
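To make the two-step approach concrete, here is a minimal sketch of such a pre-pass (my own illustration, not javac's actual code; it assumes well-formed escapes and leaves malformed ones alone, where a real compiler would report an error):
public class UnicodePrescan {
    // Step 1: replace all Unicode escapes before any tokenization.
    // Per JLS §3.3, a backslash starts an escape only if it is preceded
    // by an even number of backslashes, and any number of u's may follow.
    static String translate(String source) {
        StringBuilder out = new StringBuilder(source.length());
        int i = 0;
        while (i < source.length()) {
            char c = source.charAt(i);
            if (c == '\\' && i + 1 < source.length() && source.charAt(i + 1) == 'u') {
                int backslashes = 0;
                for (int k = i - 1; k >= 0 && source.charAt(k) == '\\'; k--) backslashes++;
                int j = i + 1;
                while (j < source.length() && source.charAt(j) == 'u') j++; // extra u's allowed
                if (backslashes % 2 == 0 && j + 4 <= source.length()) {
                    try {
                        out.append((char) Integer.parseInt(source.substring(j, j + 4), 16));
                        i = j + 4;
                        continue;
                    } catch (NumberFormatException malformed) {
                        // not a valid escape; copy the backslash verbatim below
                    }
                }
            }
            out.append(c);
            i++;
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // The escape becomes a real CR, so the println lands outside the comment.
        System.out.println(translate("// comment \\u000d System.out.println(\"Hi\");"));
    }
}
Step 2 would then be an ordinary lexer that never has to know the escapes existed.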
"The reason for this is that the Java compiler parses the Unicode character \u000d as a new line".
If true, then that's precisely where the error occurs.
Java compilers should perhaps refuse to compile this source, because (as Java source code) it is ill-formed, thus either bad to begin with, tampered with en route, or mutated by something in the tool-chain that does not understand the transformation rules. They should not blindly transform it.
If the editor in question is an ASCII-only tool, then said editor is doing the right thing--treating the Unicode escape sequence as a meaningless string of characters in (an ill-formed) comment.
If the editor in question is a Unicode-aware tool, then it is also doing the right thing--leaving the Unicode escape sequence "as is", and treating it as a meaningless string of characters in (an ill-formed) comment.
Lossless, reversible conversion requires transformations that map one-to-one and onto; thus the intersection of the two sets must be empty. Here the two sets in question can overlap even if no characters are modified by a correctly implemented escape-ifying transformation, because escaped Unicode in the range U+0000–007F might already be present in the input stream.
If the goal is lossless, reversible conversion between Unicode and ASCII, the requirement for transforming to/from ASCII is to escape-ify/re-encode any Unicode characters greater than hex 007F, and leave the rest alone.
Having done that, a language that is Unicode-aware will treat escaped Unicode characters as an error anywhere other than inside a comment or a string: they must not be converted within comments, but they must be converted within strings. Therefore conversion must not happen until after lexical analysis has turned the source into tokens (i.e., lexemes), allowing conversions to be done in a type-safe manner.
From the comments:
The \u000d and the part after it should get code highlighting in the IDE.
A comment like // C:\user\... leads to a compile error, since \user isn't a valid Unicode escape sequence.
In some IDEs \u000d is highlighted only partially, and after pressing Ctrl+Shift+F the character is replaced with a new line and the rest of the line is wrapped.
By the same rule, \u002A/ should end a block comment.
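It does. A quick check of my own:
public class StarDemo {
    public static void main(String[] args) {
        /* this comment ends at \u002A/ System.out.println("escaped the comment!");
    }
}
The println compiles and runs because \u002A/ becomes */ before the comment boundaries are determined.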