An interesting document on Java’s shortcomings (from a C developer’s perspective) was written some time ago (around 2000?), but many of the arguments and issues are as true (or not) today as they were ten years ago.
The original Java Sucks posting.
Review of shortcomings
Java doesn’t have free().
The author lists this as a benefit, and 99% of the time it is a win. There are times when not having it is a downside: when you wish escape analysis
would eliminate, recycle or immediately free an object you know isn’t needed any more (IMHO the JIT / javac should be able to work this out in theory)
lexically scoped local functions
The closest Java has is anonymous inner classes. These are a poor cousin to closures (coming in Java 8), but they can be made to do the same thing.
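As a minimal sketch of the idiom (the Transformer interface and all names here are illustrative, not from the original post): an anonymous inner class can capture its surroundings and be passed around like a closure, just with more ceremony than a Java 8 lambda.

```java
// A hypothetical Transformer interface, used to show how anonymous inner
// classes stand in for closures before Java 8.
interface Transformer {
    String transform(String input);
}

public class AnonymousDemo {
    // Applies the transformer to a value; the caller supplies the behaviour.
    static String apply(Transformer t, String value) {
        return t.transform(value);
    }

    public static void main(String[] args) {
        final String suffix = "!"; // must be final to be captured (pre-Java 8)
        Transformer shout = new Transformer() {
            @Override
            public String transform(String input) {
                return input.toUpperCase() + suffix;
            }
        };
        System.out.println(apply(shout, "hello")); // HELLO!
    }
}
```

In Java 8 the same call site shrinks to `apply(s -> s.toUpperCase() + suffix, "hello")`.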
No macro system
Many of the useful tricks you can do with macros, Java can do for you dynamically. Not needing a macro system is an asset because you don’t need to know when Java will give you the same optimisations. There is an application start up cost that macros don’t have and you can’t do the really obfuscated stuff, but this is probably a good thing.
Explicitly Inlined functions
The JIT can inline methods for you. Java can inline methods from shared libraries, even if they are updated dynamically. This does come at a run-time cost, but it’s nicer not to need to worry about this IMHO.
I find lack of function pointers a huge pain
Function pointers make inlining methods more difficult for the compiler. If you are using object-oriented programming, I don’t believe you need them. For other situations, I believe closures in Java 8 are likely to be nicer.
The fact that static methods aren’t really class methods is pretty dumb
I imagine most Java developers have come across this problem at some stage. IMHO the nicest solution is to move the “static” functionality to its own class and not use static methods if you want polymorphism.
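A small sketch of the problem and the fix (class and method names are illustrative): static methods are resolved from the declared type at compile time, so a subclass can only hide them, never override them, while an instance method dispatches on the runtime type.

```java
// Static methods are not polymorphic; instance methods are.
class StaticBase {
    static String name() { return "Base"; }
    String instanceName() { return "Base"; }
}

class StaticDerived extends StaticBase {
    static String name() { return "Derived"; }   // hides, does not override
    @Override
    String instanceName() { return "Derived"; }  // overrides
}

public class StaticDispatchDemo {
    public static void main(String[] args) {
        StaticBase b = new StaticDerived();
        // Static call resolved from the declared type at compile time:
        System.out.println(StaticBase.name());   // Base
        // Instance call dispatched on the runtime type:
        System.out.println(b.instanceName());    // Derived
    }
}
```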
It’s far from obvious how one hints that a method should be inlined, or otherwise go real fast
Make it small and call it lots of times.
Two identical byte arrays aren’t equal and don’t hash the same
I agree that it’s a pretty ugly design choice not to make arrays proper objects. They inherit from Object, but don’t have useful implementations of toString, equals, hashCode or compareTo; clone() and getClass() are the most useful methods. You can use helper methods instead, but with many different helper classes called Array, Arrays, ArrayUtil and ArrayUtils in different packages, it’s all a mess for a new developer to deal with.
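The contrast in one small example: arrays fall back to Object’s identity-based equals/hashCode, and the standard java.util.Arrays helper supplies the value-based behaviour you usually wanted in the first place.

```java
import java.util.Arrays;

public class ArrayEqualityDemo {
    public static void main(String[] args) {
        byte[] a = {1, 2, 3};
        byte[] b = {1, 2, 3};

        // Arrays inherit Object's identity-based equals/hashCode:
        System.out.println(a.equals(b));                              // false
        // The helper class compares contents, which is usually what you want:
        System.out.println(Arrays.equals(a, b));                      // true
        System.out.println(Arrays.hashCode(a) == Arrays.hashCode(b)); // true
    }
}
```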
Hashtable/HashMap doesn’t allow you to provide a hashing function
This is also a pain if you want to change the behaviour. IMHO the best solution is to write a wrapper class which implements equals/hashCode, but this adds overhead.
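A minimal version of such a wrapper (the class name is illustrative): it gives a byte[] value-based equals/hashCode so it can be used as a map key, at the cost of one extra object per key.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Wraps a byte[] so it behaves as a value in hash-based collections.
final class ByteArrayKey {
    private final byte[] bytes;

    ByteArrayKey(byte[] bytes) { this.bytes = bytes; }

    @Override public boolean equals(Object o) {
        return o instanceof ByteArrayKey
                && Arrays.equals(bytes, ((ByteArrayKey) o).bytes);
    }

    @Override public int hashCode() { return Arrays.hashCode(bytes); }
}

public class ByteArrayKeyDemo {
    public static void main(String[] args) {
        Map<ByteArrayKey, String> map = new HashMap<ByteArrayKey, String>();
        map.put(new ByteArrayKey(new byte[]{1, 2}), "found");
        // A different array instance with the same contents finds the entry:
        System.out.println(map.get(new ByteArrayKey(new byte[]{1, 2}))); // found
    }
}
```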
iterate the characters in a String without implicitly involving half a dozen method calls per character
There is String.toCharArray(), but this creates a copy you don’t need, and the copy is not eliminated by escape analysis. When it is, this will be the obvious solution.
The same applies to “The other alternative is to convert the String to a byte[] first, and iterate the bytes, at the cost of creating lots of random garbage”
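The copy-free alternative is to loop over charAt(), which is the half-dozen-method-calls-per-character idiom the author is complaining about; in practice the JIT usually inlines it (the method below is an illustrative example, not from the original).

```java
public class CharIterationDemo {
    // Iterating with charAt() avoids the copy that toCharArray() makes,
    // at the cost of a method call per character (usually inlined by the JIT).
    static int countUpperCase(String s) {
        int count = 0;
        for (int i = 0; i < s.length(); i++) {
            if (Character.isUpperCase(s.charAt(i)))
                count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countUpperCase("Hello World")); // 2
    }
}
```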
overhead added by Unicode support in those cases where I’m sure that there are no non-ASCII characters.
Java 6 has a solution to this: -XX:+UseCompressedStrings. Unfortunately Java 7 has dropped support for this feature. I have no idea why, as this option improves performance (as well as reducing memory usage) in tests I have done.
Interfaces seem a huge, cheesy copout for avoiding multiple inheritance; they really seem like they were grafted on as an afterthought.
I prefer a contract which only lists functionality offered without adding implementation. The newer Virtual Extension Methods in Java 8 will provide default implementations without state. In some cases this will be very useful.
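A sketch of what such a default (virtual extension) method looks like in Java 8, assuming the feature lands as drafted; the Greeter interface here is invented for illustration. The interface supplies behaviour but still carries no state.

```java
public class DefaultMethodDemo {
    interface Greeter {
        String name();

        // The default implementation lives in the interface itself:
        default String greet() { return "Hello, " + name(); }
    }

    public static void main(String[] args) {
        Greeter g = new Greeter() {
            @Override public String name() { return "world"; }
        };
        System.out.println(g.greet()); // Hello, world
    }
}
```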
There’s something kind of screwy going on with type promotion
The problem here is solved by covariant return types, which Java 5.0+ now supports.
You can’t write a function which expects an Object and give it a short
Today you have auto-boxing. The author complains that Short and short are not the same thing. With auto-boxing this can make surprisingly little difference to efficiency in some cases. In other cases it does make a big difference, and I don’t foresee Java optimising this away transparently in the near future.
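To illustrate (method names invented for the example): since Java 5.0 a short passed where an Object is expected is boxed to a Short automatically.

```java
public class AutoboxingDemo {
    // Takes an Object, as in the complaint; a short argument is auto-boxed.
    static String typeOf(Object o) {
        return o.getClass().getSimpleName();
    }

    public static void main(String[] args) {
        short s = 42;
        System.out.println(typeOf(s)); // Short — boxed automatically since Java 5.0
    }
}
```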
it’s a total pain that one can’t iterate over the contents of an array without knowing intimate details about its contents
It’s rare you really need to do this IMHO. You can use Array.getLength(array) and Array.get(array, n) to handle a generic array. It’s ugly but you can do it. It’s one of the helper classes whose functionality should really be methods on the array itself IMHO.
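The ugly-but-workable version, using java.lang.reflect.Array (the join method is an invented example): the same code walks an int[] or a String[] without knowing the component type, with primitives coming back boxed.

```java
import java.lang.reflect.Array;

public class GenericArrayDemo {
    // Joins the elements of any array without knowing its component type.
    static String join(Object array) {
        StringBuilder sb = new StringBuilder();
        int length = Array.getLength(array);
        for (int i = 0; i < length; i++) {
            if (i > 0) sb.append(',');
            sb.append(Array.get(array, i)); // primitives come back boxed
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(join(new int[]{1, 2, 3}));     // 1,2,3
        System.out.println(join(new String[]{"a", "b"})); // a,b
    }
}
```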
The only way to handle overflow is to use BigInteger (and rewrite your code)
Languages like Scala support operators for BigInteger and it has been suggested that Java should too. I believe overflow detection is also being considered for Java 8/9.
I miss typedef
This allows you to use primitives and still get type safety. IMHO, the real issue is that the JIT cannot detect that a type is just a wrapper for a primitive (or two) and eliminate the need for the wrapper class. This would provide the benefits of typedef without changing the syntax and make the code more object-oriented.
I think the available idioms for simulating enum and :keywords are fairly lame
Java 5.0+ has enums, which are first-class objects and are surprisingly powerful.
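A taste of that power (the Planet example is adapted from a common illustration, not from the original post): enum constants can carry fields, take constructor arguments and define methods.

```java
public class EnumDemo {
    enum Planet {
        MERCURY(3.7), EARTH(9.8);

        final double surfaceGravity; // m/s^2

        Planet(double surfaceGravity) { this.surfaceGravity = surfaceGravity; }

        double weight(double massKg) { return massKg * surfaceGravity; }
    }

    public static void main(String[] args) {
        System.out.println(Planet.EARTH.weight(10.0));  // ~98.0
        System.out.println(Planet.valueOf("MERCURY"));  // MERCURY
    }
}
```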
there’s no efficient way to implement `assert’
assert is now built in. Implementing it yourself is made efficient by the JIT (probably not ten years ago).
By having `new’ be the only possible interface to allocation, … there are a whole class of ancient, well-known optimizations that one just cannot perform.
This should be performed by the JIT IMHO. Unfortunately, it rarely does, but this is improving.
The finalization system is lame.
Most people agree it’s best avoided. Perhaps it could be more powerful and reliable. ARM (Automatic Resource Management, the try-with-resources statement in Java 7) may be the answer.
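A minimal sketch of try-with-resources (Java 7), which gives the deterministic cleanup that finalizers cannot promise; the readAll helper is invented for the example.

```java
import java.io.IOException;
import java.io.StringReader;

public class ArmDemo {
    // The resource in the try(...) header is closed automatically when the
    // block exits, whether normally or via an exception.
    static String readAll(String text) {
        try (StringReader reader = new StringReader(text)) {
            StringBuilder sb = new StringBuilder();
            int c;
            while ((c = reader.read()) != -1)
                sb.append((char) c);
            return sb.toString();
        } catch (IOException e) { // StringReader won't actually throw here
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(readAll("closed automatically"));
    }
}
```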
Relatedly, there are no “weak pointers.”
Java has had weak, soft and phantom references since Java 1.2, but I suspect this is not what is meant here.
You can’t close over anything but final variables in an inner class!
This is true of anonymous inner classes, but not of nested inner classes referring to fields. Closures might not have this restriction, but that is likely to be just as confusing. Being used to the requirement for final variables, I don’t find this a problem, especially as my IDE will correct the code as required for me.
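The standard workaround for mutable captured state (names invented for illustration): the captured local must be final, so you hold the mutable value in a one-element array (or a field) instead.

```java
public class CaptureDemo {
    interface Counter { int next(); }

    static Counter makeCounter() {
        // The local itself must be final to be captured by the anonymous
        // class, so mutable state goes in a final one-element array.
        final int[] count = {0};
        return new Counter() {
            @Override public int next() { return ++count[0]; }
        };
    }

    public static void main(String[] args) {
        Counter c = makeCounter();
        System.out.println(c.next()); // 1
        System.out.println(c.next()); // 2
    }
}
```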
The access model with respect to the mutability (or read-only-ness) of objects blows
The main complaint appears to be that there are ways of treating final fields as mutable. This is required for de-serialization and dependency injectors. As long as you realise that you have two possible behaviours, one lower level than the other, it is far more useful than it is a problem.
The language also should impose the contract that literal constants are immutable.
Literal constants are immutable. It appears the author would like to expand what is considered a literal constant. It would be useful IMHO, to support const in the way C++ does. const is a keyword in Java and the ability to define immutable versions of classes without creating multiple implementations or read only wrappers would be more productive.
The locking model is broken.
The memory overhead of locking is really an implementation detail. It’s up to the JVM to decide how large the header is and whether it can be locked. The other concern is that there is no control over who can obtain a lock. The common workaround for this is to encapsulate your lock, which is what you would have to do in any case.
In theory the lock can be optimised away. Currently this only happens when the whole object is optimised away.
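The encapsulated-lock workaround looks like this (class and method names invented): by synchronising on a private object rather than on this, no outside code can contend for, or deadlock on, your lock.

```java
public class EncapsulatedLockDemo {
    // No outside code can synchronize on this private lock object,
    // unlike synchronized(this) or synchronized methods.
    private final Object lock = new Object();
    private long counter;

    public void increment() {
        synchronized (lock) {
            counter++;
        }
    }

    public long get() {
        synchronized (lock) {
            return counter;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final EncapsulatedLockDemo demo = new EncapsulatedLockDemo();
        Thread t = new Thread(new Runnable() {
            public void run() {
                for (int i = 0; i < 1000; i++) demo.increment();
            }
        });
        t.start();
        for (int i = 0; i < 1000; i++) demo.increment();
        t.join();
        System.out.println(demo.get()); // 2000
    }
}
```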
There is no way to signal without throwing
For this, I use a listener pattern with an onError method. There is no support in the language for this, but I don’t see the need for it.
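A sketch of that pattern (ErrorListener, onError and sumValid are all invented names): errors are reported through a callback instead of being thrown, so processing can continue past bad entries.

```java
public class ErrorListenerDemo {
    interface ErrorListener {
        void onError(String message);
    }

    // Parses what it can, signalling bad entries without aborting.
    static int sumValid(String[] numbers, ErrorListener listener) {
        int sum = 0;
        for (String s : numbers) {
            try {
                sum += Integer.parseInt(s);
            } catch (NumberFormatException e) {
                listener.onError("Not a number: " + s);
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        int sum = sumValid(new String[]{"1", "oops", "2"}, new ErrorListener() {
            @Override public void onError(String message) {
                System.err.println(message);
            }
        });
        System.out.println(sum); // 3
    }
}
```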
Doing foo.x should be defined to be equivalent to foo.x(),
Perhaps foo.x => foo.getX() would be a better choice, rather like C# does.
Compilers should be trivially able to inline zero-argument accessor methods to be inline object+offset loads.
The JIT does this, rather than the compiler. This allows the calling code to be changed after the callee has been compiled.
The notion of methods “belonging” to classes is lame.
This is a “cool” feature which some languages support. In a more dynamic environment, this can look nicer. The downside is that you can have pieces of code for a class scattered all over the place, and you would need some way of managing duplicates in different libraries: e.g. library A defines a new printString() method and library B also defines a printString() method for the same class. You would need to make each library see its own copy and have some way of determining which version library C wants when it calls this method.
It comes with hash tables, but not qsort
It comes with an “optimised merge sort” which is designed to be faster.
String has length+24 bytes of overhead over byte[]
That is without considering that each of the two objects is aligned to an 8-byte boundary (making the overhead higher). If that sounds bad, consider that malloc can be 16-byte aligned with a minimum size of 32 bytes. If you use a shared_ptr to a byte[] (to give you similar resource management) it can be much larger in C++ than in Java.
The only reason for this overhead is so that String.substring() can return strings which share the same value array.
This is not correct. The problem is that Java doesn’t support variable-sized objects (apart from arrays). This means that a String object is a fixed size, and to have a variable-sized field, you have to have another object. It’s not great either way.
String.substring can be a source of “memory leak”
You have to know to take an explicit copy if you are going to retain a substring of a larger string. This is ugly; however, the benefits usually outweigh the downside. A better solution would be to optimise the code so that a defensive copy is taken by default, except when the defensive copy is not needed (it is optimised away).
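The idiom in question, assuming a JDK where substring shares the backing char[] (this was changed in later JDK 7 updates, where substring copies anyway):

```java
public class SubstringCopyDemo {
    public static void main(String[] args) {
        String big = new String(new char[1000]).replace('\0', 'x') + "needle";

        // On JDKs where substring shares the backing array, retaining this
        // small string keeps the whole ~1006-char array alive:
        String shared = big.substring(1000);

        // The explicit copy detaches the result from the large array:
        String copy = new String(big.substring(1000));

        System.out.println(shared.equals(copy)); // true — same contents either way
    }
}
```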
The file manipulation primitives are inadequate
The file system information has been improved in Java 7. I don’t think these options are available directly, but they can be easily inferred if you need to know.
There is no robust way to ask “am I running on Windows” or “am I running on Unix.”
There are System properties os.name, os.arch, os.version which have always been there.
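For example (the isWindows helper is an invented illustration of the usual idiom):

```java
public class OsDetectDemo {
    // The usual idiom: inspect the os.name system property.
    static boolean isWindows() {
        return System.getProperty("os.name").toLowerCase().contains("windows");
    }

    public static void main(String[] args) {
        System.out.println(System.getProperty("os.name"));
        System.out.println(System.getProperty("os.arch"));
        System.out.println(System.getProperty("os.version"));
        System.out.println("Windows? " + isWindows());
    }
}
```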
There is no way to access link() on Unix, which is the only reliable way to implement file locking.
This was added in Java 7 as Files.createLink() (Creating a Hard Link).
There is no way to do ftruncate(), except by copying and renaming the whole file.
You can use FileChannel.truncate() (added in Java 1.4) or RandomAccessFile.setLength().
Is “%10s %03d” really too much to ask?
It was added in Java 5.0 with String.format() and java.util.Formatter.
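The exact format from the complaint now just works:

```java
public class FormatDemo {
    public static void main(String[] args) {
        // java.util.Formatter (Java 5.0) supports printf-style formats:
        System.out.println(String.format("%10s %03d", "ab", 7)); // "        ab 007"
        System.out.printf("%10s %03d%n", "ab", 7);               // same, directly
    }
}
```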
RandomAccessFile supports DataInput and DataOutput; FileInputStream and FileOutputStream can be wrapped in DataInputStream and DataOutputStream, so they can be made to support the same interfaces. I have never come across a situation where I would want to use both classes in a single method.
markSupported is stupid
True. There are a number of stupid methods which are only there for historical purposes, another being Object.wait(millis, nanos) on every object (even arrays), where the nanos argument is never really used.
What in the world is the difference between System and Runtime?
I agree it appears arbitrary and in some cases doubled up. System.gc() actually calls Runtime.getRuntime().gc() and yet is called System GC even in internal code. In hindsight they should really be one class, with monitoring functionality moved to JMX.
What in the world is application-level crap like checkPrintJobAccess() doing in the base language class library
So your SecurityManager can control whether you can perform printing (without having to have an application-level security manager as well). I am not sure this really removes the need for application-level security.