JVM PermGen – where art thou?

This post covers some basics of JVM memory structure and quickly peeks into PermGen to find out where it has disappeared to since the advent of Java SE 8.

Bare Basics

The JVM is just another process running on your system, and the magic begins with the java command. Like any OS process, it needs memory for its runtime operations. Remember – the JVM itself is a software abstraction of hardware, on top of which Java programs run and boast OS independence and WORA (write once, run anywhere).

Quick coverage of the JVM memory structure

As per the spec, JVM memory is divided into 5 virtual segments: Heap, Method Area (non-heap), JVM Stacks, Native Stacks, and PC Registers.

Heap

Every object instantiated in your Java program needs to be stored in memory, and the heap is the area where all the instantiated objects get stored. Yes – blame the new operator for filling up your Java heap!
- Shared by all threads.
- The JVM throws java.lang.OutOfMemoryError when it is exhausted.
- Use the -Xms and -Xmx JVM options to tune the heap size.
- Sub-divided into:
  - Eden (Young) – new objects, or ones with a short life expectancy, exist in this area, which is regulated using the -XX:NewSize and -XX:MaxNewSize parameters. Minor GC (garbage collection) sweeps this space.
  - Survivor – objects which are still being referenced, and so manage to survive garbage collection in the Eden space, end up in this area. This is regulated via the -XX:SurvivorRatio JVM option.
  - Old (Tenured) – this is for objects which survive garbage collections in both the Eden and Survivor spaces (due to lingering references, of course). Object de-allocation in the tenured space is taken care of by major GC.

Method Area

Also called the non-heap area (in the HotSpot JVM implementation), it is divided into 2 major sub-spaces:
- Permanent Generation – this area stores class-related data: class definitions and structures, methods, fields, method data and code, and constants.
It can be regulated using -XX:PermSize and -XX:MaxPermSize, and it causes java.lang.OutOfMemoryError: PermGen space if it runs out of space.
- Code Cache – this area is used to store compiled code, i.e. native (hardware-specific) code produced by the JIT (Just In Time) compiler, which is specific to the Oracle HotSpot JVM.

JVM Stacks
- Have a lot to do with methods in Java classes.
- Store local variables and regulate method invocation, partial results, and return values.
- Each thread in Java has its own (private) stack, which is not accessible to other threads.
- Tuned using the -Xss JVM option.

Native Stacks
- Used for native methods (non-Java code).
- Allocated per thread.

PC Registers
- Program counter specific to a particular thread.
- Contain addresses of the JVM instructions currently being executed (undefined in the case of native methods).

So, that's about it for the JVM memory segment basics. Coming back to the Permanent Generation: so where is PermGen?

Essentially, PermGen has been completely removed and replaced by another memory area known as the Metaspace.

Metaspace – quick facts
- It's part of the native memory, not the Java heap.
- Can be tuned using -XX:MetaspaceSize and -XX:MaxMetaspaceSize.
- Clean-up initiation is driven by the -XX:MetaspaceSize option, i.e. it kicks in when MetaspaceSize is reached.
- java.lang.OutOfMemoryError: Metadata space is thrown if the native space is exhausted.
- The PermGen-related JVM options, i.e. -XX:PermSize and -XX:MaxPermSize, are ignored if present.

This was obviously just the tip of the iceberg. For comprehensive coverage of the JVM, there is no reference better than the specification itself! You can also explore The Java Language Specification and What's new in Java 8?

Cheers!

Reference: JVM PermGen – where art thou? from our JCG partner Abhishek Gupta at the Object Oriented.. blog....
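The disappearance of PermGen can be confirmed on your own JVM with the standard java.lang.management API: list the memory pools, and on Java 8+ the non-heap pools include Metaspace where a PermGen pool used to appear. A minimal sketch (the class name MemoryPools is just an example):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MemoryPools {
    public static void main(String[] args) {
        // Print every memory pool the running JVM manages.
        // On Java 8+ the non-heap pools include "Metaspace"
        // (and "Compressed Class Space") instead of a PermGen pool.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.println(pool.getType() + ": " + pool.getName());
        }
    }
}
```

The exact pool names vary by collector and JVM version, but the PermGen-to-Metaspace switch is easy to spot this way.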

Caveats of HttpURLConnection

Does this piece of code look ok to you?

    HttpURLConnection connection = null;
    try {
        connection = (HttpURLConnection) url.openConnection();
        try (InputStream in = connection.getInputStream()) {
            return streamToString(in);
        }
    } finally {
        if (connection != null) connection.disconnect();
    }

Looks good – it opens a connection, reads from it, closes the input stream, releases the connection, and that's it. But while running some performance tests and trying to figure out a bottleneck issue, we found out that disconnect() is not as benign as it seems: when we stopped disconnecting our connections, there were twice as many outgoing connections. Here's the javadoc:

    Indicates that other requests to the server are unlikely in the near future. Calling disconnect() should not imply that this HttpURLConnection instance can be reused for other requests.

And on the class itself:

    Calling the disconnect() method may close the underlying socket if a persistent connection is otherwise idle at that time.

This is still unclear, but it gives us a hint that there's something more. After reading a couple of stackoverflow and java.net answers (1, 2, 3, 4) and also the android documentation of the same class, which is actually different from the Oracle implementation, it turns out that .disconnect() actually closes (or may close, in the case of android) the underlying socket. Then we can find this bit of documentation (it is linked in the javadoc, but it's not immediately obvious that it matters when calling disconnect), which gives us the whole picture: the keep.alive property (default: true) indicates that sockets can be reused by subsequent requests. That works by leaving the connection to the server (which supports keep-alive) open, so the overhead of opening a socket is no longer needed. By default, up to 5 such sockets are reused (per destination). You can increase this pool size by setting the http.maxConnections property.
However, after increasing that to 10, 20 and 50, there was no visible improvement in the number of outgoing requests. When we switched from HttpURLConnection to the apache http client with a pooled connection manager, however, we had 3 times more outgoing connections per second – and that's without fine-tuning it. Load testing, i.e. bombarding a target server with as many requests as possible, sounds like a niche use-case. But in fact, if your application invokes a web service, either within your stack or an external one, as part of each request, then you have the same problem: you will be able to make fewer requests per second to the target server, and consequently respond to fewer requests per second to your users. The advice here is: almost always prefer the apache http client. It has a way better API and it seems to have way better performance, without the need to understand exactly how it functions underneath. But be careful of the same caveats there as well – check the pool size and connection reuse. If using HttpURLConnection, do not disconnect your connections after you read their response, consider increasing the socket pool size, and be careful of related problems.

Reference: Caveats of HttpURLConnection from our JCG partner Bozhidar Bozhanov at the Bozho's tech blog blog....
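The keep-alive-friendly HttpURLConnection usage described above can be sketched as follows. Note the assumptions: streamToString and fetch are illustrative names (the original post only references streamToString), and the pool size of 20 is an arbitrary example value.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class KeepAliveFetch {

    static {
        // Raise the per-destination keep-alive pool above its default of 5.
        // This must be set before the first connection is opened.
        System.setProperty("http.maxConnections", "20");
    }

    // Drain a stream completely; fully reading the response body is part
    // of what lets the JVM return the underlying socket to its pool.
    static String streamToString(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[8192];
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
        }
        return out.toString(StandardCharsets.UTF_8.name());
    }

    static String fetch(URL url) throws IOException {
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        try (InputStream in = connection.getInputStream()) {
            return streamToString(in);
        }
        // Deliberately no disconnect(): the socket stays available for reuse.
    }
}
```

This is a sketch of the article's advice, not a drop-in client; production code would also handle error streams and non-2xx responses.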

The Hidden Cost Of Estimation

"Why would you want a rough estimate, when I can do a more precise one?" And really, if we can do something better, why do it half way? There's a simple answer, but I'll give it after the long, detailed one. Let's start by asking again:

Why estimate at all?

There's a whole #NoEstimates discussion about whether we need estimations or not. Unless your organization is mature enough to handle the truth, someone will want an estimation, believing he can do something with it: approve the task, delay it, budget for it, plan subsequent operations. That someone needs information to make decisions, and is basing them on the numbers we give. In reality, unless there are orders of magnitude between the estimate and the expected result, it doesn't matter. If we have a deadline in 6 months, and the estimation is 8 months, the project will probably be approved anyway, knowing that we can remove scope from it. If we estimated that a project will take a year, there's going to be a 3 month buffer between it and the next one, because "we know how it works". Things usually go forward regardless of our estimation. If, however, we estimate we need 5 times the budget we thought we needed, this may cause the project to be cancelled. In summary, the upfront estimation serves decision making. In fact, if you just go with the discussion, and leave the numbers out, you can reach the same decisions. So why do we need the numbers? Numbers are good proxies. They are simple, manageable, and we can draw wonderful graphs with them. The fact that they are wrong, or right only in a very small number of cases, is really irrelevant, because we like numbers. Still, someone high up asked for them; shouldn't we give them the best answer we can? Indeed we should. But we need to define what the "best answer" is, and how we get it.

How do we estimate?

How do we get to the "it will take 3 months" answer? We rely on past experience.
We apply our experience, hopefully, or someone else's, to compare similar projects from our past to the one we're estimating. We may even have collected data, so our estimates are not based on our bad memory. Software changes all the time, so even past numbers should be modified. We don't know how to factor in the things we don't know how to do, or the "unknown unknowns" that will bite us, so we multiply by a factor until a consensus is reached. We forget stuff, we assume stuff, but in the end we get to the "3 months" answer we can live with. Sometimes. How about the part we do know about – can't we estimate that one more precisely? We can break it down by design, technology and risk and estimate it "better". We can. But there's a catch. Suppose that after we finish the project, we find that 30% of it consisted of the "unknown unknowns" stuff. We could have estimated the other 70% very precisely, but the whole estimation would still be volatile. (I'm being very conservative here; the "unknown unknowns" at the time of estimation are what makes up most of a project.)

The simple answer

So here is what we know:
- Estimation is mostly wrong.
- People still want estimates.
- It takes time to estimate.
- Precise estimation costs more.
- Precise and rough estimations have the same statistical meaning, because of the unknowns.

That means that we need "good enough" estimates. These are the ones that cost less, and give a good enough, trusted basis for decisions to the people who ask for them. Fredkin's Paradox says that the closer the options we need to decide between, the longer it takes us to decide, while the difference in impact of choosing one over the other becomes negligible. Effective estimation recognizes the paradox and tries to fight it: because variations in the estimates have little impact, there's no need to deliberate them further. If you get to the same quality of answer, you should go for the cheaper option.
Precise estimates are costly, and you won’t get a benefit from making them more precise. In fact, as a product manager, I wouldn’t ask for precise estimates, because they cost me money and time not being spent on actual delivery. Working software over comprehensive documentation, remember?Reference: The Hidden Cost Of Estimation from our JCG partner Gil Zilberfeld at the Geek Out of Water blog....

The HVT Analysis Approach

In my career it took me some time to understand and be convinced of the importance of doing analysis. I still remember my first job experience: I just wanted to quickly write some code and refactor it n times to get better results. Didn't you? Things are different today, and I am writing this post to share with you my personal approach to analysis. It is not something new; it is just my tailored method, based on experience and well-known methodologies. In general you get as input a problem domain (the observed system, with all the entities and rules involved) and must produce as output a solution domain (the designed system that solves the original analysis problem). So a good start is to:
- Get a clear view of the problem to solve,
- Reduce the problem domain to a minimum knowledge space.

If I had to quickly define Analysis, I would say that it is about finding details, categorizing knowledge and solving problems. The HVT approach is made of three steps:
- Horizontal Analysis (Layers)
- Vertical Analysis (Integration)
- Transverse Analysis (Influences)

To briefly describe the method I will use examples, based on a quite common case study: building a house. In this case study our goal is to identify all the requirements that an architect needs in order to design the building.

Phase 1: The Horizontal Analysis

Horizontally we search for common aspects, grouping them into named Layers with minimal coupling between them (and ideally no cyclic dependencies). The scope is to detect all the functional requirements and have a good vision of their functional context and boundaries.
In our example we could define the following layers:
- Geography: The ground has to be solid; it must be far from floods; it must be close to green areas; it must be close to schools; …
- Infrastructure: Has to be connected to electricity and water providers; has to be connected to the city sewers; must have a very fast internet connection; …
- Technology: Most devices must be remotely controllable; doors have to open with a human-recognition technology; must produce solar electricity; …
- Security & Ergonomics: A modern alarm system has to be installed; the interior design has to be safe for babies; it must be accessible for old people; …

Phase 2: The Vertical Analysis

Vertically we study Integration. Integration means putting things to work together; if things are intrinsically interoperable, then integration efforts will be minimized. And this is our scope. For this step we choose profiles, and we do analysis by formulating integration questions. In our example:
- Adult: Will an adult have easy access to all the necessary devices? Does the furniture fit him well? Is the kitchen comfortable? …
- Baby: Are stairs well protected? Will he have enough space to play? Is the chosen floor easy to clean? …
- Apartment: Does the apartment easily access infrastructure services? Does it have a homogeneous design? Does it have enough light? …

A negative answer probably means a loop back to the previous phase to make some adaptations that increase interoperability.

Phase 3: The Transverse Analysis

The last step is the most complicated, and not always needed. Its purpose is to study the indirect influences of different layers spanning different profiles. As in phase 2, we do it by analyzing the ongoing model and formulating questions. In our example some questions could be:
- Will the Wi-Fi required by an adult be dangerous for a baby? Maybe it would be better to have less signal power in his sleeping room.
- Will an adult be able to sleep when his child is playing an electric guitar?
Maybe the child's room should be acoustically isolated.

This process is sometimes difficult, because it will surely bring you to find conflicts between profiles that sometimes do not want to (or simply cannot) renounce their needs.

Conclusion

Even if the case study was very simple, I hope you get an idea of what it means to move it to more complex domains. For example, in Software Architecture clients are stakeholders, needs are functional and non-functional requirements, profiles are applications or components, and an indirect influence can be a compatibility matrix of the used technologies. It is not easy to explain this approach in a few lines; there are thousands of words I missed, so if you have questions or any advice, do not hesitate to leave a comment and share it.

Reference: The HVT Analysis Approach from our JCG partner Marco Di Stefano at the Refactoring Ideas blog....

Name of the class

In Java every class has a name. Classes live in packages, and this lets us programmers work together while avoiding name collisions. I can name my class A and you can also name your class A; as long as they are in different packages, they work together fine. If you looked at the API of the class Class, you certainly noticed that there are three different methods that give you the name of a class:
- getSimpleName() gives you the name of the class without the package.
- getName() gives you the name of the class with the full package name in front.
- getCanonicalName() gives you the canonical name of the class.

Simple, isn't it? Well, the first is simple, and the second is also meaningful, unless there is that disturbing canonical name. It is not evident what that is. And if you do not know what a canonical name is, you may feel some disturbance in the force of your Java skills for the second one as well. What is the difference between the two? If you want a precise explanation, visit chapter 6.7 of the Java Language Specification. Here we go with something simpler, aimed to be easier to understand, though not as thorough. Let's see some examples:

    package pakage.subpackage.evensubberpackage;
    import org.junit.Assert;
    import org.junit.Test;

    public class WhatIsMyName {
        @Test
        public void classHasName() {
            final Class<?> klass = WhatIsMyName.class;
            final String simpleNameExpected = "WhatIsMyName";
            Assert.assertEquals(simpleNameExpected, klass.getSimpleName());
            final String nameExpected = "pakage.subpackage.evensubberpackage.WhatIsMyName";
            Assert.assertEquals(nameExpected, klass.getName());
            Assert.assertEquals(nameExpected, klass.getCanonicalName());
        }
        ...

This "unit test" just runs fine. But as you can see, there is no difference between the name and the canonical name in this case. (Note that the name of the package is pakage and not package. To test your Java lexical skills, answer the question: why?)
Let's have a look at the next example from the same junit test file:

    @Test
    public void arrayHasName() {
        final Class<?> klass = WhatIsMyName[].class;
        final String simpleNameExpected = "WhatIsMyName[]";
        Assert.assertEquals(simpleNameExpected, klass.getSimpleName());
        final String nameExpected = "[Lpakage.subpackage.evensubberpackage.WhatIsMyName;";
        Assert.assertEquals(nameExpected, klass.getName());
        final String canonicalNameExpected = "pakage.subpackage.evensubberpackage.WhatIsMyName[]";
        Assert.assertEquals(canonicalNameExpected, klass.getCanonicalName());
    }

Now there are differences. When we talk about arrays, the simple name signals it by appending the opening and closing brackets, just like we would do in Java source code. The "normal" name looks a bit weird: it starts with an L, and a semicolon is appended. This reflects the internal representation of class names in the JVM. The canonical name changed similarly to the simple name: it is the same as before, with all the package names as prefix and the brackets appended. It seems that getName() is more the JVM name of the class, while getCanonicalName() is more like the fully qualified name at the Java source level. Let's go on with yet another example (we are still in the same file):

    class NestedClass {}

    @Test
    public void nestedClassHasName() {
        final Class<?> klass = NestedClass.class;
        final String simpleNameExpected = "NestedClass";
        Assert.assertEquals(simpleNameExpected, klass.getSimpleName());
        final String nameExpected = "pakage.subpackage.evensubberpackage.WhatIsMyName$NestedClass";
        Assert.assertEquals(nameExpected, klass.getName());
        final String canonicalNameExpected = "pakage.subpackage.evensubberpackage.WhatIsMyName.NestedClass";
        Assert.assertEquals(canonicalNameExpected, klass.getCanonicalName());
    }

The difference is the dollar sign in the name of the class. Again, the "name" is more what is used by the JVM, and the canonical name is more Java-source-like.
If you compile this code, the Java compiler will generate the files WhatIsMyName.class and WhatIsMyName$NestedClass.class. Even though the class is named NestedClass, it is actually an inner class. However, in the naming there is no difference: a static or non-static class inside another class is named the same way. Now let's see something even more interesting:

    @Test
    public void methodClassHasName() {
        class MethodClass {}
        final Class<?> klass = MethodClass.class;
        final String simpleNameExpected = "MethodClass";
        Assert.assertEquals(simpleNameExpected, klass.getSimpleName());
        final String nameExpected = "pakage.subpackage.evensubberpackage.WhatIsMyName$1MethodClass";
        Assert.assertEquals(nameExpected, klass.getName());
        final String canonicalNameExpected = null;
        Assert.assertEquals(canonicalNameExpected, klass.getCanonicalName());
    }

This time we have a class inside a method. Not a usual scenario, but valid from the Java language point of view. The simple name of the class is just that: the simple name of the class. No big surprise. The "normal" name, however, is interesting. The Java compiler generates a JVM name for the class, and this name contains a number. Why? Because nothing would stop me from having a class with the same name in another method of our test class, and inserting a number is the way to prevent name collisions for the JVM. The JVM does not know or care anything about inner and nested classes or classes defined inside a method; a class is just a class. If you compile the code you will probably see the file WhatIsMyName$1MethodClass.class generated by javac. I had to add "probably" not because I count on the possibility of you being blind, but rather because this name is actually an internal matter of the Java compiler. It may choose a different name-collision-avoidance strategy, though I know of no compiler that differs from the above. The canonical name is the most interesting: it does not exist! It is null. Why?
Because you cannot access this class from outside the method defining it, it does not have a canonical name. Let's go on. What about anonymous classes? They should not have names; after all, that is why they are called anonymous.

    @Test
    public void anonymousClassHasName() {
        final Class<?> klass = new Object() {}.getClass();
        final String simpleNameExpected = "";
        Assert.assertEquals(simpleNameExpected, klass.getSimpleName());
        final String nameExpected = "pakage.subpackage.evensubberpackage.WhatIsMyName$1";
        Assert.assertEquals(nameExpected, klass.getName());
        final String canonicalNameExpected = null;
        Assert.assertEquals(canonicalNameExpected, klass.getCanonicalName());
    }

Indeed, they do not have a simple name: the simple name is the empty string. They do, however, have a name, made up by the compiler. Poor javac has no other choice: it has to make up some name even for unnamed classes, because it has to generate code for the JVM and save it to some file. The canonical name is again null. Are we ready with the examples? No. We have something simple (a.k.a. primitive) at the end: Java primitives.

    @Test
    public void intClassHasName() {
        final Class<?> klass = int.class;
        final String intNameExpected = "int";
        Assert.assertEquals(intNameExpected, klass.getSimpleName());
        Assert.assertEquals(intNameExpected, klass.getName());
        Assert.assertEquals(intNameExpected, klass.getCanonicalName());
    }

If the class represents a primitive, like int (what can be simpler than an int?), then the simple name, "the" name and the canonical name are all int, the name of the primitive. An array of a primitive should be just as simple, shouldn't it?
    @Test
    public void intArrayClassHasName() {
        final Class<?> klass = int[].class;
        final String simpleNameExpected = "int[]";
        Assert.assertEquals(simpleNameExpected, klass.getSimpleName());
        final String nameExpected = "[I";
        Assert.assertEquals(nameExpected, klass.getName());
        final String canonicalNameExpected = "int[]";
        Assert.assertEquals(canonicalNameExpected, klass.getCanonicalName());
    }

Well, it is not that simple. The name is [I, which is a bit mysterious unless you read the respective chapter of the JVM specification. Perhaps I will talk about that another time.

Conclusion

The simple name of the class is simple. The "name" returned by getName() is the one interesting for JVM-level things. getCanonicalName() is the one that looks most like Java source.

You can get the full source code of the example above from the gist e789d700d3c9abc6afa0 on GitHub.

Reference: Name of the class from our JCG partner Peter Verhas at the Java Deep blog....

Typical Mistakes in Java Code

This page contains the most typical mistakes I see in the Java code of people working with me. Static analysis (we're using qulice) can't catch all of the mistakes for obvious reasons, and that's why I decided to list them all here. Let me know if you want to see something else added here, and I'll be happy to oblige. All of the listed mistakes are related to object-oriented programming in general and to Java in particular.

Class Names

Read this short "What is an Object?" article. Your class should be an abstraction of a real-life entity with no "validators", "controllers", "managers", etc. If your class name ends with an "-er" – it's a bad design. And, of course, utility classes are anti-patterns, like StringUtils, FileUtils, and IOUtils from Apache. The above are perfect examples of terrible design. Read this follow-up post: OOP Alternative to Utility Classes. Of course, never add suffixes or prefixes to distinguish between interfaces and classes. For example, all of these names are terribly wrong: IRecord, IfaceEmployee, or RecordInterface. Usually, the interface name is the name of a real-life entity, while the class name should explain its implementation details. If there is nothing specific to say about an implementation, name it Default, Simple, or something similar. For example:

    class SimpleUser implements User {};
    class DefaultRecord implements Record {};
    class Suffixed implements Name {};
    class Validated implements Content {};

Method Names

Methods can either return something or return void. If a method returns something, then its name should explain what it returns (don't ever use the get prefix), for example:

    boolean isValid(String name);
    String content();
    int ageOf(File file);

If it returns void, then its name should explain what it does. For example:

    void save(File file);
    void process(Work work);
    void append(File file, String line);

There is only one exception to the rule just mentioned – test methods for JUnit. They are explained below.
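To make the naming rules above concrete, here is a small made-up class that follows them (Record, its file field and the methods are illustrative examples, not from the original post):

```java
import java.io.File;

// Illustrative class: methods that return something are named after
// what they return (no "get" prefix); void methods are named after
// what they do.
class Record {
    private final File file;

    Record(File file) {
        this.file = file;
    }

    // Returns something: named after the thing returned.
    String name() {
        return this.file.getName();
    }

    boolean isEmpty() {
        return this.file.length() == 0L;
    }

    // Returns void: named after the action it performs.
    void deleteIfEmpty() {
        if (this.isEmpty()) {
            this.file.delete();
        }
    }
}
```

Reading a call site like record.name() or record.deleteIfEmpty() then reads as plain English, which is the point of the convention.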
Test Method Names

Method names in JUnit tests should be created as English sentences without spaces. It's easier to explain by example:

    /**
     * HttpRequest can return its content in Unicode.
     * @throws Exception If test fails
     */
    public void returnsItsContentInUnicode() throws Exception {
    }

It's important to start the first sentence of your JavaDoc with the name of the class you're testing, followed by can. So, your first sentence should always be similar to "somebody can do something". The method name will state exactly the same, but without the subject. If I add a subject at the beginning of the method name, I should get a complete English sentence, as in the above example: "HttpRequest returns its content in unicode". Pay attention that the test method doesn't start with can; only JavaDoc comments start with "can". Additionally, method names shouldn't start with a verb. It's a good practice to always declare test methods as throwing Exception.

Variable Names

Avoid composite names of variables, like timeOfDay, firstItem, or httpRequest. I mean both class variables and in-method ones. A variable name should be long enough to avoid ambiguity in its scope of visibility, but not too long if possible. A name should be a noun in singular or plural form, or an appropriate abbreviation. For example:

    List<String> names;
    void sendThroughProxy(File file, Protocol proto);
    private File content;
    public HttpRequest request;

Sometimes, you may have collisions between constructor parameters and in-class properties, if the constructor saves incoming data in an instantiated object. In this case, I recommend creating abbreviations by removing vowels (see how USPS abbreviates street names). Another example:

    public class Message {
        private String recipient;
        public Message(String rcpt) {
            this.recipient = rcpt;
        }
    }

In many cases, the best hint for the name of a variable can be ascertained by reading its class name.
Just write it with a small letter, and you should be good:

    File file;
    User user;
    Branch branch;

However, never do the same for primitive types, like Integer number or String string. You can also use an adjective when there are multiple variables with different characteristics. For instance:

    String contact(String left, String right);

Constructors

With no exceptions, there should be only one constructor that stores data in object variables. All other constructors should call this one with different arguments. For example:

    public class Server {
        private String address;
        public Server(String uri) {
            this.address = uri;
        }
        public Server(URI uri) {
            this(uri.toString());
        }
    }

One-time Variables

Avoid one-time variables at all costs. By "one-time" I mean variables that are used only once. Like in this example:

    String name = "data.txt";
    return new File(name);

The variable above is used only once, and the code should be refactored to:

    return new File("data.txt");

Sometimes, in very rare cases – mostly for the sake of better formatting – one-time variables may be used. Nevertheless, try to avoid such situations at all costs.

Exceptions

Needless to say, you should never swallow exceptions, but rather let them bubble up as high as possible. Private methods should always let checked exceptions go out. Never use exceptions for flow control. For example, this code is wrong:

    int size;
    try {
        size = this.fileSize();
    } catch (IOException ex) {
        size = 0;
    }

Seriously, what if that IOException says "disk is full"? Will you still assume that the size of the file is zero and move on?

Indentation

For indentation, the main rule is that a bracket should either end a line or be closed on the same line (the reverse rule applies to a closing bracket). For example, the following is not correct, because the first bracket is not closed on the same line and there are symbols after it.
The second bracket is also in trouble, because there are symbols in front of it and it is not opened on the same line:

    final File file = new File(directory,
        "file.txt");

Correct indentation should look like:

    StringUtils.join(
        Arrays.asList(
            "first line",
            "second line",
            StringUtils.join(
                Arrays.asList("a", "b")
            )
        ),
        "separator"
    );

The second important rule of indentation says that you should put as much as possible on one line, within the limit of 80 characters. The example above is not valid, since it can be compacted:

    StringUtils.join(
        Arrays.asList(
            "first line", "second line",
            StringUtils.join(Arrays.asList("a", "b"))
        ),
        "separator"
    );

Redundant Constants

Class constants should be used when you want to share information between class methods, and this information is a characteristic (!) of your class. Don't use constants as a replacement for string or numeric literals – a very bad practice that leads to code pollution. Constants (as with any object in OOP) should have a meaning in the real world. What meaning do these constants have in the real world?

    class Document {
        private static final String D_LETTER = "D"; // bad practice
        private static final String EXTENSION = ".doc"; // good practice
    }

Another typical mistake is to use constants in unit tests to avoid duplicate string/numeric literals in test methods. Don't do this! Every test method should work with its own set of input values. Use new texts and numbers in every new test method. They are independent. So why do they have to share the same input constants?

Test Data Coupling

This is an example of data coupling in a test method:

    User user = new User("Jeff");
    // maybe some other code here
    MatcherAssert.assertThat(user.name(), Matchers.equalTo("Jeff"));

On the last line, we couple "Jeff" with the same string literal from the first line. If, a few months later, someone wants to change the value on the third line, he/she has to spend extra time finding where else "Jeff" is used in the same method.
To avoid this data coupling, you should introduce a variable.

Related Posts

You may also find these posts interesting:

Why NULL is Bad?
Objects Should Be Immutable
OOP Alternative to Utility Classes
Avoid String Concatenation
Simple Java SSH Client

Reference: Typical Mistakes in Java Code from our JCG partner Yegor Bugayenko at the About Programming blog....
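A minimal sketch of the decoupled version, assuming a simple User class like the one in the example (plain assertions stand in for the Hamcrest matchers used above):

```java
// Hypothetical User class mirroring the example above.
class User {
    private final String name;

    User(String name) {
        this.name = name;
    }

    String name() {
        return this.name;
    }
}

class TestDataCouplingExample {
    public static void main(String[] args) {
        // The literal appears exactly once; the assertion reuses the variable,
        // so changing the test value later means editing a single line.
        final String name = "Jeff";
        User user = new User(name);
        // maybe some other code here
        assert user.name().equals(name);
        System.out.println(user.name());
    }
}
```

Run with assertions enabled (java -ea TestDataCouplingExample) so the check actually fires.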

Cross-dysfunctional teams

Every agile enthusiast will tell you how powerful a self-empowered cross-functional team can be. Once you have one, it brings complete team accountability from product idea to customer support, it naturally grows with continuous improvement, and it finds self-motivation in innovation and delivery of customer value. It's a beautiful and powerful concept; the practical implementation sometimes is not so beautiful, and more often than not what you get is a cross-dysfunctional team. Let's have a look at the cross-dysfunctional examples I have experienced.

Pseudo specialist cross-dysfunctional team

Developer: "I am a developer, I am not meant to test, the testers test!"
Tester: "I don't need to know anything about how the product is designed, I only care about how the customers use it!"
Business Analyst: "I am not technical, I can't help you guys!"

"As Long As It Works for us" cross-dysfunctional team

Developer: "It works in our environments, it's operations' responsibility to make it work in production"
Tester: "Listen, it worked in UAT, it must be a configuration issue, or a missing firewall hole, and nothing I could have spotted during testing…"
Customer: "Hello! Nothing works here…"

Abdicating cross-dysfunctional team

Developer: "The architect told me to do it like this"
Tester: "Feck it, let the Test Manager deal with it"
Business Analyst: "I don't think there is any value in this story, but the Product Owner wants it, so get on with it and develop it!"

Continuous Decline cross-dysfunctional team

Developer: "No point in doing retrospectives, things are always the same"
Tester: "We DON'T HAVE TIME to try new things!"
Business Analyst: "We do it like this, because that's how we do things here, and that's it!"

Disintegrated cross-dysfunctional team

Developer: "My code works perfectly, it's their system that doesn't work, who cares"
Tester: "We have 100% coverage, all our tests pass and we have no bugs; if the developers of system X are idiots, there is nothing we can do about it"
Customer: "And still, nothing works here…"

Nazi cross-dysfunctional team

Developer: "Testers are failed programmers, they shouldn't be called engineers"
Tester: "Developers are only able to produce bugs, the world would be better with more testers and fewer developers"
Business Analyst: "I don't know why I bother talking to testers and developers, they are total idiots"

Do you recognise your team in one of the categories above? What have you done until now to help your team change? Little? Nothing? But you are still bitching about it, aren't you? Remember, you are very powerful and can become the change that you want to see.

Reference: Cross-dysfunctional teams from our JCG partner Augusto Evangelisti at the mysoftwarequality blog....

Monitoring Akka with Kamon

I like the JVM a lot because there are a lot of tools available for inspecting a running JVM instance at runtime. Java Mission Control (jmc) is one of my favorite tools when it comes to monitoring threads, hot methods and memory allocation. However, these tools are of limited use when monitoring an event-driven, message-based system like Akka. A thread is almost meaningless, as it could have processed any kind of message. Luckily, there are some tools out there to fill this gap. Even though the Akka docs are really extensive and useful, there isn't a lot about monitoring. I'm more of a Dev than an Ops guy, so I will only give a brief and "I think it does this" introduction to the monitoring-storage-gathering-displaying stuff.

The Big Picture

First of all, when we are done we will have this infrastructure running. Thanks to Docker, we don't have to configure anything on the right-hand side to get started.

Kamon

Starting on the left of the picture: Kamon is a library which uses AspectJ to hook into method calls made by the ActorSystem and record events of different types. The Kamon docs have some big gaps, but you can get a feeling for what is possible. I will not make any special configuration and will just use the defaults to get started as fast as possible.

StatsD – Graphite

StatsD is a network daemon that runs on the Node.js platform and listens for statistics, like counters and timers, sent over UDP, and sends aggregates to one or more pluggable backend services. Kamon also provides other backends (Datadog, New Relic) to report to. For this tutorial we stick with the free StatsD server and Graphite as the backend service.

Grafana

Grafana is a frontend for displaying your stats logged to Graphite. There is a nice demo you can play around with. I will give detailed instructions on how to add your metrics to our Grafana dashboard.

Getting started

First we need an application we can monitor. I'm using my akka-kamon-activator.
Check out the code:

git clone git@github.com:muuki88/activator-akka-kamon.git

The application contains two message generators: one for peaks and one for constant load. Two types of actors handle these messages. One creates random numbers, and the child actors calculate the prime factors.

Kamon Dependencies and sbt-aspectj

First we add the kamon dependencies via

val kamonVersion = "0.3.4"

libraryDependencies ++= Seq(
    "com.typesafe.akka" %% "akka-actor" % "2.3.5",
    "io.kamon" %% "kamon-core" % kamonVersion,
    "io.kamon" %% "kamon-statsd" % kamonVersion,
    "io.kamon" %% "kamon-log-reporter" % kamonVersion,
    "io.kamon" %% "kamon-system-metrics" % kamonVersion,
    "org.aspectj" % "aspectjweaver" % "1.8.1"
)

Next we configure the sbt-aspectj-plugin to weave our code at compile time. First add the plugin to your plugins.sbt:

addSbtPlugin("com.typesafe.sbt" % "sbt-aspectj" % "0.9.4")

And now we configure it:

aspectjSettings

javaOptions <++= AspectjKeys.weaverOptions in Aspectj

// when you call "sbt run" aspectj weaving kicks in
fork in run := true

The last step is to configure what should be recorded. Open up the application.conf where your akka configuration resides. Kamon uses the kamon configuration key.

kamon {

  # What should be recorded
  metrics {
    filters = [
      {
        # actors that should be monitored
        actor {
          includes = [ "user/*", "user/worker-*" ] # a list of what should be included
          excludes = [ "system/*" ]                # a list of what should be excluded
        }
      },

      # not sure about this yet. Looks important
      {
        trace {
          includes = [ "*" ]
          excludes = []
        }
      }
    ]
  }

  # ~~~~~~ StatsD configuration ~~~~~~~~~~~~~~~~~~~~~~~~

  statsd {
    # Hostname and port on which your StatsD is running. Remember that StatsD packets are sent using UDP;
    # unreachable hosts and/or closed ports won't be warned about by Kamon, your data just won't go anywhere.
    hostname = ""
    port = 8125

    # Interval between metrics data flushes to StatsD. Its value must be equal to or greater than the
    # kamon.metrics.tick-interval setting.
    flush-interval = 1 second

    # Max packet size for UDP metrics data sent to StatsD.
    max-packet-size = 1024 bytes

    # Subscription patterns used to select which metrics will be pushed to StatsD. Note that first, metrics
    # collection for your desired entities must be activated under the kamon.metrics.filters settings.
    includes {
      actor = [ "*" ]
      trace = [ "*" ]
      dispatcher = [ "*" ]
    }

    simple-metric-key-generator {
      # Application prefix for all metrics pushed to StatsD. The default namespacing scheme for metrics follows
      # this pattern:
      # application.host.entity.entity-name.metric-name
      application = "yourapp"
    }
  }
}

Our app is ready to run. But first, we deploy our monitoring backend.

Monitoring Backend

As we saw in the first picture, we need a lot of stuff running in order to store our log events. The libraries and components used are most likely reasonable, and you (or the more Ops than Dev guy) will have to configure them. But for the moment we just fire them all up at once in a simple Docker container. I don't run it in detached mode, so I can see what's going on:

docker run -v /etc/localtime:/etc/localtime:ro -p 80:80 -p 8125:8125/udp -p 8126:8126 -p 8083:8083 -p 8086:8086 -p 8084:8084 --name kamon-grafana-dashboard muuki88/grafana_graphite:latest

My image is based on a fork of the original Docker image by kamon.

Run and Build the Dashboard

Now go to your running Grafana instance at localhost. You will see a default dashboard, which we will use to display the average time-in-mailbox. Click on the title of the graph ("First Graph (click title to edit)"). Now select the metrics like this:

And that's it!

Reference: Monitoring Akka with Kamon from our JCG partner Nepomuk Seiler at the mukis.de blog....

AngularJS Tutorial: Getting Started with AngularJS

AngularJS is a popular JavaScript framework for building Single Page Applications (SPAs). AngularJS provides the following features, which make developing web apps easy:

Two-way data binding
Dependency Injection
Custom HTML Directives
Easy integration with REST webservices using $http, $resource, Restangular etc.
Support for Testing

and many more… Though there are a lot more features than the ones listed above, these are the most commonly used. I am not going to explain what two-way data binding is or how $scope works here, because there are tons of materials on the web already. As a Java developer, I will be using SpringBoot based RESTful back-end services. If you want, you can use JavaEE/JAX-RS to build the REST back-end services. Also, you might like using NetBeans, as it has wonderful AngularJS auto-completion support out of the box. So let's get started coding an AngularJS HelloWorld application. Create index.html with the following content, start your server, and point your browser to http://localhost:8080/hello-angularjs/index.html:

<html>
<head>
    <title>Hello AngularJS</title>
    <meta charset="UTF-8">
    <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.2.23/angular.min.js"></script>
</head>
<body ng-app>
    <p>Enter Name: <input type="text" ng-model="myname"> </p>
    <p>Hello {{myname}}!!</p>
</body>
</html>

Now start typing in the input text field, and Hello {{myname}} will immediately reflect the value you are entering. Ok, we are done with the "HelloWorld" ceremony and warm-up! We have used the AngularJS CDN URL for loading the AngularJS library. We could download AngularJS from https://angularjs.org/ and add the angular.min.js script. But we will be using WebJars (http://www.webjars.org/), which provides the popular JavaScript libraries as Maven jar modules along with their transitive dependencies. If we want to use Twitter Bootstrap, we should include jQuery also.
But using WebJars, I need to configure only the bootstrap jar dependency, and it will pull in the jquery dependency for me. Let us create a SpringBoot project by selecting File -> New -> Spring Starter Project, selecting the "Web" and "Data JPA" modules, and clicking Finish. If you are not using STS, you can create this starter template from http://start.spring.io/ and download it as a zip. We will be using the Bootstrap and font-awesome libraries to build our web UI. Let's configure the H2 database, AngularJS, Bootstrap and font-awesome libraries as WebJar maven dependencies in pom.xml. As it is a SpringBoot jar-type application, we will put all our html pages in the src/main/resources/public folder and all our javascripts, css and images in the src/main/resources/static folder.

Now modify the AngularJS CDN reference to <script src="webjars/angularjs/1.2.19/angular.js"></script>. Let's include the bootstrap and font-awesome css/js in our index.html. Also, we will be using the angular-route module for page navigation, and hence we need to include angular-route.js as well. Let's create an app.js file, which contains our main angularjs module configuration, in the src/main/resources/static/js folder. Also create controllers.js, services.js, filters.js and directives.js in the same folder and include them in index.html. SpringBoot will serve the static content from the src/main/resources/static folder.
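The pom.xml WebJar dependencies mentioned above are not shown in the text; a plausible sketch, with version numbers inferred from the webjars script/css paths used in the index.html listings (treat the exact coordinates and versions as assumptions):

```xml
<!-- Hypothetical WebJar dependencies; versions inferred from the
     webjars/... paths referenced in index.html. -->
<dependency>
    <groupId>org.webjars</groupId>
    <artifactId>angularjs</artifactId>
    <version>1.2.19</version>
</dependency>
<dependency>
    <groupId>org.webjars</groupId>
    <artifactId>bootstrap</artifactId>
    <version>3.2.0</version>
    <!-- bootstrap's WebJar pulls in the jquery WebJar transitively -->
</dependency>
<dependency>
    <groupId>org.webjars</groupId>
    <artifactId>font-awesome</artifactId>
    <version>4.1.0</version>
</dependency>
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <scope>runtime</scope>
</dependency>
```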
<!doctype html>
<html>
<head>
    <meta charset="utf-8"/>
    <title>DashBoard</title>
    <link rel="stylesheet" href="webjars/bootstrap/3.2.0/css/bootstrap.css"/>
    <link rel="stylesheet" href="webjars/font-awesome/4.1.0/css/font-awesome.css"/>
    <link rel="stylesheet" href="css/styles.css"/>
</head>
<body ng-app>
    <p>Enter Name: <input type="text" ng-model="myname"> </p>
    <p>Hello {{myname}}!!</p>

    <script src="webjars/jquery/1.11.1/jquery.js"></script>
    <script src="webjars/bootstrap/3.2.0/js/bootstrap.js"></script>
    <script src="webjars/angularjs/1.2.19/angular.js"></script>
    <script src="webjars/angularjs/1.2.19/angular-route.js"></script>
    <script src="js/app.js"></script>
    <script src="js/controllers.js"></script>
    <script src="js/services.js"></script>
</body>
</html>

In Application.java, add the following RequestMapping to map the context root to index.html:

package com.sivalabs.app;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;

@Configuration
@ComponentScan
@EnableAutoConfiguration
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

@Controller
class HomeController {
    @RequestMapping("/")
    public String home() {
        return "index.html";
    }
}

Now run this Application.java as a stand-alone class and go to http://localhost:8080/. It should work the same as earlier. Now we have the basic setup ready. Let's build a very simple Todo application. Create a JPA entity Todo.java, its Spring Data JPA repository interface, and a TodoController to perform Read/Create/Delete operations.
package com.sivalabs.app.entities;

@Entity
public class Todo {
    @Id
    @GeneratedValue(strategy=GenerationType.AUTO)
    private Integer id;

    private String description;

    @Temporal(TemporalType.TIMESTAMP)
    private Date createdOn = new Date();

    //setters and getters
}

package com.sivalabs.app.repos;

public interface TodoRepository extends JpaRepository<Todo, Integer> {
}

package com.sivalabs.app.controllers;

import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;
import com.sivalabs.app.entities.Todo;
import com.sivalabs.app.repos.TodoRepository;

@RestController
@RequestMapping("/todos")
public class TodoController {

    @Autowired
    private TodoRepository todoRepository;

    @RequestMapping(value="", method=RequestMethod.GET)
    public List<Todo> todos() {
        return todoRepository.findAll();
    }

    @RequestMapping(value="", method=RequestMethod.POST)
    public Todo create(@RequestBody Todo todo) {
        return todoRepository.save(todo);
    }

    @RequestMapping(value="/{id}", method=RequestMethod.DELETE)
    public void delete(@PathVariable("id") Integer id) {
        todoRepository.delete(id);
    }
}

Create a DatabasePopulator to set up some initial data.
package com.sivalabs.app;

import java.util.Arrays;
import java.util.Date;
import javax.annotation.PostConstruct;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import com.sivalabs.app.entities.Todo;
import com.sivalabs.app.repos.TodoRepository;

@Component
public class DatabasePopulator {

    @Autowired
    private TodoRepository todoRepository;

    @PostConstruct
    void init() {
        try {
            Todo t1 = new Todo(null, "Task 1", new Date());
            Todo t2 = new Todo(null, "Task 2", new Date());
            Todo t3 = new Todo(null, "Task 3", new Date());
            this.todoRepository.save(Arrays.asList(t1, t2, t3));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Now our back-end RESTful web services are ready at the following URLs:

GET – http://localhost:8080/todos to get the list of Todos
POST – http://localhost:8080/todos to create a new Todo
DELETE – http://localhost:8080/todos/1 to delete Todo(id:1)

Let's create our main angularjs module 'myApp' and configure our application routes in the app.js file.

var myApp = angular.module('myApp', ['ngRoute']);

myApp.config(['$routeProvider', '$locationProvider',
  function($routeProvider, $locationProvider) {
    $routeProvider
      .when('/home', {
        templateUrl: 'templates/home.html',
        controller: 'TodoController'
      })
      .otherwise({
        redirectTo: 'home'
      });
}]);

Now update index.html to hook up the myApp module at the root of the page using ng-app="myApp", and use the ng-view directive to load the current route template.
<!doctype html>
<html ng-app="myApp">
<head>
    <meta charset="utf-8"/>
    <title>DashBoard</title>
    <link rel="stylesheet" href="webjars/bootstrap/3.2.0/css/bootstrap.css"/>
    <link rel="stylesheet" href="webjars/font-awesome/4.1.0/css/font-awesome.css"/>
    <link rel="stylesheet" href="css/styles.css"/>
</head>
<body>
    <div class="container">
        <div ng-view></div>
    </div>

    <script src="webjars/jquery/1.11.1/jquery.js"></script>
    <script src="webjars/bootstrap/3.2.0/js/bootstrap.js"></script>
    <script src="webjars/angularjs/1.2.19/angular.js"></script>
    <script src="webjars/angularjs/1.2.19/angular-route.js"></script>
    <script src="js/app.js"></script>
    <script src="js/controllers.js"></script>
    <script src="js/services.js"></script>
</body>
</html>

Create the home.html template in the src/main/resources/public/templates folder:

<div class="col-md-8 col-md-offset-2">

    <form class="form-horizontal" role="form">
        <div class="form-group form-group-md">
            <div class="col-md-10">
                <input type="text" class="form-control" ng-model="newTodo.description">
            </div>
            <button class="btn btn-primary" ng-click="addTodo(newTodo)">Add</button>
        </div>
    </form>

    <table class="table table-striped table-bordered table-hover">
        <thead>
            <tr>
                <th width="70%">Item</th>
                <th>Date</th>
                <th>Delete</th>
            </tr>
        </thead>
        <tbody>
            <tr ng-repeat="todo in todos">
                <td>{{todo.description}}</td>
                <td>{{todo.createdOn | date}}</td>
                <td><button class="btn btn-danger" ng-click="deleteTodo(todo)">Delete</button></td>
            </tr>
        </tbody>
    </table>
    <br/>
</div>

It is a very simple html page with some Bootstrap styles, and we are using a few AngularJS features: the ng-repeat directive to iterate through the array of Todo JSON objects, and the ng-click directive to bind a callback function to a button click. To invoke the REST services we will use the built-in AngularJS $http service. Note that $http is part of the core angular.js; it is the routing module used for page navigation that resides in angular-route.js, which we have already included in index.html.
The general $http usage pattern looks like this (verb stands for get, post, delete, etc.):

$http.verb('URI')
  .success(function(data, status, headers, config) {
    //use data
  })
  .error(function(data, status, headers, config) {
    alert('Error loading data');
  });

For example, to make a GET /todos REST call:

$http.get('todos')
  .success(function(data, status, headers, config) {
    //use data
  })
  .error(function(data, status, headers, config) {
    alert('Error loading data');
  });

Create the TodoController in the controllers.js file. In TodoController we will create functions to load/create/delete Todos.

angular.module('myApp')
  .controller('TodoController', ['$scope', '$http', function($scope, $http) {
    $scope.newTodo = {};

    $scope.loadTodos = function() {
      $http.get('todos')
        .success(function(data, status, headers, config) {
          $scope.todos = data;
        })
        .error(function(data, status, headers, config) {
          alert('Error loading Todos');
        });
    };

    $scope.addTodo = function() {
      $http.post('todos', $scope.newTodo)
        .success(function(data, status, headers, config) {
          $scope.newTodo = {};
          $scope.loadTodos();
        })
        .error(function(data, status, headers, config) {
          alert('Error saving Todo');
        });
    };

    $scope.deleteTodo = function(todo) {
      $http.delete('todos/' + todo.id)
        .success(function(data, status, headers, config) {
          $scope.loadTodos();
        })
        .error(function(data, status, headers, config) {
          alert('Error deleting Todo');
        });
    };

    $scope.loadTodos();
}]);

Now point your browser to http://localhost:8080/. You should see the list of Todos, a New Todo entry form, and a Delete option for each Todo item. By now we have gotten some hands-on experience with basic AngularJS features. In the next post I will explain using multiple routes, multiple controllers and services. Stay tuned!

Reference: AngularJS Tutorial: Getting Started with AngularJS from our JCG partner Siva Reddy at the My Experiments on Technology blog....
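One detail worth noting: DatabasePopulator calls a three-argument Todo constructor that the entity listing (with its elided setters and getters) doesn't show. A minimal sketch of what that constructor might look like, with the JPA annotations omitted for brevity (the field names follow the entity above; treat the exact shape as an assumption):

```java
import java.util.Date;

// JPA annotations (@Entity, @Id, @GeneratedValue, @Temporal) omitted
// so the sketch compiles stand-alone.
class Todo {
    private Integer id;
    private String description;
    private Date createdOn = new Date();

    // JPA requires a no-argument constructor.
    public Todo() {
    }

    // Convenience constructor matching the DatabasePopulator calls;
    // id is passed as null so the database generates it.
    public Todo(Integer id, String description, Date createdOn) {
        this.id = id;
        this.description = description;
        this.createdOn = createdOn;
    }

    public Integer getId() { return id; }
    public String getDescription() { return description; }
    public Date getCreatedOn() { return createdOn; }
}
```

With this in place, new Todo(null, "Task 1", new Date()) works exactly as used in the populator.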

Programming Language Job Trends Part 3 – August 2014

After a slight delay, we finally get to the third part of the programming language job trends. Today we review Erlang, Groovy, Scala, Lisp, and Clojure. If you do not see some of the more popular languages, take a look at Part 1 and Part 2. Lisp is included almost as a baseline, because it has had sustained usage for decades but never enough to get into the mainstream. Go and Haskell are still not included due to the noise in the data and the current lack of demand. Most likely, Go will be included in the next update, assuming I can craft a good query for it. If there is an emerging language that you think should be included, please let me know in the comments. To start, we look at the long-term trends from Indeed.com:

Much like the previous two parts of this series, there is a definite downward trend for this group of languages. The trend is not nearly as negative as in the previous two posts, but it is there. Groovy demand seems to be a bit cyclical, with new peaks every few months, and it still leads this pack. Scala has followed the same general trend as Groovy while keeping a large lead on the rest of the pack. Clojure has stayed fairly flat for the past two years, which has allowed it to take a lead over Erlang. Erlang has slowly declined since its peak in early 2011, barely maintaining a lead over Lisp. Lisp has been in a very slow decline over the past 4 years. Unlike the previous two parts, the short-term trends from SimplyHired.com provide decent data:

As you can see, SimplyHired is showing similar downward trends to Indeed for Groovy and Scala. However, the Clojure, Erlang and Lisp trends look much flatter. Clojure has been leading Erlang and Lisp since the middle of 2013 and looks to be increasing its lead while the others decline. Here, Lisp is in a flatter trend, which lets it overtake Erlang in the past few months. Erlang seems to be in a bit of a lull after a slight rise at the beginning of 2014.
Lastly, we look at the relative growth from Indeed.com:

Groovy maintains a very high growth trend, but it is definitely lessening in 2014. Scala is showing very strong growth at just over 10,000%, having been somewhat flat overall since late 2011. Much like in the other graphs, Clojure growth is outpacing Erlang, sitting above 5000%. Erlang growth is on a negative trend, falling below 500% for the first time since early 2011. Lisp is barely registering on this graph, as it is not really growing, staying just barely positive. While most of these languages continue to grow, the trends for Erlang are not a good thing. Steadily decreasing growth for the past 3 years points to a language that will eventually become niche. Given that the overall demand was never that high, future prospects for Erlang are unpleasant. Overall, these trends and the previous two installments make industry growth look flat. Granted, much of this is due to the breadth of languages being used, but even emerging languages are not seeing the same type of increasing growth. If you look at languages like Go and Haskell, the trends are not that much better. Go is definitely growing, but Haskell is not. It is possible that both of these languages get included in our next update. Clojure growth is definitely interesting, as it seems to be one of the few positive trends in all of the job trends. I would not be surprised if Clojure starts separating itself from the bottom pack before the next update.

Reference: Programming Language Job Trends Part 3 – August 2014 from our JCG partner Rob Diana at the Regular Geek blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.