What's New Here?


Pizza problem – builder vs decorator

Problem Statement: We need to build the software for a pizza company that prepares different types of pizzas, e.g. Chicken Pizza, Flat Bread, Pepperoni Pizza with Extra Cheese, and puts add-on toppings on them. Let's see which design pattern suits this problem statement, and under what scenario. Traditionally, the builder pattern is most commonly used for the pizza problem. However, there are examples using the decorator as well; both approaches are correct, but they differ in use case. Builder is an object creation pattern, whereas decorator is used to change an already built object at runtime. Let's try to understand this through the examples.

Builder Pattern: Here the use case is that the pizza is prepared in one go, with set specifications. The Pizza class:

public class Pizza {
    private float totalPrice = 0;
    private Size size;
    private Topping topping;
    private Crust crust;
    private Cheese cheese;

    public Size getSize() { return size; }
    public void setSize(Size size) { this.size = size; }
    public Topping getTopping() { return topping; }
    public void setTopping(Topping topping) { this.topping = topping; }
    public Crust getCrust() { return crust; }
    public void setCrust(Crust crust) { this.crust = crust; }
    public Cheese getCheese() { return cheese; }
    public void setCheese(Cheese cheese) { this.cheese = cheese; }
    public float getTotalPrice() { return totalPrice; }
    public void addToPrice(float price) { this.totalPrice = totalPrice + price; }
}

The 4 enum classes:

public enum Cheese {
    AMERICAN { public float getCost() { return 40; } },
    ITALIAN { public float getCost() { return 60; } };
    public abstract float getCost();
}

public enum Crust {
    THIN { public float getCost() { return 70; } },
    STUFFED { public float getCost() { return 90; } };
    public abstract float getCost();
}

public enum Size {
    MEDIUM { public float getCost() { return 100; } },
    LARGE { public float getCost() { return 160; } };
    public abstract float getCost();
}

public enum Topping {
    PEPPERONI { public float getCost() { return 30; } },
    CHICKEN { public float getCost() { return 35; } },
    MUSHROOM { public float getCost() { return 20; } };
    public abstract float getCost();
}

The PizzaBuilder class:

public class PizzaBuilder {
    Pizza pizza = new Pizza();

    public PizzaBuilder withTopping(Topping topping) {
        pizza.setTopping(topping);
        pizza.addToPrice(topping.getCost());
        return this;
    }

    public PizzaBuilder withSize(Size size) {
        pizza.setSize(size);
        pizza.addToPrice(size.getCost());
        return this;
    }

    public PizzaBuilder withCrust(Crust crust) {
        pizza.setCrust(crust);
        pizza.addToPrice(crust.getCost());
        return this;
    }

    public Pizza build() { return pizza; }

    public double calculatePrice() { return pizza.getTotalPrice(); }
}

The test case:

public class PizzaBuilderTest {
    @Test
    public void shouldBuildThinCrustChickenPizza() {
        Pizza pizza = new PizzaBuilder()
                .withCrust(Crust.THIN)
                .withTopping(Topping.CHICKEN)
                .withSize(Size.LARGE)
                .build();
        assertEquals(Topping.CHICKEN, pizza.getTopping());
        assertEquals(Size.LARGE, pizza.getSize());
        assertEquals(Crust.THIN, pizza.getCrust());
        assertEquals(265.0, pizza.getTotalPrice(), 0);
    }
}

Decorator Pattern: The decorator pattern is used to add or remove functionality or responsibilities from an object dynamically, without impacting the original object. The use case here is that some base pizza is prepared first, and then different specifications are added to it. We need an interface (Pizza) for the BasicPizza (concrete component) that we want to decorate, and a PizzaDecorator class that holds a reference field of the decorated Pizza interface.
The Pizza interface:

public interface Pizza {
    public String bakePizza();
    public float getCost();
}

The base pizza implementation:

public class BasePizza implements Pizza {
    public String bakePizza() { return "Basic Pizza"; }
    public float getCost() { return 100; }
}

The PizzaDecorator class:

public class PizzaDecorator implements Pizza {
    Pizza pizza;

    public PizzaDecorator(Pizza newPizza) {
        this.pizza = newPizza;
    }

    public String bakePizza() {
        return pizza.bakePizza();
    }

    @Override
    public float getCost() {
        return pizza.getCost();
    }
}

The 2 decorators, Mushroom and Pepperoni:

public class Mushroom extends PizzaDecorator {
    public Mushroom(Pizza newPizza) {
        super(newPizza);
    }

    @Override
    public String bakePizza() {
        return super.bakePizza() + " with Mushroom Toppings";
    }

    @Override
    public float getCost() {
        return super.getCost() + 80;
    }
}

public class Pepperoni extends PizzaDecorator {
    public Pepperoni(Pizza newPizza) {
        super(newPizza);
    }

    @Override
    public String bakePizza() {
        return super.bakePizza() + " with Pepperoni Toppings";
    }

    @Override
    public float getCost() {
        return super.getCost() + 110;
    }
}

The test case:

public class PizzaDecoratorTest {
    @Test
    public void shouldMakePepperoniPizza() {
        Pizza pizza = new Pepperoni(new BasePizza());
        assertEquals("Basic Pizza with Pepperoni Toppings", pizza.bakePizza());
        assertEquals(210.0, pizza.getCost(), 0);
    }
}

The difference: Patterns like builder and factory (and abstract factory) are used for the creation of objects, while patterns like decorator (a structural design pattern) are used for extensibility, i.e. to make structural changes to already created objects. Both kinds of pattern largely favour composition over inheritance, so citing that as a reason for choosing builder over decorator makes no sense; both provide behaviour at runtime rather than inheriting it. One should use builder when one wants to limit object creation to certain properties/features.
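One thing the decorator test above does not show is that decorators stack at runtime. Below is a self-contained condensation of the classes above, trimmed to costs only (the class StackedDecorators and its main method are mine, not from the original post):

```java
// Condensed versions of the article's decorator classes, costs only.
interface Pizza {
    float getCost();
}

class BasePizza implements Pizza {
    public float getCost() { return 100; }
}

class PizzaDecorator implements Pizza {
    private final Pizza pizza;
    PizzaDecorator(Pizza newPizza) { this.pizza = newPizza; }
    public float getCost() { return pizza.getCost(); }
}

class Pepperoni extends PizzaDecorator {
    Pepperoni(Pizza newPizza) { super(newPizza); }
    public float getCost() { return super.getCost() + 110; }
}

class Mushroom extends PizzaDecorator {
    Mushroom(Pizza newPizza) { super(newPizza); }
    public float getCost() { return super.getCost() + 80; }
}

public class StackedDecorators {
    public static void main(String[] args) {
        // Each constructor wraps the previous object; nothing is rebuilt.
        Pizza pizza = new Mushroom(new Pepperoni(new BasePizza()));
        System.out.println(pizza.getCost()); // 100 + 110 + 80 = 290.0
    }
}
```

The wrapping order is free: new Pepperoni(new Mushroom(new BasePizza())) costs the same 290, and at no point is there a builder-style freeze on the object's creation.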
For example, there are 4-5 attributes that are mandatory to set before the object is created, or we want to freeze object creation until certain attributes have been set. Basically, use it instead of a constructor, as Joshua Bloch states in Effective Java, 2nd Edition. The builder exposes the attributes the generated object should have, but hides how to set them. Decorator is used to add new features to an existing object to create a new object. There is no restriction of freezing the object until all its features are added. Both use composition, so they might look similar, but they differ largely in their use case and intention. Another approach could be to use the factory pattern, if we do not want to expose the attributes and want the creation of a certain pizza to happen "magically" inside, based on some attributes. We will explore this implementation using the factory pattern in a later post. Reference: Pizza problem – builder vs decorator from our JCG partner Anirudh Bhatnagar at the anirudh bhatnagar blog....

Resolving the dual ItemClick conundrum for the Android Gallery View

Yes, the Gallery view is deprecated; however, there currently isn't an out-of-the-box replacement for what it provides, which is a center-locked, horizontally scrolling list of items. The Twoway-view project has some potential in addressing this need, but it needs time to mature a bit. There are many issues with the Gallery view, but one of the most significant arises if you need the ability to show context menus for an item. If you are using it with a D-Pad (say, in an Android TV application), and you long press on an item using D-PAD OK or BUTTON-A, you'll get both ItemLongClick and ItemClick events fired by the gallery. This has been a long-standing bug within the app, and has plagued me for years. I finally tracked it down and created the following gist that can be applied to a custom version of the Gallery.

/**
 * This change should work in both the original Gallery code as well as in the various open source
 * extensions and variations.
 *
 * Find the dispatchLongPress method in the Gallery class file and update it to the following.
 *
 * Make sure that longPressHandled is a field available to all the other methods.
 */
boolean longPressHandled = false;

private boolean dispatchLongPress(View view, int position, long id) {
    longPressHandled = false;

    if (mOnItemLongClickListener != null) {
        longPressHandled = mOnItemLongClickListener.onItemLongClick(this, mDownTouchView,
                mDownTouchPosition, id);
    }

    if (!longPressHandled) {
        mContextMenuInfo = new AdapterContextMenuInfo(view, position, id);
        longPressHandled = super.showContextMenuForChild(this);
    }

    if (longPressHandled) {
        performHapticFeedback(HapticFeedbackConstants.LONG_PRESS);
    }

    return longPressHandled;
}

/**
 * Find the onKeyUp method and change it to the following.
 *
 * Since dispatchLongPress is fired before the onKeyUp event, we have
 * to check to see if the onItemLongClickListener handled the event.
 * If so, do not performItemClick, and always reset longPressHandled to false.
 */
@Override
public boolean onKeyUp(int keyCode, KeyEvent event) {
    switch (keyCode) {
    case KeyEvent.KEYCODE_DPAD_CENTER:
    case KeyEvent.KEYCODE_ENTER: {
        if (mReceivedInvokeKeyDown) {
            if (mItemCount > 0) {
                dispatchPress(mSelectedChild);
                postDelayed(new Runnable() {
                    @Override
                    public void run() {
                        dispatchUnpress();
                    }
                }, ViewConfiguration.getPressedStateDuration());

                int selectedIndex = mSelectedPosition - mFirstPosition;

                if (!longPressHandled) {
                    performItemClick(getChildAt(selectedIndex), mSelectedPosition,
                            mAdapter.getItemId(mSelectedPosition));
                }
            }
        }

        // Clear the flags
        mReceivedInvokeKeyDown = false;
        longPressHandled = false;

        return true;
    }
    }

    return super.onKeyUp(keyCode, event);
}

Basically, what was happening is that the onKeyUp event that fired the ItemClick event never checked to see whether dispatchLongPress had already handled the event, so it was always firing the ItemClick event. The gist makes this small change, and actually eliminates numerous workarounds that had to be done to get context menus to show with the gallery. This bug only affects you if you need to use a D-PAD, remote control, or game controller as an input device; normal touch screen events are handled correctly. Reference: Resolving the dual ItemClick conundrum for the Android Gallery View from our JCG partner David Carver at the Intellectual Cramps blog....

15 Java Socket Programming, Networking Interview Questions and Answers

Networking and socket programming is one of the most important areas of the Java programming language, especially for programmers working on client/server applications. Detailed knowledge of important protocols, e.g. TCP and UDP, is essential, especially if you are in the business of writing high-frequency trading applications, which communicate via FIX Protocol or a native exchange protocol. In this article, we will discuss some of the frequently asked questions on networking and socket programming, mostly based around the TCP/IP protocol. This article is somewhat light on NIO, though, as it doesn't include questions regarding multiplexing, selectors, ByteBuffer, FileChannel etc., but it does include classic questions like the difference between IO and NIO. The main focus of this post is to make a Java developer familiar with the low-level parts, e.g. how the TCP and UDP protocols work, socket options, and writing multi-threaded servers in Java. The questions discussed here are not really tied to the Java programming language, and can be used with any programming language that allows programmers to write client-server applications. By the way, if you are going for an interview at an investment bank for a core Java developer role, you had better prepare well on Java NIO, socket programming, TCP, UDP and networking, along with other popular topics, e.g. multi-threading, the Collections API and garbage collection tuning. You can also contribute any question that you were asked, or that relates to socket programming and networking and could be useful for Java interviews. Java Networking and Socket Programming Questions and Answers: Here is my list of 15 interview questions related to networking basics, internet protocols and socket programming in Java. Though it doesn't contain basic questions from the API, e.g.
Server and ServerSocket, it focuses on the higher-level concerns of writing a scalable server in Java using NIO selectors, how to implement that using threads, and the limitations and issues involved. I will probably add a few more questions based on best practices for writing socket-based applications in Java. If you know a good question on this topic, feel free to suggest it.

1. Difference between TCP and UDP protocol? There are many differences between TCP (Transmission Control Protocol) and UDP (User Datagram Protocol), but the main one is that TCP is connection-oriented, while UDP is connectionless. This means TCP provides guaranteed delivery of messages in the order they were sent, while UDP doesn't provide any delivery guarantee. Because of this guarantee, TCP is slower than UDP, as it needs to perform more work. TCP is best suited for messages you can't afford to lose, e.g. order and trade messages in electronic trading, wire transfers in banking and finance etc. UDP is more suited for media transmission, where the loss of one packet, known as a datagram, is affordable and doesn't affect quality of service. This answer is enough for most interviews, but you need to be more detailed when you are interviewing as a Java developer for a high-frequency trading desk. Points many candidates forget to mention concern ordering and data boundaries. In TCP, messages are guaranteed to be delivered in the same order as they are sent, but data boundaries are not preserved, which means multiple messages can be combined and sent together, or the receiver may get one part of a message in one packet and the other part in the next packet. The application will still receive the full message, in the same order: the TCP protocol does the assembling of the message for you.
On the other hand, UDP sends a full message in a datagram packet; if a client receives the packet, it is guaranteed to get the full message, but there is no guarantee that packets will arrive in the same order they were sent. In short, you must mention the following differences between the TCP and UDP protocols when answering in an interview:

TCP is guaranteed delivery, UDP is not.
TCP guarantees the order of messages, UDP doesn't.
Data boundaries are not preserved in TCP, but UDP preserves them.
TCP is slower compared to UDP.

For a more detailed answer, see my post on 9 differences between the TCP and UDP protocols.

2. How does the TCP handshake work? Three messages are exchanged as part of a TCP handshake: the initiator sends SYN; upon receiving this, the listener sends SYN-ACK; and finally the initiator replies with ACK, at which point the TCP connection moves to the ESTABLISHED state. The process is easy to follow in the handshake diagram in the original post.

3. How do you implement reliable transmission in the UDP protocol? This is usually a follow-up to the previous question. Though UDP doesn't provide a delivery guarantee at the protocol level, you can introduce your own logic to maintain reliable messaging, e.g. by introducing sequence numbers and retransmission. If the receiver finds that it has missed a sequence number, it can ask for a replay of that message from the server. The TRDP protocol, used by Tibco Rendezvous (a popular high-speed messaging middleware), uses UDP for faster messaging and provides a reliability guarantee by using sequence numbers and retransmission.

4. What is network byte order? How do two hosts communicate if they have different byte ordering? There are two ways to store a multi-byte value in memory: little endian (least significant byte at the starting address) and big endian (most significant byte at the starting address). These are collectively known as host byte order.
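An aside worth knowing for a Java interview: java.nio.ByteBuffer writes multi-byte values in network byte order (big-endian) by default, which is why explicit byte swapping is rarely needed in Java. A small sketch showing the two orderings (the class name ByteOrderDemo is my own):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class ByteOrderDemo {
    public static void main(String[] args) {
        // ByteBuffer's default order is BIG_ENDIAN, i.e. network byte order:
        // the most significant byte of 0x0A0B0C0D lands at index 0.
        byte[] big = ByteBuffer.allocate(4).putInt(0x0A0B0C0D).array();
        System.out.printf("big-endian:    %02X %02X %02X %02X%n",
                big[0], big[1], big[2], big[3]);

        // Explicitly request the typical x86 host order for comparison.
        byte[] little = ByteBuffer.allocate(4)
                .order(ByteOrder.LITTLE_ENDIAN)
                .putInt(0x0A0B0C0D)
                .array();
        System.out.printf("little-endian: %02X %02X %02X %02X%n",
                little[0], little[1], little[2], little[3]);
    }
}
```

The first line prints 0A 0B 0C 0D and the second 0D 0C 0B 0A: same integer, opposite byte layouts.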
For example, an Intel processor stores a 32-bit integer as four consecutive bytes in memory in the order 1-2-3-4, where 1 is the least significant byte. An IBM PowerPC processor would store the integer in the byte order 4-3-2-1. Networking protocols such as TCP are based on a specific network byte order, which uses big-endian byte ordering. If two machines communicating with each other have different byte ordering, values are converted to network byte order before sending and converted back after receiving. Therefore, a little-endian microcontroller sending to a UDP/IP network must swap the order in which bytes appear within multi-byte values before the values are sent onto the network, and must swap the order in which bytes appear in multi-byte values received from the network before the values are used. In short, you can also say that network byte order is the standard byte ordering used during transmission, and that it is big endian.

5. What is Nagle's algorithm? If an interviewer is testing your knowledge of the TCP/IP protocol, it's very rare for him not to ask this question. Nagle's algorithm is a way of improving the performance of the TCP/IP protocol and networks by reducing the number of TCP packets that need to be sent over the network. It works by buffering small packets until the buffer reaches the Maximum Segment Size. This is because small packets, which contain only 1 or 2 bytes of data, have a lot of overhead relative to the TCP header, which is 40 bytes. These small packets can also lead to congestion in a slow network. Nagle's algorithm tries to improve the efficiency of the TCP protocol by buffering them in order to send a larger packet. Nagle's algorithm also has a negative effect on non-small writes, so if you are writing large data in packets then it's better to disable it.
In general, Nagle's algorithm is a defense against a careless application that sends lots of small packets to the network, but it will not benefit, and may have a negative effect on, a well-written application that properly takes care of its own buffering.

6. What is TCP_NODELAY? TCP_NODELAY is an option to disable Nagle's algorithm, provided by various TCP implementations. Since Nagle's algorithm performs badly in combination with TCP's delayed acknowledgement algorithm, it's better to disable Nagle's when you are doing write-write-read operations, where a read after two successive writes on a socket may be delayed by up to 500 milliseconds, until the second write has reached the destination. If latency is more of a concern than bandwidth usage, e.g. in a network-based multi-player game where users want to see actions from other players immediately, it's better to bypass Nagle's delay by using the TCP_NODELAY flag.

7. What is multicasting or multicast transmission? Which protocol is generally used for multicast, TCP or UDP? Multicasting, or multicast transmission, is one-to-many distribution, where a message is delivered to a group of subscribers simultaneously in a single transmission from the publisher. Copies of the message are automatically created in other network elements, e.g. routers, but only when the topology of the network requires it. Tibco Rendezvous supports multicast transmission. Multicasting can only be implemented using UDP, because UDP sends full data as a datagram packet, which can be replicated and delivered to the other subscribers. Since TCP is a point-to-point protocol, it cannot deliver messages to multiple subscribers unless it has a link to each of them. UDP is not reliable, though, and messages may be lost or delivered out of order. Reliable multicast protocols such as Pragmatic General Multicast (PGM) have been developed to add loss detection and retransmission on top of IP multicast.
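Coming back to question 6 for a moment: in Java, TCP_NODELAY is a single per-socket option. A minimal sketch, assuming a freshly created, unconnected socket (class name NoDelayDemo is mine):

```java
import java.net.Socket;
import java.net.SocketException;

public class NoDelayDemo {
    public static void main(String[] args) throws SocketException {
        Socket socket = new Socket(); // unconnected; options may be set before connecting
        System.out.println(socket.getTcpNoDelay()); // Nagle's algorithm is on by default -> false

        // TCP_NODELAY disables Nagle's algorithm for this socket only:
        // small writes go out immediately instead of being coalesced.
        socket.setTcpNoDelay(true);
        System.out.println(socket.getTcpNoDelay()); // true
    }
}
```

The option is per socket, so a latency-sensitive connection can disable Nagle's while the rest of the application keeps the default buffering behaviour.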
IP multicast is widely deployed in enterprises, commercial stock exchanges, and multimedia content delivery networks. A common enterprise use of IP multicast is for IPTV applications.

8. What is the difference between a Topic and a Queue in JMS? The main difference between a Topic and a Queue in the Java Message Service appears when there are multiple consumers of messages. If we set up multiple listener threads to consume messages from a Queue, each message will be dispatched to only one thread, not all of them. With a Topic, on the other hand, each subscriber gets its own copy of the message.

9. What is the difference between IO and NIO? The main difference is that NIO provides asynchronous, non-blocking IO, which is critical for writing faster and more scalable networking systems, while most of the utilities in the IO classes are blocking and slow. NIO takes advantage of asynchronous system calls in UNIX systems, such as the select() system call for network sockets. Using select(), an application can monitor several resources at the same time and can also poll for network activity without blocking. The select() system call identifies whether data is pending or not; then read() or write() may be used knowing that they will complete immediately.

10. How do you write a multi-threaded server in Java? A multi-threaded server is one that can serve multiple clients without blocking, and Java provides excellent support for developing one. Prior to Java 1.4, you could write a multi-threaded server using traditional socket IO and threads. This had a severe limitation on scalability, because it creates a new thread for each connection, and you can only create a fixed number of threads, depending on the machine's and platform's capabilities. Though this design can be improved by using thread pools and worker threads, it is still resource-intensive. After JDK 1.4 and the introduction of NIO, writing a scalable, multi-threaded server became a bit easier.
You can easily create one in a single thread by using a Selector, which takes advantage of the asynchronous, non-blocking IO model of Java NIO.

11. What is an ephemeral port? In TCP/IP, a connection is usually identified by four things: server IP, server port, client IP and client port. Of these four, three are well known most of the time; what is often not known is the client port, and this is where ephemeral ports come into the picture. Ephemeral ports are dynamic ports assigned by your machine's IP stack, from a designated range known as the ephemeral port range, when a client connection doesn't explicitly specify a port number. These are short-lived, temporary ports that can be reused once the connection is closed, although most IP software doesn't reuse an ephemeral port until the whole range has been exhausted. Like TCP, the UDP protocol also uses ephemeral ports while sending datagrams. In Linux, the ephemeral port range is from 32768 to 61000, while in Windows the default ephemeral port range is 1025 to 5000; other operating systems have their own ranges.

12. What is the sliding window protocol? The sliding window protocol is a technique for controlling transmitted data packets between two network computers where reliable and sequential delivery of data packets is required, such as that provided by the Transmission Control Protocol (TCP). In the sliding window technique, each packet includes a unique consecutive sequence number, which is used by the receiving computer to place data in the correct order. The objective of the sliding window technique is to use the sequence numbers to avoid duplicate data and to request missing data.

13. When do you get the "too many files open" error? Just like a file connection, a socket connection also needs a file descriptor. Since every machine has a limited number of file descriptors, it's possible for a process to run out of them. When this happens, you will see the "too many files open" error.
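Ephemeral port assignment from question 11 is easy to observe in Java: bind to port 0 and the IP stack picks a free port from the ephemeral range. A small sketch (class name EphemeralPortDemo is mine):

```java
import java.io.IOException;
import java.net.DatagramSocket;
import java.net.ServerSocket;

public class EphemeralPortDemo {
    public static void main(String[] args) throws IOException {
        // Port 0 asks the OS to assign a free port from the ephemeral range
        // (e.g. 32768-61000 on Linux, per the answer above).
        try (ServerSocket tcp = new ServerSocket(0);
             DatagramSocket udp = new DatagramSocket(0)) {
            System.out.println("TCP listening on ephemeral port " + tcp.getLocalPort());
            System.out.println("UDP bound to ephemeral port " + udp.getLocalPort());
        }
    }
}
```

Running it twice will usually show different port numbers, since the stack hands them out dynamically.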
You can check how many file descriptors per process are allowed on a UNIX-based system by executing the ulimit -n command, or by simply counting the entries in /proc/<pid>/fd.

14. What is the TIME_WAIT state in the TCP protocol? When does a socket connection go to the TIME_WAIT state? When one end of a TCP connection closes it by making a system call, it goes into the TIME_WAIT state. Since TCP packets can arrive in the wrong order, the port must not be closed immediately, in order to allow late packets to arrive. That's why that end of the TCP connection goes into the TIME_WAIT state. For example, if a client closes a socket connection, then it will go to the TIME_WAIT state; similarly, if the server closes the connection, you will see TIME_WAIT there. You can check the status of your TCP and UDP sockets by using the usual networking commands in UNIX.

15. What will happen if you have too many socket connections in the TIME_WAIT state on a server? When a socket connection or port goes into the TIME_WAIT state, it doesn't release the file descriptor associated with it. The file descriptor is only released when the TIME_WAIT state ends, i.e. after some specified, configurable time. If too many connections are in the TIME_WAIT state, then your server may run out of file descriptors, start throwing the "too many files open" error, and stop accepting new connections.

That's all for this list of networking and socket programming interview questions and answers. Though I originally intended this list for Java programmers, it is equally useful for any programmer; in fact, this is the bare minimum knowledge of sockets and protocols every programmer should have. I have found that C and C++ programmers are better at answering these questions than the average Java programmer. One reason for this may be that Java programmers have so many useful libraries, e.g. Apache MINA, which handle all the low-level work for them.
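To tie questions 9 and 10 together in code, here is a hedged sketch of a single-threaded echo server built on a Selector, in the spirit described above (the class NioEchoServer and all of its details are my own illustration, not production code):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class NioEchoServer implements Runnable {
    private final Selector selector;
    private final ServerSocketChannel server;

    public NioEchoServer(int port) throws IOException {
        selector = Selector.open();
        server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(port)); // port 0 = pick an ephemeral port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
    }

    public int port() throws IOException {
        return ((InetSocketAddress) server.getLocalAddress()).getPort();
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                selector.select(200); // one thread waits on all channels at once
                Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                while (keys.hasNext()) {
                    SelectionKey key = keys.next();
                    keys.remove();
                    if (key.isAcceptable()) {
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        SocketChannel client = (SocketChannel) key.channel();
                        ByteBuffer buffer = ByteBuffer.allocate(1024);
                        if (client.read(buffer) == -1) {
                            key.cancel();
                            client.close();
                            continue;
                        }
                        buffer.flip();
                        client.write(buffer); // echo the bytes straight back
                    }
                }
            }
        } catch (IOException e) {
            // selector closed or I/O failure: fall through and stop serving
        }
    }
}
```

A client can exercise it by connecting an ordinary blocking Socket to port() and reading back whatever it writes; the single thread multiplexes the accept and all the reads through select(), rather than spending a thread per connection.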
Anyway, knowledge of the fundamentals is very important and everything else is just an excuse, but at the same time I also recommend using tried-and-tested libraries like Apache MINA for production code. Reference: 15 Java Socket Programming, Networking Interview Questions and Answers from our JCG partner Javin Paul at the Javarevisited blog....

Is pairing for everybody?

Pair programming is a great way to share knowledge. But every developer is different: does pairing work for everyone? Pairing helps a team normalise its knowledge – what one person knows, everyone else learns through pairing: keyboard shortcuts, techniques, practices, third party libraries, as well as the details of the source code you’re working in. This pushes up the average level of the team and stops knowledge becoming siloed. Pairing also helps with discipline: it’s a lot harder to argue that you don’t need a unit test when there’s someone sitting next to you, literally acting as your conscience. It’s also a lot harder to just do the quick and dirty hack to get on to the next task, when the person sitting next to you has taken control of the keyboard to stop you committing war crimes against the source code. The biggest problem most teams face is basically one of communication: coordinating, in detail, the activities of a team of developers is difficult. Ideally, every developer would know everything that is going on across the team – but this clearly isn’t practical. Instead, we have to draw boundaries to make it easier to reason about the system as a whole, without knowing the whole system to the same level of detail. I’ll create an API, some boundary layer, and we each work to our own side of it. I’ll create the service, you sort out the user interface. I’ll sort out the network protocol, you sort out the application layer. You have to introduce an architectural boundary to simplify the communication and coordination. Your architecture immediately reflects the relationships of the developers building it. On teams that pair, though, these boundaries can be softer. They still happen, but as pairs rotate you see both sides of any boundary, so it doesn’t become a black box you don’t know about and can’t change. One day I’m writing the user interface code, the next I’m writing the service layer that feeds it.
This is how you spot inconsistencies and opportunities to fix the architecture and take advantage of implementation details on both sides. Otherwise this communication is hard. Continuous pair rotation means you can get close to the ideal that each developer knows, broadly, what is happening everywhere. However, let’s be honest: pairing isn’t for everyone. I’ve worked with some people who were great at pairing, who were a pleasure to work with. People who had no problem explaining their thought process and no ego to get bruised when you point out the fatal flaw in their idea. People who spot when you’ve lost the train of thought and pick up where you drifted off from. A good pairing session becomes very social. A team that is pairing can sound very noisy. It can be one of the hardest things to get used to when you start pairing: I seem to spend my entire day arguing and talking. When are we gonna get on and write some damned code? But that just highlights how little of the job is actually typing in source code. Most of the day is figuring out which change to make and where. A single line of code can take hours of arguing to get right and in the right place. But programming tends to attract people who are less sociable than others – and let’s face it, we’re a pretty anti-social bunch: I spend my entire day negotiating with a machine that works in 1s and 0s. Not for me the subtle nuances of human communication, it either compiles or it doesn’t. I don’t have to negotiate or try and out politick the compiler. I don’t have to deal with the compiler having “one of those days” (well, I say that, sometimes I swear…). I don’t have to take the compiler to one side and offer comforting words because its cat died. I don’t have to worry about hurting the compiler’s feelings because I made the same mistake for the hundredth time: “yes of course I’m listening to you, no I’m not just ignoring you. Of course I value your opinions, dear. 
But seriously, this is definitely an IList of TFoo!” So it’s no surprise that among the great variety of programmers you meet, some are extrovert characters who relish the social, human side of working in a team of people, building software. As well as the introvert characters who relish the quiet, private, intellectual challenge of crafting an elegant solution to a fiendish problem. And so to pairing: any team will end up with a mixture of characters. The extroverts will tend to enjoy pairing, while the introverts will tend to find it harder and seek to avoid it. This isn’t necessarily a question of education or persuasion; the benefits are relatively intangible, and more introverted developers may find the whole process less enjoyable than working solo. It sounds trite, but happy developers are productive developers. There’s no point doing anything that makes some of your peers unhappy. All teams need to agree on rules. For example, some people like eating really smelly food in an open plan office. Good teams tend to agree rules about this kind of behaviour; everyone agrees that small sacrifices for an individual make a big difference for team harmony. However, how do you resolve a difference of opinion about pairing? As a team decision, pairing is a bit all or nothing. Either we agree to pair on everything, so there’s no code ownership, there’s regular rotation, and we learn from each other. Or we don’t, and we each become responsible for our own dominion. We can’t agree that those who want to pair will go into the pairing room so as not to upset everyone else. One option is to simply require that everyone on your team has to love pairing. I don’t know about you: hiring good people is hard. The last thing I want to do is start excluding people who could otherwise be productive. Isn’t it better to at least have somebody doing something, even if they’re not pairing? Another option is to force developers to pair, even if they find it difficult or uncomfortable.
But is that really going to be productive? Building resentment and unhappiness is not going to create a high performance team. Of course, the other extreme is just as likely to cause upset: if you stop all pairing, then those that want to will feel resentful and unhappy. And what about the middle ground? Can you have a team where some people pair while others work on their own? It seems inevitable that Conway’s law will come into play: the structure of the software will reflect the structure of the team. It’s very difficult for there to be overlap between developers working on their own and developers that are pairing. For exactly the same reason it’s difficult for a group of individual developers to overlap on the same area of code at the same time: you’ll necessarily introduce some architectural boundary to ease coordination. This means you still end up with a collection of silos, some owned by individual developers, some owned by a group of developers. Does this give you the best compromise? Or the worst of both worlds? What’s your experience? What have you tried? What worked, what didn’t?Reference: Is pairing for everybody? from our JCG partner David Green at the Actively Lazy blog....

jinfo: Command-line Peeking at JVM Runtime Configuration

In several recent blogs (in my reviews of the books Java EE 7 Performance Tuning and Optimization and WildFly Performance Tuning in particular), I have referenced my own past blog posts on certain Oracle JDK command-line tools. I was aghast to discover that I had never exclusively addressed the nifty jinfo tool, and this post sets out to rectify that troubling situation. I suspect that the reasons I chose not to write about jinfo previously include limitations related to jinfo discussed in my post VisualVM: jinfo and So Much More. In the Java SE 8 version of jinfo running on my machine, the primary limitation of jinfo on Windows that I discussed in the post Acquiring JVM Runtime Information has been addressed. In particular, I noted in that post that the -flags option was not supported in the Windows version of jinfo at that time. As the next screen snapshot proves, that is no longer the case (note the use of jps to acquire the Java process ID to instruct jinfo to query).As the above screen snapshot demonstrates, the jinfo command with the -flags option shows the explicitly specified JVM options of the Java process being monitored. If I want to find out about other JVM flags that are in effect implicitly (automatically), I can run java -XX:+PrintFlagsFinal to see all default JVM options. I can then query for any one of these against a running JVM process to find out what that particular JVM is using (the same default or an overridden value). The next screen snapshot shows a small portion of the output provided by running java -XX:+PrintFlagsFinal.Let’s suppose I notice a flag called PrintHeapAtGC in the above output and want to know if it’s set in my particular Java application (-XX:+PrintHeapAtGC means it’s set and -XX:-PrintHeapAtGC means it’s not set). 
I can have jinfo tell me what its setting is (note my choice to use jcmd instead of jps in this case to determine the Java process ID):Because of the subtraction sign (-) instead of an addition sign (+) after the colon and before “PrintHeapAtGC”, we know this is turned off for the Java process with the specified ID. It turns out that jinfo does more than let us look; it also lets us touch. The next screen snapshot shows changing this option using jinfo.As the previous screen snapshot indicates, I can turn boolean-style JVM options off and on by simply using the same command to view the flag’s setting but preceding the flag’s name with the addition sign (+) to turn it on or with the subtraction sign (-) to turn it off. In the example just shown, I turned off PrintGCDateStamps, turned it back on again, and monitored its setting between those changes. Not all JVM options are boolean conditions. In those cases, their new values are assigned to them by concatenating the equals sign (=) and the new value after the flag name. It’s also important to note that the target JVM (the one you’re trying to peek at and touch with jinfo) will not allow you to change all of its JVM option settings. In such cases, you’ll likely see a stack trace with the message “Command failed in target VM.” In addition to displaying a currently running JVM’s options and allowing some of them to be changed, jinfo also allows one to see the system properties used by that JVM as name/value pairs. This is demonstrated in the next screen snapshot with a small fraction of the output shown.Perhaps the easiest way to run jinfo is to simply provide no arguments other than the PID of the Java process in question and have both JVM options (non-default and command-line) and system properties displayed. Running jinfo -help provides brief usage details. Other important details are found in the Oracle documentation on the jinfo tool. 
These details include the common (when it comes to these tools) reminder that this tool is “experimental and unsupported” and “might not be available in future releases of the JDK.” We are also warned that jinfo on Windows requires the availability of dbgeng.dll or installed Debugging Tools For Windows. Although I have referenced the handy jinfo command-line tool previously in the posts VisualVM: jinfo and So Much More and Acquiring JVM Runtime Information, it is a handy enough tool to justify a post of its very own. As a command-line tool, it enjoys the benefits commonly associated with command-line tools, such as being relatively lightweight, working well with scripts, and working in headless environments.Reference: jinfo: Command-line Peeking at JVM Runtime Configuration from our JCG partner Dustin Marx at the Inspired by Actual Events blog....
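As a footnote to the jinfo post above: the same peeking and poking can also be done from inside a running JVM through the JDK-specific (non-standard) com.sun.management.HotSpotDiagnosticMXBean. The sketch below is my own illustration, not from the original post; it uses the HeapDumpOnOutOfMemoryError flag only because that is a writeable boolean HotSpot option, and, just like jinfo, setVMOption fails for options the JVM does not mark writeable:

```java
import java.lang.management.ManagementFactory;

import com.sun.management.HotSpotDiagnosticMXBean;
import com.sun.management.VMOption;

public class VMOptionPeek {

    // Reads the current value of a HotSpot VM option by name,
    // much like: jinfo -flag <name> <pid>
    static String read(String flagName) {
        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        VMOption option = bean.getVMOption(flagName);
        return option.getValue();
    }

    // Sets a writeable VM option at runtime,
    // much like: jinfo -flag +<name> <pid>
    static void write(String flagName, String value) {
        ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class)
                         .setVMOption(flagName, value);
    }

    public static void main(String[] args) {
        String flag = "HeapDumpOnOutOfMemoryError";
        System.out.println(flag + " = " + read(flag));
        write(flag, "true");
        System.out.println(flag + " = " + read(flag));
    }
}
```

Running this prints the flag's value before and after the change; running jinfo -flag HeapDumpOnOutOfMemoryError against the same PID from another terminal should show the same transition.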

My Favorite IntelliJ IDEA Features

I have been a long time user (and customer) of IntelliJ IDEA. I think I started using it around 2005 or 2006, version 5.0 at the time. I was an Eclipse user back then. A few of my colleagues recommended it to me, and at first I was not convinced, but after trying it out I was impressed. Now in 2014, IntelliJ IDEA is still my IDE of choice. The intent of this post is not to start an IDE war, but to focus on a few IDEA features that sometimes other IDEA users are not aware of.       Darcula Theme The Darcula Theme changes your user interface to a dark look and feel. Well, maybe this is nothing new for you, but I would like to point out two major advantages. First, it causes much less stress to your eyes. Give it a try! After a few hours using the dark look, if you switch to the default one again, you’re probably going to feel your eyes burning for a few minutes. Second, if you’re a mobility addict and you’re always running on battery, the dark look can also help your battery last longer.Postfix completion Postfix completion is the feature that I always wanted and didn’t even know it. Postfix completion allows you to change already typed expressions. How many times have all of us cursed at having to go back just to add a missing cast? Or because we actually wanted to System.out the expression? Well, Postfix completion fixes that. For instance, for the System.out case, you type the expression: someVar You can now type: someVar.sout And the expression is transformed to: System.out.println(someVar); Check this awesome post on the IntelliJ Blog for additional information about Postfix completion. Frameworks and Technologies Support In the Java world, you have a lot of frameworks and technologies available. Most likely you will come across many of them in your developer work. Sometimes, it’s a nightmare to deal with the additional layer and the required configuration for everything to work correctly. 
Look at Maven, for instance: it’s a pain to find which dependency to import when you need a class. IDEA’s Maven support allows you to search for the class in your local repository and add the correct dependency to your pom.xml file. Just type the name of the class, press Alt + Enter and Add Maven Dependency:Pick the library you need. It’s added automatically to your pom.xml.You have support for Java EE, Spring, GWT, Maven and many others. Check here for a full list. Inject Language With Inject Language, it’s possible to have syntax and error highlighting and code completion for a large number of languages inside String literals. I use GWT a lot, and this allows me to write safe HTML into the String HTML parameters of the API, like this:Other examples include SQL, CSS, JavaScript, Groovy, Scala and many others. Try it out for yourself by pressing Alt + Enter on a String statement and then Inject Language. Presentation Mode Have you ever had to give a presentation about code using your IDE, where the audience was not able to see it properly due to the font size? And then you have to interrupt your presentation to adjust it. Sometimes you don’t even remember where to adjust it. Wouldn’t it be easier to just have a dedicated presentation mode? Just go to the View menu and then the Enter Presentation Mode option. Conclusion I do believe that choosing an IDE is a matter of personal preference and you should stick with the one where you feel more productive for the task that you have to complete. I still use Eclipse when I have to deal with BPM stuff. Some of these features also exist in other IDEs, but I have the impression, from chatting with other developers, that they don’t know about their existence. Explore your development environment and I’m pretty sure you will learn something new. I’m always learning new stuff in IntelliJ IDEA.Reference: My Favorite IntelliJ IDEA Features from our JCG partner Roberto Cortez at the Roberto Cortez Java Blog blog....

5 Things I’ve learnt being a scrum master

I’ve been a scrum master now for about 6 months. Having been involved in scrum previously as a product owner, as well as a developer, moving into this role has really opened my eyes to some of the more political and arguably awkward elements of trying to get rid of impediments. Stay calm when others aren’t Something that I think is really key about being a scrum master: you have to be thick-skinned. You have to not only push back, but when people whine at you and bring office politics into play, it’s vital that you remember exactly what the end game is: to meet the sprint goal! I should also point out that this doesn’t mean ruling with an iron fist; as a scrum master, you still succeed with the team and fail with the team. You can tell when something isn’t working, and using an agile methodology shouldn’t be a painful process, it should be motivating. You are neither a manager, nor just a developer It’s quite an interesting position to be in as, from my experience, it’s not unrealistic to get your hands dirty in some code while being a scrum master. You have to be strong enough to fend people off from poaching your team members, even if they’re higher up the food chain than yourself. You aren’t “in charge” of the team, but you do have a responsibility to push back on poachers. Don’t be afraid to say “no” If your product owner is telling you to put 60 points into a sprint, when you know the velocity you’ve been hitting for the past 4 sprints has consistently been 40 points, don’t be afraid to say “what you’re asking is unattainable”. It’s much better to be honest early on and push back. Blindly saying yes, promising to the customer and then having to deal with the consequences later on isn’t where anyone wants to be. Make sure stand ups are to the point This might be like teaching grandma to suck eggs, but it’s vital that stand ups really are short, sharp and to the point. 
There’s nothing worse than listening to a stand up where everyone in the back of their mind is thinking “I wish s/he’d just get to the point!”. This is a situation where you have to stick to your guns, and if people get offended, they get offended. You have to tell the person, “we don’t need to know any of the extra detail, we just need to know from a high level: what it is you’re going to achieve today, did you achieve what you set out to do yesterday and, importantly, do you have any impediments”. Keep things moving Sometimes things in a sprint can get stale: tasks can get stuck in one state, and you need to keep them moving! As an example, if there’s a task that’s been coded for half a day but not released to testing, find out why. You never know, the CI server might be down, there might be a problem releasing; you need to get it out before it becomes an impediment. Keeping the task board fresh and a true representation of what’s actually happening in your sprint can really boost morale if you know there’s only a few tasks left and you’re literally “sprinting” to the finish!Reference: 5 Things I’ve learnt being a scrum master from our JCG partner David Gray at the Code Mumble blog....

Test Attribute #10 – Isolation

This is the last, final, and 10th entry in the ten commandments of test attributes that started here. And you should read all of them. We usually talk about isolation in terms of mocking. Meaning, when we want to test our code, and the code has dependencies, we use mocking to fake those dependencies, and allow us to test the code in isolation. That’s code isolation. But test isolation is different. An isolated test can run alone, in a suite, in any order, independent of the other tests, and give consistent results. We’ve already identified in footprint the different environment dependencies that can affect the result, and of course, the tested code has something to do with it. Other tests can also create dependency, directly or not. In fact, sometimes we may be relying on the order of tests. To give an example, I summon the witness for the prosecution: The Singleton. Here’s some basic code using a singleton: public class Counter { private static Counter instance; private int count = 0; public static void Init() { instance = new Counter(); }public static Counter GetInstance() { return instance; }public int GetValue() { return count++; } } Pretty simple: the static instance is initialized in a call to Init. We can write these tests: [TestMethod]public void CounterInitialized_WorksInIsolation() { Counter.Init(); var result = Counter.GetInstance().GetValue(); Assert.AreEqual(0, result); }[TestMethod]public void CounterNotInitialized_ThrowsInIsolation() { var result = Counter.GetInstance().GetValue(); Assert.AreEqual(1, result); } Note that the second test passes when running after the first. But if you run it alone it crashes, because the instance is not initialized. Of course, that’s the kind of thing that gives singletons a bad name. And now you need to jump through hoops in order to check the second case. By the way, we’re not just relying on the order of the tests – we’re relying on the way the test runner runs them. 
It could be in the order we’ve written them, but not necessarily. While singletons mostly appear in the tested code, test dependency can occur because of the tests themselves. As long as you keep state in the test class, including mocking operations, there’s a chance that you’re depending on the order of the run. Do you know this trick? public class MyTests: BaseTest { ///... Why not put all common code in a base class, then derive the test class from it? Well, apart from making readability suffer, and debugging excruciating, we now have all kinds of test setup and behavior located in another, shared place. It may be that the test itself does not suffer interference from other tests, but we’re introducing this risk by putting shared code in the base class. Plus, you’ll need to know more about initialization order. And what if the base class is using a singleton? Antics ensue. Test isolation issues show themselves very easily, because once they are out of order (ha-ha), you’ll get the red light. The problem is identifying the problem, because it may seem like an “irreproducible problem”. In order to avoid isolation problems:Check the code. If you can identify patterns of usage like singletons, be aware of that and put it to use: either initialize the singleton before the whole run, or restart it before every test. Rearrange. If there are additional dependencies (like our counter increase), start thinking about rearranging the tests. Because of the way the code is written, you’re starting to test more than just small operations. Don’t inherit. Test base classes create interdependence and hurt isolation. Mocking. Use mocking to control any shared dependency. Clean up. Make sure that tests clean up after themselves. Or instead, clean up before every run.Isolation issues in tests are very annoying because, especially in unit tests, they can be easily avoided. 
Know the code, understand the dependencies, and never rely on another test to set up the state needed for the current one.Reference: Test Attribute #10 – Isolation from our JCG partner Gil Zilberfeld at the Geek Out of Water blog....
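One way to act on the "restart it before every test" advice from the post above is to give the singleton an explicit reset hook that every test calls before it runs, so no test depends on what ran before it. A small Java sketch of the idea (the class and method names here are mine, not from the original post):

```java
public class CounterIsolationDemo {

    // Same shape as the article's singleton, plus a reset hook for tests.
    static class Counter {
        private static Counter instance;
        private int count = 0;

        static void init()           { instance = new Counter(); }
        static void reset()          { instance = null; } // test-only hook
        static Counter getInstance() { return instance; }
        int getValue()               { return count++; }
    }

    // Each "test" re-initializes the singleton itself, so running order
    // no longer matters: the first observed value is always 0.
    static int isolatedFirstValue() {
        Counter.reset();
        Counter.init();
        return Counter.getInstance().getValue();
    }

    public static void main(String[] args) {
        // Run the same test twice, in any order: the result is stable.
        System.out.println(isolatedFirstValue()); // 0
        System.out.println(isolatedFirstValue()); // 0
    }
}
```

In a real test framework the reset would live in a per-test setup method ([TestInitialize], @Before, and so on), which keeps the isolation guarantee without repeating the boilerplate in every test body.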

Java Concurrency Tutorial – Atomicity and race conditions

Atomicity is one of the key concepts in multi-threaded programs. We say a set of actions is atomic if they all execute as a single operation, in an indivisible manner. Taking for granted that a set of actions in a multi-threaded program will be executed serially may lead to incorrect results. The reason is due to thread interference, which means that if two threads execute several steps on the same data, they may overlap. The following Interleaving example shows two threads executing several actions (prints in a loop) and how they are overlapped:       public class Interleaving { public void show() { for (int i = 0; i < 5; i++) { System.out.println(Thread.currentThread().getName() + " - Number: " + i); } } public static void main(String[] args) { final Interleaving main = new Interleaving(); Runnable runner = new Runnable() { @Override public void run() { main.show(); } }; new Thread(runner, "Thread 1").start(); new Thread(runner, "Thread 2").start(); } } When executed, it will produce unpredictable results. As an example: Thread 2 - Number: 0 Thread 2 - Number: 1 Thread 2 - Number: 2 Thread 1 - Number: 0 Thread 1 - Number: 1 Thread 1 - Number: 2 Thread 1 - Number: 3 Thread 1 - Number: 4 Thread 2 - Number: 3 Thread 2 - Number: 4 In this case, nothing wrong happens since they are just printing numbers. However, when you need to share the state of an object (its data) without synchronization, this leads to the presence of race conditions. Race condition Your code will have a race condition if there’s a possibility to produce incorrect results due to thread interleaving. This section describes two types of race conditions:Check-then-act Read-modify-writeTo remove race conditions and enforce thread safety, we must make these actions atomic by using synchronization. Examples in the following sections will show what the effects of these race conditions are. 
Check-then-act race condition This race condition appears when you have a shared field and expect to serially execute the following steps:Get a value from a field. Do something based on the result of the previous check.The problem here is that when the first thread is going to act after the previous check, another thread may have interleaved and changed the value of the field. Now, the first thread will act based on a value that is no longer valid. This is easier to see with an example. UnsafeCheckThenAct is expected to change the field number once. Subsequent calls to the changeNumber method should result in the execution of the else branch: public class UnsafeCheckThenAct { private int number; public void changeNumber() { if (number == 0) { System.out.println(Thread.currentThread().getName() + " | Changed"); number = -1; } else { System.out.println(Thread.currentThread().getName() + " | Not changed"); } } public static void main(String[] args) { final UnsafeCheckThenAct checkAct = new UnsafeCheckThenAct(); for (int i = 0; i < 50; i++) { new Thread(new Runnable() { @Override public void run() { checkAct.changeNumber(); } }, "T" + i).start(); } } } But since this code is not synchronized, it may (there's no guarantee) result in several modifications of the field: T13 | Changed T17 | Changed T35 | Not changed T10 | Changed T48 | Not changed T14 | Changed T60 | Not changed T6 | Changed T5 | Changed T63 | Not changed T18 | Not changed Another example of this race condition is lazy initialization. A simple way to correct this is to use synchronization. SafeCheckThenAct is thread-safe because it has removed the race condition by synchronizing all accesses to the shared field. 
public class SafeCheckThenAct { private int number; public synchronized void changeNumber() { if (number == 0) { System.out.println(Thread.currentThread().getName() + " | Changed"); number = -1; } else { System.out.println(Thread.currentThread().getName() + " | Not changed"); } } public static void main(String[] args) { final SafeCheckThenAct checkAct = new SafeCheckThenAct(); for (int i = 0; i < 50; i++) { new Thread(new Runnable() { @Override public void run() { checkAct.changeNumber(); } }, "T" + i).start(); } } } Now, executing this code will always produce the same expected result; only a single thread will change the field: T0 | Changed T54 | Not changed T53 | Not changed T62 | Not changed T52 | Not changed T51 | Not changed ...In some cases, there will be other mechanisms which perform better than synchronizing the whole method, but I won’t discuss them in this post. Read-modify-write race condition Here we have another type of race condition, which appears when executing the following set of actions:Fetch a value from a field. Modify the value. Store the new value to the field.In this case, there’s another dangerous possibility, which consists of the loss of some updates to the field. One possible outcome is: Field’s value is 1. Thread 1 gets the value from the field (1). Thread 1 modifies the value (5). Thread 2 reads the value from the field (1). Thread 2 modifies the value (7). Thread 1 stores the value to the field (5). Thread 2 stores the value to the field (7).As you can see, the update with the value 5 has been lost. Let’s see a code sample. 
UnsafeReadModifyWrite shares a numeric field which is incremented each time: public class UnsafeReadModifyWrite { private int number; public void incrementNumber() { number++; } public int getNumber() { return this.number; } public static void main(String[] args) throws InterruptedException { final UnsafeReadModifyWrite rmw = new UnsafeReadModifyWrite(); for (int i = 0; i < 1_000; i++) { new Thread(new Runnable() { @Override public void run() { rmw.incrementNumber(); } }, "T" + i).start(); } Thread.sleep(6000); System.out.println("Final number (should be 1_000): " + rmw.getNumber()); } } Can you spot the compound action which causes the race condition? I’m sure you did, but for completeness, I will explain it anyway. The problem is in the increment (number++). This may appear to be a single action but in fact, it is a sequence of three actions (get-increment-write). When executing this code, we may see that we have lost some updates: Final number (should be 1_000): 996 Depending on your computer, it may be very difficult to reproduce this update loss, since there’s no guarantee on how threads will interleave. If you can’t reproduce the above example, try UnsafeReadModifyWriteWithLatch, which uses a CountDownLatch to synchronize the threads’ start, and repeats the test a hundred times. You should probably see some invalid values among all the results: Final number (should be 1_000): 1000 Final number (should be 1_000): 1000 Final number (should be 1_000): 1000 Final number (should be 1_000): 997 Final number (should be 1_000): 999 Final number (should be 1_000): 1000 Final number (should be 1_000): 1000 Final number (should be 1_000): 1000 Final number (should be 1_000): 1000 Final number (should be 1_000): 1000 Final number (should be 1_000): 1000 This example can be solved by making all three actions atomic. 
SafeReadModifyWriteSynchronized uses synchronization in all accesses to the shared field: public class SafeReadModifyWriteSynchronized { private int number; public synchronized void incrementNumber() { number++; } public synchronized int getNumber() { return this.number; } public static void main(String[] args) throws InterruptedException { final SafeReadModifyWriteSynchronized rmw = new SafeReadModifyWriteSynchronized(); for (int i = 0; i < 1_000; i++) { new Thread(new Runnable() { @Override public void run() { rmw.incrementNumber(); } }, "T" + i).start(); } Thread.sleep(4000); System.out.println("Final number (should be 1_000): " + rmw.getNumber()); } } Let’s see another example to remove this race condition. In this specific case, and since the field number is independent of other variables, we can make use of atomic variables. SafeReadModifyWriteAtomic uses atomic variables to store the value of the field: public class SafeReadModifyWriteAtomic { private final AtomicInteger number = new AtomicInteger(); public void incrementNumber() { number.getAndIncrement(); } public int getNumber() { return this.number.get(); } public static void main(String[] args) throws InterruptedException { final SafeReadModifyWriteAtomic rmw = new SafeReadModifyWriteAtomic(); for (int i = 0; i < 1_000; i++) { new Thread(new Runnable() { @Override public void run() { rmw.incrementNumber(); } }, "T" + i).start(); } Thread.sleep(4000); System.out.println("Final number (should be 1_000): " + rmw.getNumber()); } } Following posts will further explain mechanisms like locking or atomic variables. Conclusion This post explained some of the risks implied when executing compound actions in non-synchronized multi-threaded programs. 
To enforce atomicity and prevent thread interleaving, one must use some type of synchronization. You can take a look at the source code on GitHub.Reference: Java Concurrency Tutorial - Atomicity and race conditions from our JCG partner Xavier Padro at the Xavier Padró's Blog blog....
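The UnsafeReadModifyWriteWithLatch class mentioned in the post above is not listed there (it lives in the linked GitHub sources), but the technique is worth sketching: a CountDownLatch holds all threads at a gate so they begin incrementing at the same moment, which makes lost updates far more likely to show up. The following is my own reconstruction of that idea, not the author's listing, and it also uses a second latch in place of the Thread.sleep from the earlier examples:

```java
import java.util.concurrent.CountDownLatch;

public class ReadModifyWriteLatchSketch {

    private int number;

    public void incrementNumber() { number++; } // intentionally unsafe compound action

    // Starts 'threads' threads that all wait on a single start latch, then
    // increment concurrently; returns the (possibly too small) final count.
    static int run(int threads) throws InterruptedException {
        ReadModifyWriteLatchSketch rmw = new ReadModifyWriteLatchSketch();
        CountDownLatch startSignal = new CountDownLatch(1);
        CountDownLatch done = new CountDownLatch(threads);
        for (int i = 0; i < threads; i++) {
            new Thread(() -> {
                try {
                    startSignal.await();   // line every thread up at the gate
                    rmw.incrementNumber();
                } catch (InterruptedException ignored) {
                } finally {
                    done.countDown();
                }
            }).start();
        }
        startSignal.countDown();           // release all threads at once
        done.await();                      // wait for every increment to finish
        return rmw.number;
    }

    public static void main(String[] args) throws InterruptedException {
        // Repeat the experiment; some runs should print a value below 1000.
        for (int i = 0; i < 10; i++) {
            System.out.println("Final number (should be 1_000): " + run(1_000));
        }
    }
}
```

Note that waiting on the done latch is also a better completion signal than the arbitrary Thread.sleep(4000) in the article's examples: it returns exactly when the last thread finishes.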

A Wonderful SQL Feature: Quantified Comparison Predicates (ANY, ALL)

Have you ever wondered about the use-case behind SQL’s ANY (also: SOME) and ALL keywords? You have probably not yet encountered these keywords in the wild. Yet they can be extremely useful. But first, let’s see how they’re defined in the SQL standard. The easy part:           8.7 <quantified comparison predicate>FunctionSpecify a quantified comparison.Format<quantified comparison predicate> ::= <row value constructor> <comp op> <quantifier> <table subquery><quantifier> ::= <all> | <some> <all> ::= ALL <some> ::= SOME | ANY Intuitively, such a quantified comparison predicate can be used as such: -- Is any person of age 42? 42 = ANY (SELECT age FROM person)-- Are all persons younger than 42? 42 > ALL (SELECT age FROM person) Let’s keep with the useful ones. Observe that you have probably written the above queries with a different syntax, as such: -- Is any person of age 42? 42 IN (SELECT age FROM person)-- Are all persons younger than 42? 42 > (SELECT MAX(age) FROM person) In fact, you’ve used the <in predicate>, or a greater than predicate with a <scalar subquery> and an aggregate function. The IN predicate It’s not a coincidence that you might have used the <in predicate> just like the above <quantified comparison predicate> using ANY. In fact, the <in predicate> is specified just like that: 8.4 <in predicate>Syntax Rules2) Let RVC be the <row value constructor> and let IPV be the <in predicate value>.3) The expressionRVC NOT IN IPVis equivalent toNOT ( RVC IN IPV )4) The expressionRVC IN IPVis equivalent toRVC = ANY IPV Precisely! Isn’t SQL beautiful? Note, the implicit consequences of 3) lead to a very peculiar behaviour of the NOT IN predicate with respect to NULL, which few developers are aware of. Now, it’s getting awesome So far, there is nothing out of the ordinary with these <quantified comparison predicate>. All of the previous examples can be emulated with “more idiomatic”, or let’s say, “more everyday” SQL. 
But the true awesomeness of <quantified comparison predicate> appears only when used in combination with <row value expression>, where rows have a degree / arity of more than one: -- Is any person called "John" of age 42? (42, 'John') = ANY (SELECT age, first_name FROM person)-- Are all persons younger than 55? -- Or if they're 55, do they all earn less than 150'000.00? (55, 150000.00) > ALL (SELECT age, wage FROM person) See the above queries in action on PostgreSQL in this SQLFiddle. At this point, it is worth mentioning that few databases actually support… row value expressions, or… quantified comparison predicates with row value expressions. Even if specified in SQL-92, it looks as if most databases are still taking their time to implement this feature, 22 years later. Emulating these predicates with jOOQ But luckily, there is jOOQ to emulate these features for you. Even if you’re not using jOOQ in your project, the following SQL transformation steps can be useful if you want to express the above predicates. Let’s have a look at how this could be done in MySQL: -- This predicate (42, 'John') = ANY (SELECT age, first_name FROM person)-- ... is the same as this: EXISTS ( SELECT 1 FROM person WHERE age = 42 AND first_name = 'John' ) What about the other predicate? -- This predicate (55, 150000.00) > ALL (SELECT age, wage FROM person)-- ... is the same as these: ---------------------------- -- No quantified comparison predicate with -- Row value expressions available (55, 150000.00) > ( SELECT age, wage FROM person ORDER BY 1 DESC, 2 DESC LIMIT 1 )-- No row value expressions available at all NOT EXISTS ( SELECT 1 FROM person WHERE (55 < age) OR (55 = age AND 150000.00 <= wage) ) Clearly, the EXISTS predicate can be used in pretty much every database to emulate what we’ve seen before. If you just need this for a one-shot emulation, the above examples will be sufficient. 
If, however, you want to use <row value expression> and <quantified comparison predicate> more formally, you’d better get SQL transformation right. Read on about SQL transformation in this article.Reference: A Wonderful SQL Feature: Quantified Comparison Predicates (ANY, ALL) from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....
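A closing note on the row-value semantics used above: the NOT EXISTS rewrite works because SQL defines (a, b) > (c, d) lexicographically; the first fields are compared, and the second fields only break ties. If that rule feels slippery, here it is spelled out in plain Java (the Person class and method names are mine, purely illustrative):

```java
import java.util.List;

public class RowValueAll {

    static class Person {
        final int age;
        final double wage;
        Person(int age, double wage) { this.age = age; this.wage = wage; }
    }

    // (a1, a2) > (b1, b2) compares lexicographically: the first field wins,
    // the second field only breaks ties. This is exactly what the NOT EXISTS
    // emulation encodes as (55 < age) OR (55 = age AND 150000.00 <= wage).
    static boolean rowGreaterThan(int a1, double a2, int b1, double b2) {
        return a1 > b1 || (a1 == b1 && a2 > b2);
    }

    // Emulates: (age, wage) > ALL (SELECT age, wage FROM person)
    static boolean greaterThanAll(int age, double wage, List<Person> people) {
        for (Person p : people) {
            if (!rowGreaterThan(age, wage, p.age, p.wage)) {
                return false; // one counter-example suffices, just like NOT EXISTS
            }
        }
        return true;
    }

    public static void main(String[] args) {
        List<Person> people =
                List.of(new Person(40, 90_000), new Person(55, 120_000));
        // 55 ties on age with the second person, so the wage decides.
        System.out.println(greaterThanAll(55, 150_000, people)); // true
        System.out.println(greaterThanAll(55, 100_000, people)); // false
    }
}
```

Note that this sketch ignores SQL's NULL handling: a NULL age or wage makes the row comparison UNKNOWN rather than false, which is the same trap the post mentions for NOT IN.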
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.