115 Java Interview Questions and Answers – The ULTIMATE List

In this tutorial we will discuss the different types of questions that can be used in a Java interview, in order for the employer to test your skills in Java and object-oriented programming in general. In the following sections we will discuss object-oriented programming and its characteristics, general questions regarding Java and its functionality, collections in Java, garbage collectors, exception handling, Java applets, Swing, JDBC, Remote Method Invocation (RMI), Servlets and JSP. Let’s go…!

Table of Contents
- Object Oriented Programming (OOP)
- General Questions about Java
- Java Threads
- Java Collections
- Garbage Collectors
- Exception Handling
- Java Applets
- Swing
- JDBC
- Remote Method Invocation (RMI)
- Servlets
- JSP

Object Oriented Programming (OOP)

Java is a computer programming language that is concurrent, class-based and object-oriented. The advantages of object-oriented software development are shown below:
- Modular development of code, which leads to easy maintenance and modification.
- Reusability of code.
- Improved reliability and flexibility of code.
- Increased understanding of code.

Object-oriented programming contains many significant features, such as encapsulation, inheritance, polymorphism and abstraction. We analyze each feature separately in the following sections.

Encapsulation

Encapsulation provides objects with the ability to hide their internal characteristics and behavior. Each object provides a number of methods, which can be accessed by other objects in order to change its internal data. In Java, there are three access modifiers: public, private and protected. Each modifier imposes different access rights on other classes, either in the same or in external packages. Some of the advantages of using encapsulation are listed below:
- The internal state of every object is protected by hiding its attributes.
- It increases the usability and maintainability of code, because the behavior of an object can be independently changed or extended.
- It improves modularity by preventing objects from interacting with each other in undesired ways.

You can refer to our tutorial here for more details and examples on encapsulation.

Polymorphism

Polymorphism is the ability of programming languages to present the same interface for differing underlying data types. A polymorphic type is a type whose operations can also be applied to values of some other type.

Inheritance

Inheritance provides an object with the ability to acquire the fields and methods of another class, called its base class. Inheritance provides reusability of code and can be used to add additional features to an existing class, without modifying it.

Abstraction

Abstraction is the process of separating ideas from specific instances and thus, developing classes in terms of their own functionality, instead of their implementation details. Java supports the creation and existence of abstract classes that expose interfaces, without including the actual implementation of all methods. The abstraction technique aims to separate the implementation details of a class from its behavior.

Differences between Abstraction and Encapsulation

Abstraction and encapsulation are complementary concepts. On the one hand, abstraction focuses on the behavior of an object. On the other hand, encapsulation focuses on the implementation of an object’s behavior. Encapsulation is usually achieved by hiding information about the internal state of an object and thus, can be seen as a strategy used in order to provide abstraction.

General Questions about Java

1. What is JVM ? Why is Java called the “Platform Independent Programming Language” ?

A Java virtual machine (JVM) is a process virtual machine that can execute Java bytecode. Each Java source file is compiled into a bytecode file, which is executed by the JVM.
Java was designed to allow application programs to be run on any platform, without having to be rewritten or recompiled by the programmer for each separate platform. A Java virtual machine makes this possible, because the JVM for each platform is aware of the specific instruction set and other particularities of the underlying hardware, so the same bytecode can run everywhere.

2. What is the Difference between JDK and JRE ?

The Java Runtime Environment (JRE) is basically the Java Virtual Machine (JVM) in which your Java programs are executed. It also includes browser plugins for applet execution. The Java Development Kit (JDK) is the full-featured Software Development Kit for Java, including the JRE, the compilers and tools (like JavaDoc and the Java Debugger), that a user needs in order to develop, compile and execute Java applications.

3. What does the “static” keyword mean ? Can you override private or static methods in Java ?

The static keyword denotes that a member variable or method can be accessed without requiring an instantiation of the class to which it belongs. A user cannot override static methods in Java, because method overriding is based upon dynamic binding at runtime, while static methods are statically bound at compile time. A static method is not associated with any instance of a class, so the concept of overriding is not applicable.

4. Can you access a non-static variable in a static context ?

A static variable in Java belongs to its class and its value remains the same for all its instances. A static variable is initialized when the class is loaded by the JVM. If your code tries to access a non-static variable from a static context, without any instance, the compiler will complain, because those variables have not been created yet and are not associated with any instance.

5. What are the Data Types supported by Java ? What is Autoboxing and Unboxing ?
The eight primitive data types supported by the Java programming language are:
- byte
- short
- int
- long
- float
- double
- boolean
- char

Autoboxing is the automatic conversion made by the Java compiler between the primitive types and their corresponding object wrapper classes. For example, the compiler converts an int to an Integer, a double to a Double, and so on. The conversion in the other direction is called unboxing.

6. What is Method Overriding and Overloading in Java ?

Method overloading in Java occurs when two or more methods in the same class have the exact same name, but different parameters. On the other hand, method overriding is defined as the case when a child class redefines the same method as a parent class. Overridden methods must have the same name, argument list, and return type (or, since Java 5, a covariant return type). The overriding method may not limit the access of the method it overrides.

7. What is a Constructor, Constructor Overloading in Java and a Copy-Constructor ?

A constructor gets invoked when a new object is created. Every class has a constructor. In case the programmer does not provide a constructor for a class, the Java compiler (javac) creates a default constructor for that class. Constructor overloading is similar to method overloading in Java: different constructors can be created for a single class, and each constructor must have its own unique parameter list. Finally, Java does support copy constructors like C++, but the difference lies in the fact that Java doesn’t create a default copy constructor if you don’t write your own.

8. Does Java support multiple inheritance ?

No, Java does not support multiple inheritance. Each class is able to extend only one class, but is able to implement more than one interface.

9. What is the difference between an Interface and an Abstract class ?

Java provides and supports the creation of both abstract classes and interfaces.
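As an illustration, here is a minimal, self-contained sketch of the two constructs side by side; all class and member names here are made up for the example:

```java
// An interface: members are implicitly public, fields implicitly static final.
interface Movable {
    int MAX_SPEED = 120;          // implicitly public static final
    void move();                  // implicitly public abstract
}

// An abstract class: may mix abstract methods, concrete methods and state.
abstract class Vehicle {
    private int speed;            // non-final, private state is allowed
    abstract void move();         // abstract method, no body
    void stop() { speed = 0; }    // concrete method
}

// A class extends at most one class but may implement many interfaces.
class Car extends Vehicle implements Movable {
    @Override
    public void move() {
        System.out.println("moving up to " + MAX_SPEED);
    }
}

public class AbstractVsInterface {
    public static void main(String[] args) {
        new Car().move();         // prints: moving up to 120
    }
}
```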
Both implementations share some common characteristics, but they differ in the following features:
- All methods in an interface are implicitly abstract. On the other hand, an abstract class may contain both abstract and non-abstract methods.
- A class may implement a number of interfaces, but can extend only one abstract class.
- In order for a class to implement an interface, it must implement all its declared methods. However, a class may not implement all declared methods of an abstract class; in this case, the sub-class must also be declared as abstract.
- Abstract classes can implement interfaces without even providing the implementation of the interface methods.
- Variables declared in a Java interface are by default final. An abstract class may contain non-final variables.
- Members of a Java interface are public by default. A member of an abstract class can be private, protected or public.
- An interface is absolutely abstract and cannot be instantiated. An abstract class also cannot be instantiated, but can be invoked if it contains a main method.

Also check out the Abstract class and Interface differences for JDK 8.

10. What are pass by reference and pass by value ?

When an object is passed by value, a copy of the object is passed. Thus, even if changes are made to that object, they don’t affect the original value. When an object is passed by reference, the actual object is not passed; rather, a reference to the object is passed. Thus, any changes made by the external method are also reflected everywhere.

Java Threads

11. What is the difference between processes and threads ?

A process is an execution of a program, while a thread is a single execution sequence within a process. A process can contain multiple threads. A thread is sometimes called a lightweight process.

12. Explain different ways of creating a thread. Which one would you prefer and why ?
There are three ways in which a thread can be created:
- A class may extend the Thread class.
- A class may implement the Runnable interface.
- An application can use the Executor framework, in order to create a thread pool.

The Runnable interface is preferred, as it does not require an object to inherit the Thread class. In case your application design requires multiple inheritance, only interfaces can help you. Also, the thread pool is very efficient and can be implemented and used very easily.

13. Explain the available thread states at a high level.

During its execution, a thread can reside in one of the following states:
- Runnable: the thread becomes ready to run, but does not necessarily start running immediately.
- Running: the processor is actively executing the thread’s code.
- Waiting: the thread is in a blocked state, waiting for some external processing to finish.
- Sleeping: the thread is forced to sleep.
- Blocked on I/O: the thread is waiting for an I/O operation to complete.
- Blocked on Synchronization: the thread is waiting to acquire a lock.
- Dead: the thread has finished its execution.

14. What is the difference between a synchronized method and a synchronized block ?

In Java programming, each object has a lock. A thread can acquire the lock for an object by using the synchronized keyword. The synchronized keyword can be applied at the method level (coarse-grained lock) or at the block level (fine-grained lock).

15. How does thread synchronization occur inside a monitor ? What levels of synchronization can you apply ?

The JVM uses locks in conjunction with monitors. A monitor is basically a guardian that watches over a sequence of synchronized code, ensuring that only one thread at a time executes a synchronized piece of code. Each monitor is associated with an object reference. A thread is not allowed to execute the code until it obtains the lock.

16. What’s a deadlock ?

A deadlock is a condition that occurs when two processes are waiting for each other to complete before proceeding.
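A minimal sketch of this condition, with the classic two-lock pattern: each thread holds one lock and waits for the other. The class name and timings are illustrative; the sketch uses the ThreadMXBean API to detect the deadlock and daemon threads so the JVM can still exit:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class DeadlockDemo {
    static final Object lockA = new Object();
    static final Object lockB = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> {
            synchronized (lockA) {
                sleep(200);                 // give t2 time to grab lockB
                synchronized (lockB) { }    // blocks forever
            }
        });
        Thread t2 = new Thread(() -> {
            synchronized (lockB) {
                sleep(200);                 // give t1 time to grab lockA
                synchronized (lockA) { }    // blocks forever
            }
        });
        t1.setDaemon(true);                 // daemon threads let the JVM exit
        t2.setDaemon(true);
        t1.start();
        t2.start();

        Thread.sleep(600);                  // let the deadlock form
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        long[] ids = bean.findDeadlockedThreads();
        System.out.println(ids != null ? "deadlock detected" : "no deadlock");
    }

    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { }
    }
}
```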
The result is that both processes wait endlessly.

17. How do you ensure that N threads can access N resources without deadlock ?

A very simple way to avoid deadlock while using N threads is to impose an ordering on the locks and force each thread to follow that ordering. Thus, if all threads lock and unlock the mutexes in the same order, no deadlocks can arise.

Java Collections

18. What are the basic interfaces of the Java Collections Framework ?

The Java Collections Framework provides a well-designed set of interfaces and classes that support operations on collections of objects. The most basic interfaces that reside in the Java Collections Framework are:
- Collection, which represents a group of objects known as its elements.
- Set, which is a collection that cannot contain duplicate elements.
- List, which is an ordered collection and can contain duplicate elements.
- Map, which is an object that maps keys to values and cannot contain duplicate keys.

19. Why doesn’t Collection extend the Cloneable and Serializable interfaces ?

The Collection interface specifies groups of objects known as elements. Each concrete implementation of a Collection can choose its own way of maintaining and ordering its elements. Some collections allow duplicate keys, while some other collections don’t. The semantics and the implications of either cloning or serialization come into play when dealing with actual implementations. Thus, the concrete implementations of collections should decide how they can be cloned or serialized.

20. What is an Iterator ?

The Iterator interface provides a number of methods that are able to iterate over any Collection. Each Java Collection contains the iterator method that returns an Iterator instance. Iterators are capable of removing elements from the underlying collection during the iteration.

21. What differences exist between Iterator and ListIterator ?
The differences between these interfaces are listed below:
- An Iterator can be used to traverse Set and List collections, while a ListIterator can be used to iterate only over Lists.
- An Iterator can traverse a collection only in the forward direction, while a ListIterator can traverse a List in both directions.
- The ListIterator interface extends the Iterator interface and contains extra functionality, such as adding an element, replacing an element, getting the index position of the previous and next elements, etc.

22. What is the difference between fail-fast and fail-safe ?

A fail-safe iterator works on a clone of the underlying collection and thus, is not affected by any modification of the collection. All the collection classes in the java.util package are fail-fast, while the collection classes in java.util.concurrent are fail-safe. Fail-fast iterators throw a ConcurrentModificationException, while fail-safe iterators never throw such an exception.

23. How does HashMap work in Java ?

A HashMap in Java stores key-value pairs. The HashMap requires a hash function and uses the hashCode and equals methods, in order to put and retrieve elements to and from the collection respectively. When the put method is invoked, the HashMap calculates the hash value of the key and stores the pair in the appropriate index inside the collection. If the key exists, its value is updated with the new value. Some important characteristics of a HashMap are its capacity, its load factor and the resizing threshold.

24. What is the importance of the hashCode() and equals() methods ?

A HashMap in Java uses the hashCode and equals methods to determine the index of a key-value pair. These methods are also used when we request the value of a specific key. If these methods are not implemented correctly, two different keys might produce the same hash value and thus, will be considered as equal by the collection. Furthermore, these methods are also used to detect duplicates.
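As an illustration, here is a sketch of a key class whose equals and hashCode form a consistent pair, so two logically equal keys resolve to the same map entry; class and field names are illustrative:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class KeyDemo {
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }

        @Override public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof Point)) return false;
            Point p = (Point) o;
            return x == p.x && y == p.y;
        }

        @Override public int hashCode() {
            return Objects.hash(x, y);   // equal objects -> equal hash codes
        }
    }

    public static void main(String[] args) {
        Map<Point, String> map = new HashMap<>();
        map.put(new Point(1, 2), "first");
        map.put(new Point(1, 2), "second");           // equal key: value replaced
        System.out.println(map.size());               // prints: 1
        System.out.println(map.get(new Point(1, 2))); // prints: second
    }
}
```

Without the two overrides, the second put would create a second entry and the final get with a fresh Point would return null.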
Thus, the implementation of both methods is crucial to the accuracy and correctness of the HashMap.

25. What differences exist between HashMap and Hashtable ?

Both the HashMap and Hashtable classes implement the Map interface and thus, have very similar characteristics. However, they differ in the following features:
- A HashMap allows the existence of null keys and values, while a Hashtable allows neither null keys nor null values.
- A Hashtable is synchronized, while a HashMap is not. Thus, a HashMap is preferred in single-threaded environments, while a Hashtable is suitable for multi-threaded environments.
- A HashMap provides its set of keys and a Java application can iterate over them. Thus, a HashMap is fail-fast. On the other hand, a Hashtable provides an Enumeration of its keys.
- The Hashtable class is considered to be a legacy class.

26. What is the difference between Array and ArrayList ? When would you use an Array over an ArrayList ?

The Array and ArrayList classes differ in the following features:
- Arrays can contain primitives or objects, while an ArrayList can contain only objects.
- Arrays have a fixed size, while an ArrayList is dynamic.
- An ArrayList provides more methods and features, such as addAll, removeAll, iterator, etc.
- For a list of primitive data types, the collections use autoboxing to reduce the coding effort. However, this approach makes them slower when working on fixed-size primitive data types.

27. What is the difference between ArrayList and LinkedList ?

Both the ArrayList and LinkedList classes implement the List interface, but they differ in the following features:
- An ArrayList is an index-based data structure backed by an array. It provides random access to its elements with a performance equal to O(1). On the other hand, a LinkedList stores its data as a list of elements and every element is linked to its previous and next elements. In this case, the search operation for an element has execution time equal to O(n).
- The insertion, addition and removal operations of an element are faster in a LinkedList compared to an ArrayList, because there is no need to resize an array or update the index when an element is added at some arbitrary position inside the collection.
- A LinkedList consumes more memory than an ArrayList, because every node in a LinkedList stores two references, one to its previous element and one to its next element.

Check also our article ArrayList vs. LinkedList.

28. What are the Comparable and Comparator interfaces ? List their differences.

Java provides the Comparable interface, which contains only one method, called compareTo. This method compares two objects, in order to impose an order between them. Specifically, it returns a negative integer, zero, or a positive integer to indicate that the input object is less than, equal to, or greater than the existing object. Java also provides the Comparator interface, which contains two methods, called compare and equals. The first method compares its two input arguments and imposes an order between them. It returns a negative integer, zero, or a positive integer to indicate that the first argument is less than, equal to, or greater than the second. The second method requires an object as a parameter and aims to decide whether the input object is equal to the comparator. The method returns true only if the specified object is also a comparator and imposes the same ordering as the comparator.

29. What is the Java PriorityQueue ?

The PriorityQueue is an unbounded queue, based on a priority heap, and its elements are ordered in their natural order. At the time of its creation, we can provide a Comparator that is responsible for ordering the elements of the PriorityQueue. A PriorityQueue doesn’t allow null values, objects that don’t provide a natural ordering, or objects that don’t have a comparator associated with them.
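A short sketch of a PriorityQueue ordered by a custom Comparator instead of the natural ordering; the values and the chosen ordering (longest string first) are illustrative:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

public class PriorityQueueDemo {
    public static void main(String[] args) {
        // Order by string length, longest first, instead of natural ordering.
        Comparator<String> byLengthDesc =
                Comparator.comparingInt(String::length).reversed();
        PriorityQueue<String> pq = new PriorityQueue<>(byLengthDesc);

        pq.add("pear");
        pq.add("banana");
        pq.add("fig");

        StringBuilder sb = new StringBuilder();
        while (!pq.isEmpty()) {
            sb.append(pq.poll()).append(' ');  // poll returns the head in comparator order
        }
        System.out.println(sb.toString().trim());  // prints: banana pear fig
    }
}
```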
Finally, the Java PriorityQueue is not thread-safe and it requires O(log(n)) time for its enqueuing and dequeuing operations.

30. What do you know about the big-O notation and can you give some examples with respect to different data structures ?

The Big-O notation describes how well an algorithm scales or performs in the worst-case scenario as the number of elements in a data structure increases. The Big-O notation can also be used to describe other behavior, such as memory consumption. Since the collection classes are actually data structures, we usually use the Big-O notation to choose the best implementation to use, based on time, memory and performance. Big-O notation can give a good indication of performance for large amounts of data.

31. What is the tradeoff between using an unordered array versus an ordered array ?

The major advantage of an ordered array is that its search times have a time complexity of O(log n), compared to that of an unordered array, which is O(n). The disadvantage of an ordered array is that the insertion operation has a time complexity of O(n), because the elements with higher values must be moved to make room for the new element. Instead, the insertion operation for an unordered array takes constant time, O(1).

32. What are some of the best practices relating to the Java Collections framework ?

- Choosing the right type of collection to use, based on the application’s needs, is very crucial for its performance. For example, if the size of the elements is fixed and known a priori, we shall use an Array instead of an ArrayList.
- Some collection classes allow us to specify their initial capacity. Thus, if we have an estimation of the number of elements that will be stored, we can use it to avoid rehashing or resizing.
- Always use Generics for type-safety, readability, and robustness. Also, by using Generics you avoid the ClassCastException during runtime.
- Use immutable classes provided by the Java Development Kit (JDK) as keys in a Map, in order to avoid having to implement the hashCode and equals methods for a custom class.
- Program in terms of interfaces, not implementations.
- Return zero-length collections or arrays, as opposed to returning null, in case the underlying collection is actually empty.

33. What’s the difference between the Enumeration and Iterator interfaces ?

Enumeration is twice as fast as Iterator and uses less memory. However, the Iterator is much safer than Enumeration, because other threads are not able to modify the collection object that is currently being traversed by the iterator. Also, Iterators allow the caller to remove elements from the underlying collection, something which is not possible with Enumerations.

34. What is the difference between HashSet and TreeSet ?

The HashSet is implemented using a hash table and thus, its elements are not ordered. The add, remove, and contains methods of a HashSet have constant time complexity, O(1). On the other hand, a TreeSet is implemented using a tree structure. The elements in a TreeSet are sorted, and thus, the add, remove, and contains methods have time complexity of O(log n).

Garbage Collectors

35. What is the purpose of garbage collection in Java, and when is it used ?

The purpose of garbage collection is to identify and discard those objects that are no longer needed by the application, in order for the resources to be reclaimed and reused.

36. What do the System.gc() and Runtime.gc() methods do ?

These methods can be used as a hint to the JVM to start a garbage collection. However, it is up to the Java Virtual Machine (JVM) to start the garbage collection immediately or later in time.

37. When is finalize() called ? What is the purpose of finalization ?

The finalize method is called by the garbage collector, just before releasing the object’s memory.
It is normally advised to release resources held by the object inside the finalize method.

38. If an object reference is set to null, will the Garbage Collector immediately free the memory held by that object ?

No, the object will become available for garbage collection in the next cycle of the garbage collector.

39. What is the structure of the Java Heap ? What is Perm Gen space in the Heap ?

The JVM has a heap that is the runtime data area from which memory for all class instances and arrays is allocated. It is created at JVM start-up. Heap memory for objects is reclaimed by an automatic memory management system known as a garbage collector. Heap memory consists of live and dead objects. Live objects are accessible by the application and are not subject to garbage collection. Dead objects are those that will never be accessible by the application, but have not been collected by the garbage collector yet. Such objects occupy the heap memory space until they are eventually collected by the garbage collector.

40. What is the difference between the Serial and Throughput Garbage collectors ?

The throughput garbage collector uses a parallel version of the young generation collector and is meant to be used with applications that have medium to large data sets. On the other hand, the serial collector is usually adequate for most small applications (those requiring heaps of up to approximately 100MB on modern processors).

41. When does an Object become eligible for Garbage collection in Java ?

A Java object is subject to garbage collection when it becomes unreachable by the program in which it is currently used.

42. Does Garbage collection occur in the permanent generation space of the JVM ?

Garbage collection does occur in PermGen space, and if PermGen space is full or crosses a threshold, it can trigger a full garbage collection. If you look carefully at the output of the garbage collector, you will find that PermGen space is also garbage collected.
This is the reason why correct sizing of PermGen space is important to avoid frequent full garbage collections. Also check our article Java 8: PermGen to Metaspace.

Exception Handling

43. What are the two types of Exceptions in Java ? What are the differences between them ?

Java has two types of exceptions: checked exceptions and unchecked exceptions. Unchecked exceptions do not need to be declared in a method’s or constructor’s throws clause, even if they can be thrown by the execution of the method or the constructor and propagate outside the method or constructor boundary. On the other hand, checked exceptions must be declared in a method’s or constructor’s throws clause. See here for tips on Java exception handling.

44. What is the difference between Exception and Error in Java ?

The Exception and Error classes are both subclasses of the Throwable class. The Exception class is used for exceptional conditions that a user’s program should catch. The Error class defines exceptions that are not expected to be caught by the user program.

45. What is the difference between throw and throws ?

The throw keyword is used to explicitly raise an exception within the program. On the contrary, the throws clause is used to indicate those exceptions that are not handled by a method. Each method must explicitly specify which exceptions it does not handle, so the callers of that method can guard against possible exceptions. Finally, multiple exceptions are separated by commas.

45. What is the importance of the finally block in exception handling ?

A finally block will always be executed, whether or not an exception is actually thrown. Even in the case where the catch statement is missing and an exception is thrown, the finally block will still be executed. The last thing to mention is that the finally block is used to release resources, like I/O buffers, database connections, etc.

46. What will happen to the Exception object after exception handling ?
The Exception object will be garbage collected in the next garbage collection.

47. How does the finally block differ from the finalize() method ?

A finally block will be executed whether or not an exception is thrown and is used to release the resources held by the application. finalize is a protected method of the Object class, which is called by the Java Virtual Machine (JVM) just before an object is garbage collected.

Java Applets

48. What is an Applet ?

A Java applet is a program that can be included in an HTML page and executed in a Java-enabled client browser. Applets are used for creating dynamic and interactive web applications.

49. Explain the life cycle of an Applet.

An applet may undergo the following states:
- Init: an applet is initialized each time it is loaded.
- Start: begins the execution of an applet.
- Stop: stops the execution of an applet.
- Destroy: performs a final cleanup, before unloading the applet.

50. What happens when an applet is loaded ?

First of all, an instance of the applet’s controlling class is created. Then, the applet initializes itself and finally, it starts running.

51. What is the difference between an Applet and a Java Application ?

Applets are executed within a Java-enabled browser, but a Java application is a standalone Java program that can be executed outside of a browser. However, they both require the existence of a Java Virtual Machine (JVM). Furthermore, a Java application requires a main method with a specific signature, in order to start its execution. Java applets don’t need such a method to start their execution. Finally, Java applets typically use a restrictive security policy, while Java applications usually use more relaxed security policies.

52. What are the restrictions imposed on Java applets ?

Mostly due to security reasons, the following restrictions are imposed on Java applets:
- An applet cannot load libraries or define native methods.
- An applet cannot ordinarily read or write files on the execution host.
- An applet cannot read certain system properties.
- An applet cannot make network connections, except to the host that it came from.
- An applet cannot start any program on the host that’s executing it.

53. What are untrusted applets ?

Untrusted applets are those Java applets that cannot access or execute local system files. By default, all downloaded applets are considered untrusted.

54. What is the difference between applets loaded over the internet and applets loaded via the file system ?

When an applet is loaded over the internet, it is loaded by the applet classloader and is subject to the restrictions enforced by the applet security manager. When an applet is loaded from the client’s local disk, it is loaded by the file system loader. Applets loaded via the file system are allowed to read files, write files and load libraries on the client. Also, applets loaded via the file system are allowed to execute processes and, finally, are not passed through the bytecode verifier.

55. What is the applet class loader, and what does it provide ?

When an applet is loaded over the internet, it is loaded by the applet classloader. The class loader enforces the Java name space hierarchy. Also, the class loader guarantees that a unique namespace exists for classes that come from the local file system, and that a unique namespace exists for each network source. When a browser loads an applet over the net, that applet’s classes are placed in a private namespace associated with the applet’s origin. Then, the classes loaded by the class loader are passed through the verifier. The verifier checks that the class file conforms to the Java language specification. Among other things, the verifier ensures that there are no stack overflows or underflows and that the parameters to all bytecode instructions are correct.

56. What is the applet security manager, and what does it provide ?
The applet security manager is the mechanism that imposes restrictions on Java applets. A browser may only have one security manager. The security manager is established at startup, and it cannot thereafter be replaced, overloaded, overridden, or extended.

Swing

57. What is the difference between a Choice and a List ?

A Choice is displayed in a compact form that must be pulled down, in order for a user to be able to see the list of all available choices. Only one item may be selected from a Choice. A List may be displayed in such a way that several List items are visible. A List supports the selection of one or more List items.

58. What is a layout manager ?

A layout manager is used to organize the components in a container.

59. What is the difference between a Scrollbar and a JScrollPane ?

A Scrollbar is a Component, but not a Container. A JScrollPane is a Container. A JScrollPane handles its own events and performs its own scrolling.

60. Which Swing methods are thread-safe ?

There are only three thread-safe methods: repaint, revalidate, and invalidate.

61. Name some Component subclasses that support painting.

The Canvas, Frame, Panel, and Applet classes support painting.

62. What is clipping ?

Clipping is defined as the process of confining paint operations to a limited area or shape.

63. What is the difference between a MenuItem and a CheckboxMenuItem ?

The CheckboxMenuItem class extends the MenuItem class and supports a menu item that may be either checked or unchecked.

64. How are the elements of a BorderLayout organized ?

The elements of a BorderLayout are organized at the borders (North, South, East, and West) and the center of a container.

65. How are the elements of a GridBagLayout organized ?

The elements of a GridBagLayout are organized according to a grid. The elements may be of different sizes and may occupy more than one row or column of the grid. Thus, the rows and columns may have different sizes.

66. What is the difference between a Window and a Frame ?
The Frame class extends the Window class and defines a main application window that can have a menu bar. 67. What is the relationship between clipping and repainting ? When a window is repainted by the AWT painting thread, it sets the clipping regions to the area of the window that requires repainting. 68. What is the relationship between an event-listener interface and an event-adapter class ? An event-listener interface defines the methods that must be implemented by an event handler for a particular event. An event adapter provides a default implementation of an event-listener interface. 69. How can a GUI component handle its own events ? A GUI component can handle its own events, by implementing the corresponding event-listener interface and adding itself as its own event listener. 70. What advantage do Java's layout managers provide over traditional windowing systems ? Java uses layout managers to lay out components in a consistent manner, across all windowing platforms. Since layout managers aren't tied to absolute sizing and positioning, they are able to accommodate platform-specific differences among windowing systems. 71. What is the design pattern that Java uses for all Swing components ? The design pattern used by Java for all Swing components is the Model View Controller (MVC) pattern. JDBC 72. What is JDBC ? JDBC is an abstraction layer that allows users to choose between databases. JDBC enables developers to write database applications in Java, without having to concern themselves with the underlying details of a particular database. 73. Explain the role of Driver in JDBC. The JDBC Driver provides vendor-specific implementations of the interfaces defined by the JDBC API. Each driver must provide implementations for the following interfaces of the java.sql package: Connection, Statement, PreparedStatement, CallableStatement, ResultSet and Driver. 74. What is the purpose of the Class.forName method ?
This method is used to load the driver that will establish a connection to the database. 75. What is the advantage of PreparedStatement over Statement ? PreparedStatements are precompiled and thus, their performance is much better. Also, PreparedStatement objects can be reused with different input values to their queries. 76. What is the use of CallableStatement ? Name the method, which is used to prepare a CallableStatement. A CallableStatement is used to execute stored procedures. Stored procedures are stored and offered by a database. Stored procedures may take input values from the user and may return a result. The usage of stored procedures is highly encouraged, because it offers security and modularity. The method that prepares a CallableStatement is the following: Connection.prepareCall(). 77. What does Connection pooling mean ? The interaction with a database can be costly, regarding the opening and closing of database connections. Especially when the number of database clients increases, this cost is very high and a large number of resources is consumed. For this reason, a pool of database connections is obtained at startup by the application server and is maintained. A request for a connection is served by a connection residing in the pool. When the client is finished with a connection, it is returned to the pool and can be used to satisfy future requests. Remote Method Invocation (RMI) 78. What is RMI ? The Java Remote Method Invocation (Java RMI) is a Java API that performs the object-oriented equivalent of remote procedure calls (RPC), with support for direct transfer of serialized Java classes and distributed garbage collection. Remote Method Invocation (RMI) can also be seen as the process of activating a method on a remotely running object. RMI offers location transparency because a user feels that a method is executed on a locally running object. Check some RMI Tips here. 79. What is the basic principle of RMI architecture ?
The RMI architecture is based on a very important principle which states that the definition of the behavior and the implementation of that behavior are separate concepts. RMI allows the code that defines the behavior and the code that implements the behavior to remain separate and to run on separate JVMs. 80. What are the layers of RMI Architecture ? The RMI architecture consists of the following layers: Stub and Skeleton layer: This layer lies just beneath the view of the developer. This layer is responsible for intercepting method calls made by the client to the interface and redirecting these calls to a remote RMI service. Remote Reference Layer: The second layer of the RMI architecture deals with the interpretation of references made from the client to the server's remote objects. This layer interprets and manages references made from clients to the remote service objects. The connection is a one-to-one (unicast) link. Transport layer: This layer is responsible for connecting the two JVMs participating in the service. This layer is based on TCP/IP connections between machines in a network. It provides basic connectivity, as well as some firewall penetration strategies. 81. What is the role of Remote Interface in RMI ? The Remote interface serves to identify interfaces whose methods may be invoked from a non-local virtual machine. Any object that is a remote object must directly or indirectly implement this interface. A class that implements a remote interface should declare the remote interfaces being implemented, define the constructor for each remote object and provide an implementation for each remote method in all remote interfaces. 82. What is the role of the java.rmi.Naming Class ? The java.rmi.Naming class provides methods for storing and obtaining references to remote objects in the remote object registry. Each method of the Naming class takes as one of its arguments a name that is a String in URL format. 83. What is meant by binding in RMI ?
Binding is the process of associating or registering a name for a remote object, which can be used at a later time, in order to look up that remote object. A remote object can be associated with a name using the bind or rebind methods of the Naming class. 84. What is the difference between using bind() and rebind() methods of Naming Class ? The bind method is responsible for binding the specified name to a remote object, while the rebind method is responsible for rebinding the specified name to a new remote object. In case a binding already exists for that name, the binding is replaced. 85. What steps are involved in making an RMI program work ? The following steps must be taken in order for an RMI program to work properly: compile all source files, generate the stubs using rmic, start the rmiregistry, start the RMI server, and run the client program. 86. What is the role of stub in RMI ? A stub for a remote object acts as a client's local representative or proxy for the remote object. The caller invokes a method on the local stub, which is responsible for executing the method on the remote object. When a stub's method is invoked, it undergoes the following steps: It initiates a connection to the remote JVM containing the remote object. It marshals the parameters to the remote JVM. It waits for the result of the method invocation and execution. It unmarshals the return value, or an exception if the method has not been successfully executed. It returns the value to the caller. 87. What is DGC ? And how does it work ? DGC stands for Distributed Garbage Collection. Remote Method Invocation (RMI) uses DGC for automatic garbage collection. Since RMI involves remote object references across JVMs, garbage collection can be quite difficult. DGC uses a reference counting algorithm to provide automatic memory management for remote objects. 88. What is the purpose of using RMISecurityManager in RMI ?
RMISecurityManager provides a security manager that can be used by RMI applications, which use downloaded code. The class loader of RMI will not download any classes from remote locations, if the security manager has not been set. 89. Explain Marshalling and demarshalling. When an application wants to pass its memory objects across a network to another host, or persist them to storage, the in-memory representation must be converted to a suitable format. This process is called marshalling and the reverse operation is called demarshalling. 90. Explain Serialization and Deserialization. Java provides a mechanism, called object serialization, where an object can be represented as a sequence of bytes that includes the object's data, as well as information about the object's type and the types of data stored in the object. Thus, serialization can be seen as a way of flattening objects, in order to be stored on disk, and later, read back and reconstituted. Deserialization is the reverse process of converting an object from its flattened state to a live object. Servlets 91. What is a Servlet ? A servlet is a Java programming language class used to process client requests and generate dynamic web content. Servlets are mostly used to process or store data submitted by an HTML form, provide dynamic content and manage state information that does not exist in the stateless HTTP protocol. 92. Explain the architecture of a Servlet. The core abstraction that must be implemented by all servlets is the javax.servlet.Servlet interface. Each servlet must implement it either directly or indirectly, by extending javax.servlet.GenericServlet or javax.servlet.http.HttpServlet. Finally, each servlet is able to serve multiple requests in parallel using multithreading. 93. What is the difference between an Applet and a Servlet ? An Applet is a client side Java program that runs within a Web browser on the client machine.
On the other hand, a servlet is a server side component that runs on the web server. An applet can use the user interface classes, while a servlet does not have a user interface. Instead, a servlet waits for clients' HTTP requests and generates a response for every request. 94. What is the difference between GenericServlet and HttpServlet ? GenericServlet is a generalized and protocol-independent servlet that implements the Servlet and ServletConfig interfaces. Servlets extending the GenericServlet class shall override the service method. Finally, in order to develop an HTTP servlet for use on the Web that serves requests using the HTTP protocol, your servlet must extend HttpServlet instead. Check Servlet examples here. 95. Explain the life cycle of a Servlet. On the first client request, the Servlet Engine loads the servlet class and invokes its init method, in order for the servlet to be initialized. Then, the servlet object handles all subsequent requests, by invoking the service method for each request separately. Finally, the servlet is removed by calling its destroy method. 96. What is the difference between doGet() and doPost() ? doGet: The GET method appends the name-value pairs to the request's URL. Thus, there is a limit on the number of characters and subsequently on the number of values that can be used in a client's request. Furthermore, the values of the request are made visible and thus, sensitive information must not be passed in that way. doPost: The POST method overcomes the limit imposed by the GET request, by sending the values of the request inside its body. Also, there are no limitations on the number of values to be sent across. Finally, sensitive information passed through a POST request is not visible to an external client. 97. What is meant by a Web Application ? A Web application is a dynamic extension of a Web or application server.
There are two types of web applications: presentation-oriented and service-oriented. A presentation-oriented Web application generates interactive web pages, which contain various types of markup language and dynamic content in response to requests. On the other hand, a service-oriented web application implements the endpoint of a web service. In general, a Web application can be seen as a collection of servlets installed under a specific subset of the server's URL namespace. 98. What is a Server Side Include (SSI) ? Server Side Includes (SSI) is a simple interpreted server-side scripting language, used almost exclusively for the Web, and is embedded with a servlet tag. The most frequent use of SSI is to include the contents of one or more files into a Web page on a Web server. When a Web page is accessed by a browser, the Web server replaces the servlet tag in that Web page with the hypertext generated by the corresponding servlet. 99. What is Servlet Chaining ? Servlet Chaining is the method where the output of one servlet is sent to a second servlet. The output of the second servlet can be sent to a third servlet, and so on. The last servlet in the chain is responsible for sending the response to the client. 100. How do you find out what client machine is making a request to your servlet ? The ServletRequest class has functions for finding out the IP address or host name of the client machine. getRemoteAddr() gets the IP address of the client machine and getRemoteHost() gets the host name of the client machine. See example here. 101. What is the structure of the HTTP response ? The HTTP response consists of three parts: Status Code: describes the status of the response. It can be used to check if the request has been successfully completed. In case the request failed, the status code can be used to find out the reason behind the failure. If your servlet does not return a status code, the success status code, HttpServletResponse.SC_OK, is returned by default.
HTTP Headers: they contain more information about the response. For example, the headers may specify the date/time after which the response is considered stale, or the form of encoding used to safely transfer the entity to the user. See how to retrieve headers in Servlet here. Body: it contains the content of the response. The body may contain HTML code, an image, etc. The body consists of the data bytes transmitted in an HTTP transaction message immediately following the headers. 102. What is a cookie ? What is the difference between session and cookie ? A cookie is a bit of information that the Web server sends to the browser. The browser stores the cookies for each Web server in a local file. In a future request, the browser, along with the request, sends all stored cookies for that specific Web server. The differences between a session and a cookie are the following: The session should work regardless of the settings on the client browser. The client may have chosen to disable cookies. However, sessions still work, as the client has no ability to disable them on the server side. The session and cookies also differ in the amount of information they can store. The HTTP session is capable of storing any Java object, while a cookie can only store String objects. 103. Which protocol will be used by browser and servlet to communicate ? The browser communicates with a servlet by using the HTTP protocol. 104. What is HTTP Tunneling ? HTTP Tunneling is a technique by which communications performed using various network protocols are encapsulated using the HTTP or HTTPS protocols. The HTTP protocol therefore acts as a wrapper for a channel that the network protocol being tunneled uses to communicate. The masking of other protocol requests as HTTP requests is HTTP Tunneling. 105. What's the difference between sendRedirect and forward methods ? The sendRedirect method creates a new request, while the forward method just forwards a request to a new target.
The previous request scope objects are not available after a redirect, because it results in a new request. On the other hand, the previous request scope objects are available after forwarding. Finally, in general, the sendRedirect method is considered to be slower compared to the forward method. 106. What is URL Encoding and URL Decoding ? The URL encoding procedure is responsible for replacing spaces and other special characters of a URL with their corresponding hex representation. Correspondingly, URL decoding is the exact opposite procedure. JSP 107. What is a JSP Page ? A Java Server Page (JSP) is a text document that contains two types of text: static data and JSP elements. Static data can be expressed in any text-based format, such as HTML or XML. JSP is a technology that mixes static content with dynamically-generated content. See JSP example here. 108. How are JSP requests handled ? On the arrival of a JSP request, the browser first requests a page with a .jsp extension. Then, the Web server reads the request and, using the JSP compiler, converts the JSP page into a servlet class. Notice that the JSP file is compiled only on the first request of the page, or if the JSP file has changed. The generated servlet class is then invoked, in order to handle the browser's request. Once the execution of the request is over, the servlet sends a response back to the client. See how to get Request parameters in a JSP. 109. What are the advantages of JSP ? The advantages of using the JSP technology are shown below: JSP pages are dynamically compiled into servlets and thus, the developers can easily make updates to presentation code. JSP pages can be pre-compiled. JSP pages can be easily combined with static templates, including HTML or XML fragments, with code that generates dynamic content. Developers can offer customized JSP tag libraries that page authors access using an XML-like syntax.
Developers can make logic changes at the component level, without editing the individual pages that use the application's logic. 110. What are Directives ? What are the different types of Directives available in JSP ? Directives are instructions that are processed by the JSP engine, when the page is compiled to a servlet. Directives are used to set page-level instructions, insert data from external files, and specify custom tag libraries. Directives are defined between <%@ and %>. The different types of directives are shown below: Include directive: it is used to include a file and merges the content of the file with the current page. Page directive: it is used to define specific attributes in the JSP page, like error page and buffer. Taglib: it is used to declare a custom tag library which is used in the page. 111. What are JSP actions ? JSP actions use constructs in XML syntax to control the behavior of the servlet engine. JSP actions are executed when a JSP page is requested. They can be dynamically inserted into a file, re-use JavaBeans components, forward the user to another page, or generate HTML for the Java plugin. Some of the available actions are listed below: jsp:include - includes a file, when the JSP page is requested. jsp:useBean - finds or instantiates a JavaBean. jsp:setProperty - sets the property of a JavaBean. jsp:getProperty - gets the property of a JavaBean. jsp:forward - forwards the requester to a new page. jsp:plugin - generates browser-specific code. 112. What are Scriptlets ? In Java Server Pages (JSP) technology, a scriptlet is a piece of Java code embedded in a JSP page. The scriptlet is everything between the <% and %> tags. Between these tags, a user can add any valid scriptlet. 113. What are Declarations ? Declarations are similar to variable declarations in Java. Declarations are used to declare variables for subsequent use in expressions or scriptlets. To add a declaration, you must use the <%! and %> sequences to enclose your declarations. 114.
What are Expressions ? A JSP expression is used to insert the value of a scripting language expression, converted into a string, into the data stream returned to the client by the web server. Expressions are defined between <%= and %> tags. 115. What is meant by implicit objects and what are they ? JSP implicit objects are those Java objects that the JSP Container makes available to developers in each page. A developer can call them directly, without declaring them explicitly. JSP Implicit Objects are also called pre-defined variables. The following objects are considered implicit in a JSP page: application, page, request, response, session, exception, out, config and pageContext. Still with us? Wow, that was a huge article about different types of questions that can be used in a Java interview. If you enjoyed this, then subscribe to our newsletter to enjoy weekly updates and complimentary whitepapers! So, what other Java interview questions are there? Let us know in the comments and we will include them in the article! Happy coding! ...

Easter Hack: Even More Critical Bugs in SSL/TLS Implementations

It’s been some time since my last blog post – time for writing is rare. But today, I’m very happy that Oracle released the brand new April Critical Patch Update, fixing 37 vulnerabilities in our beloved Java (seriously, no kidding – Java is simply a great language!). With that being said, all vulnerabilities reported by my colleagues (credits go to Juraj Somorovsky, Sebastian Schinzel, Erik Tews, Eugen Weiss, Tibor Jager and Jörg Schwenk) and me are fixed, and I highly recommend patching as soon as possible if you are running a server powered by JSSE! Additional results on crypto hardware suffering from vulnerable firmware are omitted at this moment, because the patches aren’t available yet – details will follow when the fixes are ready. To keep this blog post as short as possible I will skip a lot of details, analysis and prerequisites you need to know to understand the attacks mentioned in this post. If you are interested, use the link at the end of this post to get a much more detailed report. Resurrecting Fixed Attacks Do you remember Bleichenbacher’s clever million question attack on SSL from 1998? It was believed to be fixed with the following countermeasure specified in the TLS 1.0 RFC: “The best way to avoid vulnerability to this attack is to treat incorrectly formatted messages in a manner indistinguishable from correctly formatted RSA blocks. Thus, when it receives an incorrectly formatted RSA block, a server should generate a random 48-byte value and proceed using it as the premaster secret. Thus, the server will act identically whether the received RSA block is correctly encoded or not.” – Source: RFC 2246 In simple words, the server is advised to create a random PreMasterSecret in case of problems during processing of the received, encrypted PreMasterSecret (structure violations, decryption errors, etc.). The server must continue the handshake with the randomly generated PreMasterSecret and perform all subsequent computations with this value.
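The countermeasure can be sketched in a few lines of Java. This is a deliberately simplified, hypothetical model – the `unpad` routine and the block layout are stand-ins for illustration, not the actual JSSE code – assuming a 48-byte PreMasterSecret at the end of a PKCS#1 v1.5-style block:

```java
import java.security.SecureRandom;
import java.util.Arrays;

public class PkcsCountermeasure {
    private static final SecureRandom RNG = new SecureRandom();

    // Stand-in for PKCS#1 v1.5 unpadding: expects 00 02 <nonzero padding> 00 <48-byte PMS>.
    static byte[] unpad(byte[] block) {
        if (block.length < 51 || block[0] != 0x00 || block[1] != 0x02) {
            throw new IllegalArgumentException("bad PKCS#1 structure");
        }
        int i = 2;
        while (i < block.length && block[i] != 0x00) i++;   // find the 00 separator
        if (i != block.length - 49) {                       // must leave exactly 48 bytes
            throw new IllegalArgumentException("bad padding");
        }
        return Arrays.copyOfRange(block, i + 1, block.length);
    }

    // The RFC 2246 countermeasure: never let a decoding failure escape;
    // continue the handshake with a random 48-byte PreMasterSecret instead.
    static byte[] premasterSecret(byte[] decryptedBlock) {
        try {
            return unpad(decryptedBlock);
        } catch (RuntimeException e) {
            byte[] random = new byte[48];
            RNG.nextBytes(random);
            return random;
        }
    }
}
```

The crucial property is that the caller cannot tell from the return value (or from any alert) whether unpadding succeeded – the handshake simply fails later at the Finished check.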
This leads to a fatal Alert when checking the Finished message (because of different key material at client- and server-side), but it does not allow the attacker to distinguish valid from invalid (PKCS#1 v1.5 compliant and non-compliant) ciphertexts. In theory, an attacker gains no additional information on the ciphertext if this countermeasure is applied (correctly). Guess what? The fix itself can introduce problems: Different processing times caused by different code branches in the valid and invalid cases. What happens if we can trigger Exceptions in the code responsible for branching? If we could trigger different Exceptions, how would that influence the timing behaviour? Let’s have a look at the second case first, because it is the easiest one to explain if you are familiar with Bleichenbacher’s attack: Exploiting PKCS#1 Processing in JSSE A coding error in the com.sun.crypto.provider.TlsPrfGenerator (missing array length check and incorrect decoding) could be used to force an ArrayIndexOutOfBoundsException during PKCS#1 processing. The Exception finally led to a general error in the JSSE stack, which is communicated to the client in the form of an INTERNAL_ERROR SSL/TLS alert message. What can we learn from this? The alert message is only sent if we are already inside the PKCS#1 decoding code blocks! With this side channel Bleichenbacher’s attack can be mounted again: An INTERNAL_ERROR alert message suggests a PKCS#1 structure that was recognized as such, but contained an error – any other alert message was caused by the different processing branch (the countermeasure against this attack). The side channel is only triggered if the PKCS#1 structure contains a specific structure. This structure is shown below. If a 00 byte is contained in any of the red marked positions the side channel will help us to recognize these ciphertexts. We tested our resurrected Bleichenbacher attack and were able to get the decrypted PreMasterSecret back.
This took about 5h and 460000 queries to the target server for a 2048 bit key. Sounds like a lot? No problem… Using the newest, high-performance adaptation of the attack (many thanks to Graham Steel for the very helpful discussions!) resulted in only about 73710 queries on average for a 4096 bit RSA key! This time JSSE was successfully exploited once. But let’s have a look at a much more complicated scenario. No obvious presence of a side channel at all :-( Maybe we can use the first case… Secret Depending Processing Branches Lead to Timing Side Channels A conspicuous detail with respect to the random PreMasterSecret generation (you remember, the Bleichenbacher countermeasure) was already obvious during the code analysis of JSSE for the previous attack: The random PreMasterSecret was only generated if problems occurred during PKCS#1 decoding. Otherwise, no random bytes were generated (sun.security.ssl.Handshaker.calculateMasterSecret(…)). The question is, how time consuming is the generation of a random PreMasterSecret? Well, it depends, and there is no definitive answer to this question. Measuring time for valid and invalid ciphertexts revealed blurred results. But at least, having different branches with different processing times introduces the chance of a timing side channel. This is why OpenSSL was independently patched during our research to guarantee equal processing times for both valid and invalid ciphertexts. Risks of Modern Software Design To make a long story short, it turned out that it was not the random number generation that caused the timing side channel, but the concept of creating and handling Exceptions. Throwing and catching Exceptions is a very expensive task with regard to the consumption of processing time. Unfortunately, the Java code responsible for PKCS#1 decoding (sun.security.rsa.RSAPadding.unpadV15(…)) was written with the best intentions from a developer’s point of view. It throws Exceptions if errors occur during PKCS#1 decoding.
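Just how expensive a thrown Exception is can be illustrated with a small, hypothetical micro-benchmark (all names are made up for illustration; this is not the JSSE code): the same "invalid input" condition is signalled once via a return value and once via a thrown Exception, and both variants are timed.

```java
public class ExceptionCost {
    // Error signalled by a sentinel return value: a cheap branch.
    static int branchCheck(int v) {
        return (v == 0) ? -1 : v;
    }

    // Error signalled by an Exception: throw + fillInStackTrace + catch.
    static int throwingCheck(int v) {
        if (v == 0) throw new IllegalArgumentException("invalid input");
        return v;
    }

    static long timeBranch(int iterations) {
        long start = System.nanoTime();
        int sink = 0;
        for (int i = 0; i < iterations; i++) sink += branchCheck(0);
        if (sink == 42) System.out.println(sink);   // keep the JIT from removing the loop
        return System.nanoTime() - start;
    }

    static long timeThrow(int iterations) {
        long start = System.nanoTime();
        int sink = 0;
        for (int i = 0; i < iterations; i++) {
            try {
                sink += throwingCheck(0);
            } catch (IllegalArgumentException e) {
                sink--;                             // the expensive path: throw and catch
            }
        }
        if (sink == 42) System.out.println(sink);
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        int n = 100_000;
        System.out.printf("branch: %d ns, throw: %d ns%n", timeBranch(n), timeThrow(n));
    }
}
```

On typical JVMs the throwing variant is slower by orders of magnitude, mostly because each `new` Exception captures a stack trace – exactly the kind of measurable difference a remote attacker can exploit when one branch throws and the other does not.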
Time measurements revealed significant differences in the response time of a server when confronted with valid/invalid PKCS#1 structures. These differences could even be measured in a live environment (university network) with a lot of traffic and noise on the line. Again, how is this useful? It’s always the same – once you know that the ciphertext reached the PKCS#1 decoding branch, you know it was recognized as PKCS#1 and thus represents a useful and valid side channel for Bleichenbacher’s attack. The attack on an OpenJDK 1.6 powered server took about 19.5h and 18600 oracle queries in our live setup! JSSE was hit the second time…. OAEP Comes To The Rescue Some of you might say “Switch to OAEP and all of your problems are gone….”. I agree, partly. OAEP will indeed fix a lot of security problems (but definitely not all!), but only if implemented correctly. Manger told us that implementing OAEP the wrong way could have disastrous results. While looking at the OAEP decoding code in sun.security.rsa.RSAPadding it turned out that the code contained a behaviour similar to the one described by Manger as problematic. This could have led to another side channel if SSL/TLS already offered OAEP support…. All the vulnerabilities mentioned in this post are fixed, but others are in line to follow… We submitted a research paper which will explain the vulnerabilities mentioned here in more depth, and the unpublished ones as well, so stay tuned – there’s more to come. Many thanks to my fellow researchers Juraj Somorovsky, Sebastian Schinzel, Erik Tews, Eugen Weiss, Tibor Jager and Jörg Schwenk – all of our findings wouldn’t have been possible without everyone’s special contribution. It needs a skilled team to turn theoretical attacks into practice!
A more detailed analysis of all vulnerabilities listed here, as well as a lot more on SSL/TLS security, can be found in my PhD thesis: 20 Years of SSL/TLS Research: An Analysis of the Internet’s Security Foundation. Reference: Easter Hack: Even More Critical Bugs in SSL/TLS Implementations from our JCG partner Christopher Meyer at the Java security and related topics blog....

Grails Goodness: Extending IntegrateWith Command

We can extend the integrate-with command in Grails to generate files for a custom IDE or build system. We must add a _Events.groovy file to our Grails project and then write an implementation for the eventIntegrateWithStart event. Inside the event we must define a new closure with our code to generate files. The name of the closure must have the following pattern: binding.integrateCustomIdentifier. The value for CustomIdentifier can be used as an argument for the integrate-with command. Suppose we want to extend integrate-with to generate a simple Sublime Text project file. First we create a template Sublime Text project file where we define folders for a Grails application. We create the folder src/ide-support/sublimetext and add the file grailsProject.sublime-project with the following contents: { "folders": [ { "name": "Domain classes", "path": "grails-app/domain" }, { "name": "Controllers", "path": "grails-app/controllers" }, { "name": "Taglibs", "path": "grails-app/taglib" }, { "name": "Views", "path": "grails-app/views" }, { "name": "Services", "path": "grails-app/services" }, { "name": "Configuration", "path": "grails-app/conf" }, { "name": "grails-app/i18n", "path": "grails-app/i18n" }, { "name": "grails-app/utils", "path": "grails-app/utils" }, { "name": "grails-app/migrations", "path": "grails-app/migrations" }, { "name": "web-app", "path": "web-app" }, { "name": "Scripts", "path": "scripts" }, { "name": "Sources:groovy", "path": "src/groovy" }, { "name": "Sources:java", "path": "src/java" }, { "name": "Tests:integration", "path": "test/integration" }, { "name": "Tests:unit", "path": "test/unit" }, { "name": "All files", "follow_symlinks": true, "path": "." } ] } Next we create the file scripts/_Events.groovy: includeTargets << grailsScript("_GrailsInit") eventIntegrateWithStart = { // Usage: integrate-with --sublimeText binding.integrateSublimeText = { // Copy template file.
ant.copy(todir: basedir) { fileset(dir: "src/ide-support/sublimetext/") }// Move template file to real project file with name of Grails application. ant.move(file: "$basedir/grailsProject.sublime-project", tofile: "$basedir/${grailsAppName}.sublime-project", overwrite: true)grailsConsole.updateStatus "Created SublimeText project file" } } We are done and can now run the integrate-with command with the new argument sublimeText: $ grails integrate-with --sublimeText | Created SublimeText project file. $ If we open the project in Sublime Text we see our folder structure for a Grails application:Code written with Grails 2.3.7.Reference: Grails Goodness: Extending IntegrateWith Command from our JCG partner Hubert Ikkink at the JDriven blog....

Agile – What’s a Manager to Do?

As a manager, when I first started learning about Agile development, I was confused by the fuzzy way that Agile teams and projects are managed (or manage themselves), and frustrated and disappointed by the negative attitude towards managers and management in general. Attempts to reconcile project management and Agile haven’t answered these concerns. The PMI-ACP does a good job of making sure that you understand Agile principles and methods (mostly Scrum and XP with some Kanban and Lean), but is surprisingly vague about what an Agile project manager is or does. Even a book like the Software Project Manager’s Bridge to Agility, intended to help bridge PMI’s project management practices and Agile, fails to come up with a meaningful job for managers or project managers in an Agile world. In Scrum (which is what most people mean when they say Agile today), there is no place for project managers at all: responsibilities for management are spread across the Product Owner, the Scrum Master and the development team.We have found that the role of the project manager is counterproductive in complex, creative work. The project manager’s thinking, as represented by the project plan, constrains the creativity and intelligence of everyone else on the project to that of the plan, rather than engaging everyone’s intelligence to best solve the problems. In Scrum, we have removed the project manager. The Product Owner, or customer, provides just-in-time planning by telling the development team what is needed, as often as every month. The development team manages itself, turning as much of what the product owner wants into usable product as possible. The result is high productivity, creativity, and engaged customers. We have replaced the project manager with the Scrum Master, who manages the process and helps the project and organization transition to agile practices. 
Ken Schwaber, Agility and PMI, 2011

Project Managers have the choice of becoming a Scrum Master (if they can accept a servant leader role and learn to be an effective Agile coach – and if the team will accept them) or a Product Owner (if they have deep enough domain knowledge and other skills), or finding another job somewhere else. Project Manager as Product Owner The Product Owner is a command-and-control position responsible for the “what” part of a development project. It’s a big job. The Product Owner owns the definition of what needs to be built, decides what gets done and in what order, approves changes to scope and makes scope / schedule / cost trade-offs, and decides when work is done. The Product Owner manages and represents the business stakeholders, and makes sure that business needs are met. The Product Owner replaces the project manager as the person most responsible for the success of the project (“the one throat to choke”). But they don’t control the team’s work, or the technical details of who does the work or how. That’s decided by the team. Some project managers may have the domain knowledge and business experience, the analytical skills and the connections in the customer organization to meet the requirements of this role. But it’s also likely to be played by an absentee business manager or sponsor, backed up by a customer proxy, a business analyst or someone else on the team without real responsibility or authority in the organization, creating potentially serious project risks and management problems. Some organizations have tried to solve this by sharing the role across two people: a project manager and a business analyst, working together to handle all of the Product Owner’s responsibilities. 
Project Manager as Scrum Master It seems like the most natural path for a project manager is to become the team’s Scrum Master, although there is a lot of disagreement over whether a project manager can be effective – and accepted – as a Scrum Master, whether they will accept the changes in responsibilities and authority, and be willing to change how they work with the team and the rest of the organization. The Scrum Master is a “process owner” and coach, not a project manager. They help the team – and the Product Owner – understand how to work in an Agile process framework, what their roles and responsibilities are, set up and guide the meetings and reviews, and coach team members through change and conflict. The Scrum Master works as a servant leader, a (nice) process cop, a secretary and a gofer. Somebody who supports the team and the Product Owner, “carries food and water” for them, tries to protect them from the world outside of the project and helps them solve problems. But the Scrum Master has no direct authority over the project or the team and does not make decisions for them, because Agile teams are supposed to be self-directing, self-organizing and self-managing. Of course that’s not how things start off. Any group of people must work their way through Tuckman’s 4 stages of team development: Forming-Storming-Norming-Performing. It’s only when they reach the last stage that a group can effectively manage themselves. In the meantime, somebody (the Scrum Master / Coach) has to help the team make decisions that they aren’t ready to make on their own. It can take a long time for a team to reach this point, for people to learn to trust each other – and the organization – enough. And it may not last long, before something outside of the team’s control sets them back: a key person leaving or joining the team, a change in leadership, a shock to the project like a major change in direction or cuts to the budget. 
Then they need to be led back to a high performing state again. Coaching the team and helping them out can be a full-time job in the beginning. After the team has got together and learned the process? Not so much. Which is why the Scrum Master is sometimes played part-time by a developer or sometimes even rotated between people on the development team. But even when the team is performing at a high level, there’s more to managing an Agile project than setting up meetings, buying pizza and trying to stay out of the way. I’ve come to understand that Agile doesn’t make a manager’s job go away. If anything, it expands it. Managing Upfront First, there’s all of the work that has to be done upfront at the start of a project – before Iteration Zero. Identifying stakeholders. Securing the charter. Negotiating the project budget and contract terms. Understanding and navigating the organization’s bureaucracy. Figuring out governance and compliance requirements and constraints, what the PMO needs. Working with HR, line managers and functional managers to put the team together, finding and hiring good people, getting space for them to work in and the tools that they need to work with. Lining up partners and suppliers and contractors. Contracting and licensing and other legal stuff. The Product Owner might do some of this work – but they can’t do it all. Managing Up and Out Then there’s the work that needs to be managed outside of the team. Agile development is insular, insulated and inward-looking. The team is protected from the world outside so they can focus on building features together. But the world outside is too important to ignore. Every development project involves more than designing and building software – often much more than the work of development itself. 
Every project, even a small project, has dependencies and hand-offs that need to be coordinated with other teams in other places, with other projects, with specialists outside of the team, with customers and partners and suppliers. There is forward planning that needs to be done, setting and tracking drop-dead dates, defining and maintaining interfaces and integration points and landing zones. Agile teams move and respond to change quickly. These changes can have impacts outside of the team, on the customer, other teams and other projects, other parts of the organization, suppliers and partners. You can try using a Scrum of Scrums to coordinate with other Agile teams up to a point, but somebody still has to keep track of dependencies and changes and delays and orchestrate the hand-offs. Depending on the contracting model and your compliance or governance environment, formal change control may not go away either, at least not for material changes. Even if the Product Owner and the team are happy, somebody still has to take care of the paperwork to stay onside of regulatory traceability requirements and to stay within contract terms. There are a lot of people who need to know what’s going on in a project outside of the development team – especially in big projects in big organizations. Communicating outwards, to people outside of the team and outside of the company. Communicating upwards to management and sponsors, keeping them informed and keeping them onside. Task boards and burn downs and big visible charts on the wall might work fine for the team, but upper management and the PMO and other stakeholders need a lot more, they need to understand development status in the overall context of the project or program or business change initiative. And there’s cost management and procurement. Forecasting and tracking and managing costs, especially costs outside of development labor costs. Contracts and licensing need to be taken care of. Stuff needs to be bought. 
Bills need to be paid. Managing Risks Scrum done right (with XP engineering practices carefully sewn in) can be effective in containing many common software development risks: scope, schedule, requirements specification, technical risks. But there are other risks that still need to be managed, risks that come from outside of the team: program risks, political risks, partner risks and other logistical risks, integration risks, data quality risks, operational risks, security risks, financial risks, legal risks, strategic risks.

Scrum purposefully has many gaps, holes, and bare spots where you are required to use best practices – such as risk management.
Ken Schwaber

While the team and the Product Owner and Scrum Master are focused on prioritizing and delivering features and resolving technical issues, somebody has to look further out for risks, bring them up to the team, and manage the risks that aren’t under the team’s control. Managing the End Game And just like at the start of a project, when the project nears the end game, somebody needs to take care of final approvals and contractual acceptance, coordinate integration with other systems and with customers and partners, data setup and cleansing and conversion, documentation and training. Setting up the operations infrastructure, the facilities and hardware and connectivity, the people and processes and tools needed to run the system. Setting up a support capability. Packaging and deployment, roll out planning and roll back planning, the hand-off to the customer or to ops, community building and marketing and whatever else is required for a successful launch. Never mind helping make whatever changes are required to business workflows and business processes with the new system. Project Management doesn’t go away in Agile There are lots of management problems that need to be taken care of in any project. 
Agile spreads some management responsibilities around and down to the team, but doesn’t make management problems go away. Projects can’t scale, teams can’t succeed, unless somebody – a project manager or the PMO or someone else with the authority and skills required – takes care of them.Reference: Agile – What’s a Manager to Do? from our JCG partner Jim Bird at the Building Real Software blog....
javafx-logo

JavaFX Tip 3: Use Callback Interface

As a UI framework developer it is part of my job to provide ways to customize the appearance and behavior of my controls. In many cases this is done by allowing the framework user to register a factory on a control. In the past I would have created a factory interface for this and provided one or more default implementations within the framework. These things are done differently in JavaFX and I have started to embrace it for my own work. JavaFX uses a generic interface called javafx.util.Callback wherever a piece of code is needed that produces a result (R) for a given parameter (P). The interface looks like this: public interface Callback<P,R> { public R call(P param); } Advantages At first I didn’t like using this interface because my code was losing verbosity: I no longer had self-explaining interface names. But in the end I realized that the advantages outweigh the lack of verbosity. The advantages being:We end up writing less code. No specialized interface, no default implementations. The developer using the API does not have to remember different factories; instead he can focus on the object that he wants to create and the parameters that are available to him. The Callback interface is a functional interface. We can use lambda expressions, which makes the code more elegant and once again we have to write less code.Case Study The FlexGanttFX framework contains a control called Dateline for displaying (surprise) dates. Each date is shown in its own cell. The dateline can display different temporal units (ChronoUnit from java.time, and SimpleUnit from FlexGanttFX). A factory approach is used to build the cells based on the temporal unit shown. Before I was using the callback approach I had the following situation: an interface called DatelineCellFactory with exactly one method createDatelineCell(). I was providing two default implementations called ChronoUnitDatelineCellFactory and SimpleUnitDatelineCellFactory. 
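To make the pattern concrete outside of FlexGanttFX, here is a minimal, self-contained sketch of the same idea. The Callback interface is reproduced by hand so the snippet runs without JavaFX on the classpath, and the Dateline stand-in and cell names are hypothetical, not the real FlexGanttFX API:

```java
// Hand-rolled stand-in for javafx.util.Callback so this runs without JavaFX.
interface Callback<P, R> {
    R call(P param);
}

public class CallbackDemo {

    // Hypothetical control accepting a generic cell factory
    // instead of a dedicated DatelineCellFactory interface.
    static class Dateline {
        private Callback<String, String> cellFactory;

        void setCellFactory(Callback<String, String> factory) {
            this.cellFactory = factory;
        }

        String createCell(String unit) {
            // Delegate cell creation to the registered factory.
            return cellFactory.call(unit);
        }
    }

    public static void main(String[] args) {
        Dateline dateline = new Dateline();
        // Because Callback is a functional interface, a lambda replaces
        // a whole named factory class:
        dateline.setCellFactory(unit -> "Cell[" + unit + "]");
        System.out.println(dateline.createCell("DAYS")); // prints Cell[DAYS]
    }
}
```

The lambda registration above is exactly what makes the specialized factory interfaces and their default implementations unnecessary.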
By using Callback I was able to delete all three interfaces / classes and in the skin of the dateline I find the following two lines instead: dateline.setCellFactory(SimpleUnit.class, unit -> new SimpleUnitDatelineCell()); dateline.setCellFactory(ChronoUnit.class, unit -> new ChronoUnitDatelineCell());Two lines of code instead of three files! I think this example speaks for itself.Reference: JavaFX Tip 3: Use Callback Interface from our JCG partner Dirk Lemmermann at the Pixel Perfect blog....
java-logo

10 JDK 7 Features to Revisit, Before You Welcome Java 8

It’s been almost a month since Java 8 was released and I am sure all of you are exploring the new features of JDK 8. But before you completely delve into Java 8, it’s time to revisit some of the cool features introduced in Java 7. If you remember, Java 6 was nothing much on features; it was all about JVM changes and performance, but JDK 7 did introduce some cool features which improved a developer’s day-to-day tasks. Why am I writing this post now? Why am I talking about Java 1.7 when everybody is talking about Java 8? Well, I think not all Java developers are familiar with the changes introduced in JDK 7, and what time can be better to revisit the earlier version than before welcoming a new version. I don’t see automatic resource management used by developers in daily life, even after IDEs have got content assist for it. Though I see programmers using String in switch and the diamond operator for type inference, again there is very little known about the fork/join framework, catching multiple exceptions in one catch block, or using underscores in numeric literals. So I took this opportunity to write a summary sort of post to revise these convenient changes and adopt them into our daily programming life. There are a couple of good changes in NIO and the new File API, and lots of others at the API level, which are also worth looking at. I am sure that, combined with Java 8 lambda expressions, these features will result in much better and cleaner code.Type inference JDK 1.7 introduced a new operator <>, known as the diamond operator, making type inference available for constructors as well. Prior to Java 7, type inference was only available for methods, and Joshua Bloch rightly predicted in Effective Java 2nd Edition that it would become available for constructors as well. Prior to JDK 7, you typed more to specify types on both the left and right hand sides of an object creation expression, but now it is only needed on the left hand side, as shown in the below example. 
Prior to JDK 7 Map<String, List<String>> employeeRecords = new HashMap<String, List<String>>(); List<Integer> primes = new ArrayList<Integer>(); In JDK 7 Map<String, List<String>> employeeRecords = new HashMap<>(); List<Integer> primes = new ArrayList<>(); So you have to type less in Java 7 while working with Collections, where we heavily use generics. See here for more detailed information on the diamond operator in Java.String in Switch Before JDK 7, only integral types could be used as the selector for a switch-case statement. In JDK 7, you can use a String object as the selector. For example, String state = "NEW"; switch (state) { case "NEW": System.out.println("Order is in NEW state"); break; case "CANCELED": System.out.println("Order is Cancelled"); break; case "REPLACE": System.out.println("Order is replaced successfully"); break; case "FILLED": System.out.println("Order is filled"); break; default: System.out.println("Invalid");} The equals() and hashCode() methods from java.lang.String are used in the comparison, which is case-sensitive. The benefit of using String in switch is that the Java compiler can generate more efficient code than for a nested if-then-else statement. 
See here for more detailed information on how to use String in a switch-case statement.Automatic Resource Management Before JDK 7, we needed to use a finally block to ensure that a resource is closed regardless of whether the try statement completes normally or abruptly; for example, while reading files and streams, we need to close them in a finally block, which results in lots of boilerplate and messy code, as shown below : public static void main(String args[]) { FileInputStream fin = null; BufferedReader br = null; try { fin = new FileInputStream("info.xml"); br = new BufferedReader(new InputStreamReader(fin)); if (br.ready()) { String line1 = br.readLine(); System.out.println(line1); } } catch (FileNotFoundException ex) { System.out.println("Info.xml is not found"); } catch (IOException ex) { System.out.println("Can't read the file"); } finally { try { if (fin != null) fin.close(); if (br != null) br.close(); } catch (IOException ie) { System.out.println("Failed to close files"); } } } Look at this code: how many lines of boilerplate? Now in Java 7, you can use the try-with-resources feature to automatically close resources which implement the AutoCloseable or Closeable interface, e.g. streams, files, socket handles, database connections etc. JDK 7 introduces a try-with-resources statement, which ensures that each of the resources in try(resources) is closed at the end of the statement by calling the close() method of AutoCloseable. 
Now the same example in Java 7 will look like below, much more concise and cleaner code : public static void main(String args[]) { try (FileInputStream fin = new FileInputStream("info.xml"); BufferedReader br = new BufferedReader(new InputStreamReader(fin))) { if (br.ready()) { String line1 = br.readLine(); System.out.println(line1); } } catch (FileNotFoundException ex) { System.out.println("Info.xml is not found"); } catch (IOException ex) { System.out.println("Can't read the file"); } } Since Java is taking care of closing opened resources, including files and streams, there may be no more leaking of file descriptors and probably an end to file-descriptor errors. Even JDBC 4.1 is retrofitted as AutoCloseable too.Fork Join Framework The fork/join framework is an implementation of the ExecutorService interface that allows you to take advantage of the multiple processors available in modern servers. It is designed for work that can be broken into smaller pieces recursively. The goal is to use all the available processing power to enhance the performance of your application. As with any ExecutorService implementation, the fork/join framework distributes tasks to worker threads in a thread pool. The fork/join framework is distinct because it uses a work-stealing algorithm, which is very different from a producer-consumer algorithm: worker threads that run out of things to do can steal tasks from other threads that are still busy. The centre of the fork/join framework is the ForkJoinPool class, an extension of the AbstractExecutorService class. ForkJoinPool implements the core work-stealing algorithm and can execute ForkJoinTask processes. You can wrap code in a ForkJoinTask subclass like RecursiveTask (which can return a result) or RecursiveAction. 
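The description above can be sketched as a small, runnable example (my own illustration, not from the original text; class and threshold names are hypothetical): a RecursiveTask that sums an array by splitting the range until it is small enough to compute directly.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Sums a range of an array by recursively splitting it in half.
public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000; // below this, sum sequentially
    private final long[] numbers;
    private final int start, end;

    public SumTask(long[] numbers, int start, int end) {
        this.numbers = numbers;
        this.start = start;
        this.end = end;
    }

    @Override
    protected Long compute() {
        if (end - start <= THRESHOLD) {
            long sum = 0;
            for (int i = start; i < end; i++) sum += numbers[i];
            return sum;
        }
        int mid = (start + end) >>> 1;
        SumTask left = new SumTask(numbers, start, mid);
        SumTask right = new SumTask(numbers, mid, end);
        left.fork();                        // run the left half asynchronously
        long rightResult = right.compute(); // compute the right half in this thread
        return left.join() + rightResult;   // wait for the forked (possibly stolen) half
    }

    public static void main(String[] args) {
        long[] data = new long[10_000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;
        long sum = new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
        System.out.println(sum); // prints 50005000
    }
}
```

Forking one half and computing the other in the current thread (rather than forking both) is the idiomatic way to keep the calling worker busy while the stolen half is processed elsewhere.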
See here for some more information on the fork/join framework in Java.Underscore in Numeric literals In JDK 7, you can insert underscores ‘_’ between the digits in a numeric literal (integral and floating-point literals) to improve readability. This is especially valuable for people who use large numbers in source files, and may be useful in the finance and computing domains. For example, int billion = 1_000_000_000; // 10^9 long creditCardNumber = 1234_4567_8901_2345L; //16 digit number long ssn = 777_99_8888L; double pi = 3.1415_9265; float pif = 3.14_15_92_65f; You can put underscores at convenient points to make the number more readable; for example, for large amounts putting an underscore between every three digits makes sense, and for credit card numbers, which are 16 digits long, putting an underscore after every 4th digit makes sense, as they are printed on cards. By the way, remember that you cannot put an underscore just after the decimal point, or at the beginning or at the end of a number. For example, the following numeric literals are invalid because of wrong placement of the underscore: double pi = 3._1415_9265; // underscore just after decimal point long creditcardNum = 1234_4567_8901_2345_L; //underscore at the end of number long ssn = _777_99_8888L; //underscore at the beginning See my post about how to use underscores in numeric literals for more information and use cases.Catching Multiple Exception Types in a Single Catch Block In JDK 7, a single catch block can handle more than one exception type. For example, before JDK 7, you needed two catch blocks to catch two exception types although both perform an identical task: try {......} catch(ClassNotFoundException ex) { ex.printStackTrace(); } catch(SQLException ex) { ex.printStackTrace(); } In JDK 7, you can use one single catch block, with exception types separated by ‘|’. 
try {......} catch(ClassNotFoundException|SQLException ex) {ex.printStackTrace();} By the way, just remember that alternatives in a multi-catch statement cannot be related by subclassing; such a statement will fail at compile time. For example, the following multi-catch statement produces a compile-time error, because java.io.FileNotFoundException is a subclass of the alternative java.io.IOException: try {......} catch (FileNotFoundException | IOException ex) {ex.printStackTrace();} See here to learn more about improved exception handling in Java SE 7.Binary Literals with prefix “0b” In JDK 7, you can express literal values in binary with the prefix ‘0b’ (or ‘0B’) for integral types (byte, short, int and long), similar to the C/C++ language. Before JDK 7, you could only use octal values (with prefix ‘0’) or hexadecimal values (with prefix ‘0x’ or ‘0X’). int mask = 0b01010000101; or even better int binary = 0B0101_0000_1010_0010_1101_0000_1010_0010;Java NIO 2.0 Java SE 7 introduced the java.nio.file package, which, together with its related package java.nio.file.attribute, provides comprehensive support for file I/O and for accessing the default file system. It also introduced the Path class, which allows you to represent any path in the operating system. The new file system API complements the older one and provides several useful methods for checking, deleting, copying, and moving files. For example, now you can check if a file is hidden in Java. You can also create symbolic and hard links from Java code. The JDK 7 new file API is also capable of searching for files using wildcards. You also get support to watch a directory for changes. I would recommend checking the Javadoc of the new file package to learn more about this interesting and useful feature.G1 Garbage Collector JDK 7 introduced a new garbage collector known as G1, which is short for “garbage first”. 
The G1 garbage collector performs clean-up where there is the most garbage. To achieve this it splits the Java heap memory into multiple regions, as opposed to the 3 regions used prior to Java 7 (new, old and permgen space). It’s said that G1 is quite predictable and provides greater throughput for memory-intensive applications.More Precise Rethrowing of Exceptions The Java SE 7 compiler performs a more precise analysis of re-thrown exceptions than earlier releases of Java SE. This enables you to specify more specific exception types in the throws clause of a method declaration. Before JDK 7, re-throwing an exception was treated as throwing the type of the catch parameter. For example, suppose your try block can throw ParseException as well as IOException. In order to catch all exceptions and rethrow them, you would have to catch Exception and declare your method as throwing an Exception. This is a sort of obscure, non-precise throw, because you are throwing a general Exception type (instead of specific ones) and statements calling your method need to catch this general Exception. This will be clearer from the following example of exception handling in code prior to Java 1.7: public void obscure() throws Exception{ try { new FileInputStream("abc.txt").read(); new SimpleDateFormat("ddMMyyyy").parse("12-03-2014"); } catch (Exception ex) { System.out.println("Caught exception: " + ex.getMessage()); throw ex; } } From JDK 7 onwards you can be more precise when declaring the types of exception in the throws clause of a method. This precision comes from the fact that, if you re-throw an exception from a catch block, you are actually throwing an exception type which: your try block can throw, is not handled by any previous catch block, and is a subtype of one of the exceptions declared as the catch parameter. This leads to improved checking for re-thrown exceptions. 
You can be more precise about the exceptions being thrown from the method and you can handle them a lot better at the client side, as shown in the following example : public void precise() throws ParseException, IOException { try { new FileInputStream("abc.txt").read(); new SimpleDateFormat("ddMMyyyy").parse("12-03-2014"); } catch (Exception ex) { System.out.println("Caught exception: " + ex.getMessage()); throw ex; } } The Java SE 7 compiler allows you to specify the exception types ParseException and IOException in the throws clause of the precise() method declaration, even though we are re-throwing java.lang.Exception, the superclass of all checked exceptions, because the compiler can determine that only those two exception types can actually escape the try block. Also, in some places you will see the final keyword on the catch parameter, but that is not mandatory any more.That’s all about what you can revise in JDK 7. All these new features of Java 7 are very helpful in your goal towards clean code and developer productivity. With lambda expressions introduced in Java 8, this goal of cleaner code in Java has reached another milestone. Let me know if you think I have left out any useful feature of Java 1.7 which you think should be here. P.S. If you love books then you may like the Java 7 New Features Cookbook from Packt Publishing as well.Reference: 10 JDK 7 Features to Revisit, Before You Welcome Java 8 from our JCG partner Javin Paul at the Javarevisited blog....
android-logo

Android Shake to Refresh tutorial

In this post we want to explore another way to refresh our app UI, called Shake to Refresh. We all know the pull-to-refresh pattern that is implemented in several apps. In this pattern we pull our finger down along the screen and the UI is refreshed:Even though this pattern is very useful, we can use another pattern to refresh our UI, based on smartphone sensors; we can call it Shake to Refresh. Instead of pulling down our finger, we shake our smartphone to refresh the UI:Implementation In order to enable our app to support the Shake to Refresh feature we have to use smartphone sensors, specifically the motion sensors: the accelerometer. If you want more information on how to use sensors you can take a look here. As said, we want the user to shake the smartphone to refresh, and at the same time we don’t want the refresh process to start accidentally or when the user just moves his smartphone. So we have to implement some controls to be sure that the user is shaking the smartphone purposely. On the other hand, we don’t want to implement this logic in the class that handles the UI, because it is not advisable to mix the UI logic with other things, and by using another class we can re-use this “pattern” in other contexts. So we will create another class called ShakeEventManager. This class has to listen to sensor events: public class ShakeEventManager implements SensorEventListener { .. } so it implements SensorEventListener. Then we have to look up the accelerometer sensor and register our class as an event listener: public void init(Context ctx) { sManager = (SensorManager) ctx.getSystemService(Context.SENSOR_SERVICE); s = sManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER); register(); } and then: public void register() { sManager.registerListener(this, s, SensorManager.SENSOR_DELAY_NORMAL); } To trigger the refresh event on the UI some conditions must be verified; these conditions guarantee that the user is purposely shaking his smartphone. 
The conditions are:The acceleration must be greater than a threshold level A fixed number of acceleration events must occur The time between these events must be in a fixed time windowWe will implement this logic in the onSensorChanged method, which is called every time a new value is available. The first step is calculating the acceleration; we are interested in the max acceleration value on the three axes and we want to clean the sensor value of the gravity force. So, as stated in the official Android documentation, we first apply a low-pass filter to isolate the gravity force and then a high-pass filter: private float calcMaxAcceleration(SensorEvent event) { gravity[0] = calcGravityForce(event.values[0], 0); gravity[1] = calcGravityForce(event.values[1], 1); gravity[2] = calcGravityForce(event.values[2], 2);float accX = event.values[0] - gravity[0]; float accY = event.values[1] - gravity[1]; float accZ = event.values[2] - gravity[2];float max1 = Math.max(accX, accY); return Math.max(max1, accZ); } where // Low pass filter private float calcGravityForce(float currentVal, int index) { return ALPHA * gravity[index] + (1 - ALPHA) * currentVal; } Once we know the max acceleration we implement our logic: @Override public void onSensorChanged(SensorEvent sensorEvent) { float maxAcc = calcMaxAcceleration(sensorEvent); Log.d("SwA", "Max Acc ["+maxAcc+"]"); if (maxAcc >= MOV_THRESHOLD) { if (counter == 0) { counter++; firstMovTime = System.currentTimeMillis(); Log.d("SwA", "First mov.."); } else { long now = System.currentTimeMillis(); if ((now - firstMovTime) < SHAKE_WINDOW_TIME_INTERVAL) counter++; else { resetAllData(); counter++; return; } Log.d("SwA", "Mov counter ["+counter+"]");if (counter >= MOV_COUNTS) if (listener != null) listener.onShake(); } }} Analyzing the code: at line 3 we simply calculate the acceleration, and then we check if it is greater than a threshold value (condition 1) (line 5). 
If it is the first movement (lines 7-8), we save the timestamp to check if other events happen in the specified time window. If all the conditions are satisfied we invoke a callback method defined in the callback interface: public static interface ShakeListener { public void onShake(); } Test app Now that we have implemented the shake event manager, we are ready to create a simple app that uses it. We can create a simple activity with a ListView that is refreshed when the shake event occurs: public class MainActivity extends ActionBarActivity implements ShakeEventManager.ShakeListener { ....@Override public void onShake() { // We update the ListView } } where at line 5 we update the UI, because this method is called only when the user is shaking his smartphone. Some final considerations: when the app is paused we have to unregister the sensor listener so that it no longer listens to events, and in this way we will save battery. On the other hand, when the app is resumed we will register the listener again: @Override protected void onResume() { super.onResume(); sd.register(); }@Override protected void onPause() { super.onPause(); sd.deregister(); }Reference: Android Shake to Refresh tutorial from our JCG partner Francesco Azzola at the Surviving w/ Android blog....
java-logo

Programmatic Access to Sizes of Java Primitive Types

One of the first things many developers new to Java learn about is Java’s basic primitive data types, their fixed (platform independent) sizes (measured in bits or bytes in terms of two’s complement), and their ranges (all numeric types in Java are signed). There are many good online resources that list these characteristics and some of these resources are the Java Tutorial lesson on Primitive Data Types, The Eight Data Types of Java, Java’s Primitive Data Types, and Java Basic Data Types. Java allows one to programmatically access these characteristics of the basic Java primitive data types. Most of the primitive data types’ maximum values and minimum values have been available for some time in Java via the corresponding reference types’ MAX_VALUE and MIN_VALUE fields. J2SE 5 introduced a SIZE field for most of the types that provides each type’s size in bits (two’s complement). JDK 8 has now provided most of these classes with a new field called BYTES that presents the type’s size in bytes (two’s complement). DataTypeSizes.java package dustin.examples.jdk8;import static java.lang.System.out; import java.lang.reflect.Field;/** * Demonstrate JDK 8's easy programmatic access to size of basic Java datatypes. * * @author Dustin */ public class DataTypeSizes { /** * Print values of certain fields (assumed to be constant) for provided class. * The fields that are printed are SIZE, BYTES, MIN_VALUE, and MAX_VALUE. * * @param clazz Class which may have static fields SIZE, BYTES, MIN_VALUE, * and/or MAX_VALUE whose values will be written to standard output. 
    */
   private static void printDataTypeDetails(final Class clazz)
   {
      out.println("\nDatatype (Class): " + clazz.getCanonicalName() + ":");
      final Field[] fields = clazz.getDeclaredFields();
      for (final Field field : fields)
      {
         final String fieldName = field.getName();
         try
         {
            switch (fieldName)
            {
               case "SIZE" :  // generally introduced with 1.5 (two's complement)
                  out.println("\tSize (in bits): " + field.get(null));
                  break;
               case "BYTES" : // generally introduced with 1.8 (two's complement)
                  out.println("\tSize (in bytes): " + field.get(null));
                  break;
               case "MIN_VALUE" :
                  out.println("\tMinimum Value: " + field.get(null));
                  break;
               case "MAX_VALUE" :
                  out.println("\tMaximum Value: " + field.get(null));
                  break;
               default :
                  break;
            }
         }
         catch (IllegalAccessException illegalAccess)
         {
            out.println("ERROR: Unable to reflect on field " + fieldName);
         }
      }
   }

   /**
    * Demonstrate JDK 8's ability to easily programmatically access the size of
    * basic Java data types.
    *
    * @param arguments Command-line arguments: none expected.
    */
   public static void main(final String[] arguments)
   {
      printDataTypeDetails(Byte.class);
      printDataTypeDetails(Short.class);
      printDataTypeDetails(Integer.class);
      printDataTypeDetails(Long.class);
      printDataTypeDetails(Float.class);
      printDataTypeDetails(Double.class);
      printDataTypeDetails(Character.class);
      printDataTypeDetails(Boolean.class);
   }
}

When executed, the code above writes the following results to standard output.
The Output

Datatype (Class): java.lang.Byte:
	Minimum Value: -128
	Maximum Value: 127
	Size (in bits): 8
	Size (in bytes): 1

Datatype (Class): java.lang.Short:
	Minimum Value: -32768
	Maximum Value: 32767
	Size (in bits): 16
	Size (in bytes): 2

Datatype (Class): java.lang.Integer:
	Minimum Value: -2147483648
	Maximum Value: 2147483647
	Size (in bits): 32
	Size (in bytes): 4

Datatype (Class): java.lang.Long:
	Minimum Value: -9223372036854775808
	Maximum Value: 9223372036854775807
	Size (in bits): 64
	Size (in bytes): 8

Datatype (Class): java.lang.Float:
	Maximum Value: 3.4028235E38
	Minimum Value: 1.4E-45
	Size (in bits): 32
	Size (in bytes): 4

Datatype (Class): java.lang.Double:
	Maximum Value: 1.7976931348623157E308
	Minimum Value: 4.9E-324
	Size (in bits): 64
	Size (in bytes): 8

Datatype (Class): java.lang.Character:
	Minimum Value:

UPDATE: Note that, as Attila-Mihaly Balazs has pointed out in the comment below, the MIN_VALUE values shown for java.lang.Float and java.lang.Double above are not negative numbers, even though these constants are negative for Byte, Short, Integer, and Long. For the floating-point types Float and Double, the MIN_VALUE constant represents the smallest positive (non-zero) value that can be stored in those types. Although the characteristics of the Java primitive data types are readily available online, it’s nice to be able to access those details programmatically when desired. I like to think about the types’ sizes in terms of bytes, and JDK 8 now provides the ability to see those sizes measured directly in bytes.

Reference: Programmatic Access to Sizes of Java Primitive Types from our JCG partner Dustin Marx at the Inspired by Actual Events blog....
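The reflection above is only needed to iterate over the wrapper classes generically; when the type is known at compile time, the same values can be read directly as constants. A quick sketch (the class name is arbitrary):

```java
class DirectSizes {
    public static void main(String[] args) {
        // SIZE (bits) dates back to J2SE 5; BYTES was added in JDK 8.
        System.out.println("int:  " + Integer.SIZE + " bits, " + Integer.BYTES + " bytes");
        System.out.println("long: " + Long.SIZE + " bits, " + Long.BYTES + " bytes");
        System.out.println("char: " + Character.SIZE + " bits, " + Character.BYTES + " bytes");
        System.out.println("int range: " + Integer.MIN_VALUE + " .. " + Integer.MAX_VALUE);
    }
}
```

Note that java.lang.Boolean has no SIZE or BYTES constant, which is why the reflective version simply prints nothing for it.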

Thoughts on The Reactive Manifesto

Reactive programming is an emerging trend in software development that has gathered a lot of enthusiasm among technology connoisseurs during the last couple of years. After studying the subject last year, I got curious enough to attend the “Principles of Reactive Programming” course on Coursera (by Odersky, Meijer and Kuhn). Reactive advocates from Typesafe and others have created The Reactive Manifesto, which tries to formulate the vocabulary for reactive programming and what it actually aims at. This post collects some reflections on the manifesto.

According to The Reactive Manifesto, systems that are reactive:

react to events – the event-driven nature enables the following qualities
react to load – focus on scalability by avoiding contention on shared resources
react to failure – resilient systems that are able to recover at all levels
react to users – honor response time guarantees regardless of load

Event-driven

Event-driven applications are composed of components that communicate through sending and receiving events. Events are passed asynchronously, often using a push-based communication model, without the event originator blocking. A key goal is to make efficient use of system resources, not tie up resources unnecessarily, and maximize resource sharing. Reactive applications are built on a distributed architecture in which message-passing provides the inter-node communication layer and location transparency for components. It also enables interfaces between components and subsystems to be based on loosely coupled design, thus allowing easier system evolution over time. Systems designed to rely on shared mutable state require data access and mutation operations to be coordinated by using some concurrency control mechanism, in order to avoid data integrity issues. Concurrency control mechanisms limit the degree of parallelism in the system.
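The cost of that coordination can be made concrete with a quick calculation: with a parallelizable fraction p of the work running on n processors, the achievable speedup is 1 / ((1 − p) + p/n). A small sketch (the values are purely illustrative):

```java
class SpeedupSketch {
    // Achievable speedup for parallel fraction p on n processors.
    static double speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    public static void main(String[] args) {
        // Even with 95% of the work parallelizable, speedup is capped at 1/(1-p) = 20,
        // no matter how many processors are added.
        System.out.printf("p=0.95, n=16   -> %.2f%n", speedup(0.95, 16));
        System.out.printf("p=0.95, n=1024 -> %.2f%n", speedup(0.95, 1024));
    }
}
```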
Amdahl’s law formulates clearly how reducing the parallelizable portion of the program code puts an upper limit on system scalability. Designs that avoid shared mutable state allow for higher degrees of parallelism, and thus for reaching higher degrees of scalability and resource sharing.

Scalable

System architecture needs to be carefully designed to scale out, as well as up, in order to exploit the hardware trends of both increased node-level parallelism (increased number of CPUs and number of physical and logical cores within a CPU) and system-level parallelism (number of nodes). Vertical and horizontal scaling should work both ways, so an elastic system will also be able to scale in and down, allowing operational cost structures to be optimized for lower demand conditions. A key building block for elasticity is a distributed architecture and the node-to-node communication mechanism, provided by message-passing, that allows subsystems to be configured to run on the same node or on different nodes without code changes (location transparency).

Resilient

A resilient system will continue to function in the presence of failures in one or more parts of the system, and in unanticipated conditions (e.g. unexpected load). The system needs to be designed carefully to contain failures in well-defined and safe compartments to prevent failures from escalating and cascading unexpectedly and uncontrollably.

Responsive

The Reactive Manifesto characterizes the responsive quality as follows:

Responsive is defined by Merriam-Webster as “quick to respond or react appropriately”. … Reactive applications use observable models, event streams and stateful clients. … Observable models enable other systems to receive events when state changes. … Event streams form the basic abstraction on which this connection is built.
… Reactive applications embrace the order of algorithms by employing design patterns and tests to ensure a response event is returned in O(1) or at least O(log n) time regardless of load.

Commentary

If you’ve been actively following software development trends during the last couple of years, the ideas stated in the reactive manifesto may seem quite familiar to you. This is because the manifesto captures insights learned by the software development community in building internet-scale systems. One such set of lessons stems from problems related to having centrally stored state in distributed systems. The tradeoffs of having a strong consistency model in a distributed system have been formalized in the CAP theorem. CAP-induced insights led developers to consider alternative consistency models, such as BASE, in order to trade off strong consistency guarantees for availability and partition tolerance, but also for scalability. Looser consistency models have been popularized during recent years, in particular by different breeds of NoSQL databases. An application’s consistency model has a major impact on its scalability and availability, so it would be good to address this concern more explicitly in the manifesto. The chosen consistency model is a cross-cutting trait, over which all the application layers should uniformly agree. This concern is mentioned in the manifesto, but since it’s such an important issue with subtle implications, it would be good to elaborate on it a bit more or refer to a more thorough discussion of the topic. Event-driven is a widely used term in programming that can take on many different meanings and has multiple variations. Since it’s such an overloaded term, it would be good to define it more clearly and try to characterize what exactly does and does not constitute event-driven in this context.
The authors clearly have event-driven architecture (EDA) in mind, but EDA is also something that can be achieved with different approaches. The same is true for “asynchronous communication”. In the reactive manifesto, “asynchronous communication” seems to imply message-passing, as in messaging systems or the Actor model, and not asynchronous function or method invocation. The reactive manifesto adopts and combines ideas from many movements: the CAP theorem, NoSQL, and event-driven architecture. It captures and amalgamates valuable lessons learned by the software development community in building internet-scale applications. The manifesto makes a lot of sense, and I can subscribe to the ideas presented in it. However, on a few occasions the terminology could be elaborated a bit and made more approachable to developers who don’t have extensive experience in scalability issues. Sometimes, the worst thing that can happen to great ideas is that they get diluted by unfortunate misunderstandings!

Reference: Thoughts on The Reactive Manifesto from our JCG partner Marko Asplund at the practicing techie blog....

Using jOOQ with Spring: CRUD

jOOQ is a library which helps us get back in control of our SQL. It can generate code from our database and lets us build typesafe database queries by using its fluent API. The earlier parts of this tutorial have taught us how we can configure the application context of our example application and generate code from our database. We are now ready to take one step forward and learn how we can create typesafe queries with jOOQ. This blog post describes how we can add CRUD operations to a simple application which manages todo entries. Let’s get started.

Additional Reading:

Using jOOQ with Spring: Configuration is the first part of this tutorial, and it describes how you can configure the application context of a Spring application which uses jOOQ. You can understand this blog post without reading the first part of this tutorial, but if you want to really use jOOQ in a Spring powered application, I recommend that you read the first part of this tutorial as well.
Using jOOQ with Spring: Code Generation is the second part of this tutorial, and it describes how we can reverse-engineer our database and create the jOOQ query classes which represent different database tables, records, and so on. Because these classes are the building blocks of typesafe SQL queries, I recommend that you read the second part of this tutorial before reading this blog post.

Creating the Todo Class

Let’s start by creating a class which contains the information of a single todo entry. This class has the following fields:

The id field contains the id of the todo entry.
The creationTime field contains a timestamp which describes when the todo entry was persisted for the first time.
The description field contains the description of the todo entry.
The modificationTime field contains a timestamp which describes when the todo entry was updated.
The title field contains the title of the todo entry.

The name of this relatively simple class is Todo, and it follows three principles which are described in the following:

We can create new Todo objects by using the builder pattern described in Effective Java by Joshua Bloch. If you are not familiar with this pattern, you should read an article titled Item 2: Consider a builder when faced with many constructor parameters.
The title field is mandatory, and we cannot create a new Todo object which has either a null or empty title. If we try to create a Todo object with an invalid title, an IllegalStateException is thrown.
This class is immutable. In other words, all its fields are declared final.

The source code of the Todo class looks as follows:

import org.apache.commons.lang3.builder.ToStringBuilder;
import org.joda.time.LocalDateTime;

import java.sql.Timestamp;

public class Todo {

    private final Long id;
    private final LocalDateTime creationTime;
    private final String description;
    private final LocalDateTime modificationTime;
    private final String title;

    private Todo(Builder builder) {
        this.id = builder.id;

        LocalDateTime creationTime = null;
        if (builder.creationTime != null) {
            creationTime = new LocalDateTime(builder.creationTime);
        }
        this.creationTime = creationTime;

        this.description = builder.description;

        LocalDateTime modificationTime = null;
        if (builder.modificationTime != null) {
            modificationTime = new LocalDateTime(builder.modificationTime);
        }
        this.modificationTime = modificationTime;

        this.title = builder.title;
    }

    public static Builder getBuilder(String title) {
        return new Builder(title);
    }

    //Getters are omitted for the sake of clarity.

    public static class Builder {

        private Long id;
        private Timestamp creationTime;
        private String description;
        private Timestamp modificationTime;
        private String title;

        public Builder(String title) {
            this.title = title;
        }

        public Builder description(String description) {
            this.description = description;
            return this;
        }

        public Builder creationTime(Timestamp creationTime) {
            this.creationTime = creationTime;
            return this;
        }

        public Builder id(Long id) {
            this.id = id;
            return this;
        }

        public Builder modificationTime(Timestamp modificationTime) {
            this.modificationTime = modificationTime;
            return this;
        }

        public Todo build() {
            Todo created = new Todo(this);

            String title = created.getTitle();

            if (title == null || title.length() == 0) {
                throw new IllegalStateException("title cannot be null or empty");
            }

            return created;
        }
    }
}

Let’s find out why we need to get the current date and time, and more importantly, what is the best way to do it.

Getting the Current Date and Time

Because the creation time and modification time of each todo entry are stored in the database, we need a way to obtain the current date and time. Of course, we could simply create this information in our repository. The problem is that if we did this, we wouldn’t be able to write automated tests which ensure that the creation time and the modification time are set correctly (we cannot write assertions for these fields because their values depend on the current time). That is why we need to create a separate component which is responsible for returning the current date and time. The DateTimeService interface declares two methods which are described in the following:

The getCurrentDateTime() method returns the current date and time as a LocalDateTime object.
The getCurrentTimestamp() method returns the current date and time as a Timestamp object.

The source code of the DateTimeService interface looks as follows:

import org.joda.time.LocalDateTime;
import java.sql.Timestamp;

public interface DateTimeService {

    public LocalDateTime getCurrentDateTime();

    public Timestamp getCurrentTimestamp();
}

Because our application is interested in the “real” time, we have to implement this interface and create a component which returns the real date and time. We can do this by following these steps:

Create a CurrentTimeDateTimeService class which implements the DateTimeService interface.
Annotate the class with the @Profile annotation and set the name of the profile to ‘application’. This means that the component can be registered to the Spring container when the active Spring profile is ‘application’.
Annotate the class with the @Component annotation. This ensures that the class is found during classpath scanning.
Implement the methods declared in the DateTimeService interface. Each method must return the current date and time.

The source code of the CurrentTimeDateTimeService looks as follows:

import org.joda.time.LocalDateTime;
import org.springframework.context.annotation.Profile;
import org.springframework.stereotype.Component;

import java.sql.Timestamp;

@Profile("application")
@Component
public class CurrentTimeDateTimeService implements DateTimeService {

    @Override
    public LocalDateTime getCurrentDateTime() {
        return LocalDateTime.now();
    }

    @Override
    public Timestamp getCurrentTimestamp() {
        return new Timestamp(System.currentTimeMillis());
    }
}

Let’s move on and start implementing the repository layer of our example application.

Implementing the Repository Layer

First, we have to create a repository interface which provides CRUD operations for todo entries. This interface declares five methods which are described in the following:

The Todo add(Todo todoEntry) method saves a new todo entry to the database and returns the information of the saved todo entry.
The Todo delete(Long id) method deletes a todo entry and returns the deleted todo entry.
The List<Todo> findAll() method returns all todo entries which are found from the database.
The Todo findById(Long id) method returns the information of a single todo entry.
The Todo update(Todo todoEntry) method updates the information of a todo entry and returns the updated todo entry.

The source code of the TodoRepository interface looks as follows:

import java.util.List;

public interface TodoRepository {

    public Todo add(Todo todoEntry);

    public Todo delete(Long id);

    public List<Todo> findAll();

    public Todo findById(Long id);

    public Todo update(Todo todoEntry);
}

Next we have to implement the TodoRepository interface. When we do that, we must follow this rule: all database queries created by jOOQ must be executed inside a transaction. The reason for this is that our application uses the TransactionAwareDataSourceProxy class, and if we execute database queries without a transaction, jOOQ will use a different connection for each operation. This can lead to race condition bugs. Typically the service layer acts as a transaction boundary, and each call to a jOOQ repository should be made inside a transaction. However, because programmers make mistakes too, we cannot trust that this is the case. That is why we must annotate the repository class or its methods with the @Transactional annotation. Now that we have got that covered, we are ready to create our repository class.

Creating the Repository Class

We can create the “skeleton” of our repository class by following these steps:

Create a JOOQTodoRepository class and implement the TodoRepository interface.
Annotate the class with the @Repository annotation. This ensures that the class is found during the classpath scan.
Add a DateTimeService field to the created class. As we remember, the DateTimeService interface declares the methods which are used to get the current date and time.
Add a DSLContext field to the created class. This interface acts as an entry point to the jOOQ API, and we can build our SQL queries by using it.
Add a public constructor to the created class and annotate the constructor with the @Autowired annotation.
This ensures that the dependencies of our repository are injected by using constructor injection.
Add a private Todo convertQueryResultToModelObject(TodosRecord queryResult) method to the repository class. This utility method is used by the public methods of our repository class. Implement this method by following these steps:

Create a new Todo object by using the information of the TodosRecord object given as a method parameter.
Return the created object.

The relevant part of the JOOQTodoRepository class looks as follows:

import net.petrikainulainen.spring.jooq.todo.db.tables.records.TodosRecord;
import org.jooq.DSLContext;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Repository;

@Repository
public class JOOQTodoRepository implements TodoRepository {

    private final DateTimeService dateTimeService;

    private final DSLContext jooq;

    @Autowired
    public JOOQTodoRepository(DateTimeService dateTimeService, DSLContext jooq) {
        this.dateTimeService = dateTimeService;
        this.jooq = jooq;
    }

    private Todo convertQueryResultToModelObject(TodosRecord queryResult) {
        return Todo.getBuilder(queryResult.getTitle())
                .creationTime(queryResult.getCreationTime())
                .description(queryResult.getDescription())
                .id(queryResult.getId())
                .modificationTime(queryResult.getModificationTime())
                .build();
    }
}

Let’s move on and implement the methods which provide CRUD operations for todo entries.

Adding a New Todo Entry

The public Todo add(Todo todoEntry) method of the TodoRepository interface is used to add a new todo entry to the database. We can implement this method by following these steps:

Add a private TodosRecord createRecord(Todo todoEntry) method to the repository class and implement this method by following these steps:

Get the current date and time by calling the getCurrentTimestamp() method of the DateTimeService interface.
Create a new TodosRecord object and set its field values by using the information of the Todo object given as a method parameter.
Return the created TodosRecord object.

Add the add() method to the JOOQTodoRepository class and annotate the method with the @Transactional annotation. This ensures that the INSERT statement is executed inside a read-write transaction. Implement the add() method by following these steps:

Add a new todo entry to the database by following these steps:

Create a new INSERT statement by calling the insertInto(Table table) method of the DSLContext interface and specify that you want to insert information into the todos table.
Create a new TodosRecord object by calling the createRecord() method. Pass the Todo object as a method parameter.
Set the inserted information by calling the set(Record record) method of the InsertSetStep interface. Pass the created TodosRecord object as a method parameter.
Ensure that the INSERT query returns all inserted fields by calling the returning() method of the InsertReturningStep interface.
Get the TodosRecord object which contains the values of all inserted fields by calling the fetchOne() method of the InsertResultStep interface.

Convert the TodosRecord object returned by the INSERT statement into a Todo object by calling the convertQueryResultToModelObject() method.
Return the created Todo object.

The relevant part of the JOOQTodoRepository class looks as follows:

import net.petrikainulainen.spring.jooq.todo.db.tables.records.TodosRecord;
import org.jooq.DSLContext;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;

import java.sql.Timestamp;

import static net.petrikainulainen.spring.jooq.todo.db.tables.Todos.TODOS;

@Repository
public class JOOQTodoRepository implements TodoRepository {

    private final DateTimeService dateTimeService;

    private final DSLContext jooq;

    //The constructor is omitted for the sake of clarity

    @Transactional
    @Override
    public Todo add(Todo todoEntry) {
        TodosRecord persisted = jooq.insertInto(TODOS)
                .set(createRecord(todoEntry))
                .returning()
                .fetchOne();

        return convertQueryResultToModelObject(persisted);
    }

    private TodosRecord createRecord(Todo todoEntry) {
        Timestamp currentTime = dateTimeService.getCurrentTimestamp();

        TodosRecord record = new TodosRecord();
        record.setCreationTime(currentTime);
        record.setDescription(todoEntry.getDescription());
        record.setModificationTime(currentTime);
        record.setTitle(todoEntry.getTitle());

        return record;
    }

    private Todo convertQueryResultToModelObject(TodosRecord queryResult) {
        return Todo.getBuilder(queryResult.getTitle())
                .creationTime(queryResult.getCreationTime())
                .description(queryResult.getDescription())
                .id(queryResult.getId())
                .modificationTime(queryResult.getModificationTime())
                .build();
    }
}

The section 4.3.3. The INSERT statement of the jOOQ reference manual provides additional information about inserting data into the database. Let’s move on and find out how we can find all todo entries which are stored in the database.

Finding All Todo Entries

The public List<Todo> findAll() method of the TodoRepository interface returns all todo entries which are stored in the database.
We can implement this method by following these steps:

Add the findAll() method to the repository class and annotate the method with the @Transactional annotation. Set the value of its readOnly attribute to true. This ensures that the SELECT statement is executed inside a read-only transaction.
Get all todo entries from the database by following these steps:

Create a new SELECT statement by calling the selectFrom(Table table) method of the DSLContext interface and specify that you want to select information from the todos table.
Get a list of TodosRecord objects by calling the fetchInto(Class type) method of the ResultQuery interface.

Iterate the returned list of TodosRecord objects and convert each TodosRecord object into a Todo object by calling the convertQueryResultToModelObject() method. Add each Todo object to the list of Todo objects.
Return the List which contains the found Todo objects.

The relevant part of the JOOQTodoRepository class looks as follows:

import net.petrikainulainen.spring.jooq.todo.db.tables.records.TodosRecord;
import org.jooq.DSLContext;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;

import java.util.ArrayList;
import java.util.List;

import static net.petrikainulainen.spring.jooq.todo.db.tables.Todos.TODOS;

@Repository
public class JOOQTodoRepository implements TodoRepository {

    private final DSLContext jooq;

    //The constructor is omitted for the sake of clarity

    @Transactional(readOnly = true)
    @Override
    public List<Todo> findAll() {
        List<Todo> todoEntries = new ArrayList<>();

        List<TodosRecord> queryResults = jooq.selectFrom(TODOS).fetchInto(TodosRecord.class);

        for (TodosRecord queryResult: queryResults) {
            Todo todoEntry = convertQueryResultToModelObject(queryResult);
            todoEntries.add(todoEntry);
        }

        return todoEntries;
    }

    private Todo convertQueryResultToModelObject(TodosRecord queryResult) {
        return Todo.getBuilder(queryResult.getTitle())
                .creationTime(queryResult.getCreationTime())
                .description(queryResult.getDescription())
                .id(queryResult.getId())
                .modificationTime(queryResult.getModificationTime())
                .build();
    }
}

The section 4.3.2. The SELECT Statement of the jOOQ reference manual provides more information about selecting information from the database. Next we will find out how we can get a single todo entry from the database.

Finding a Single Todo Entry

The public Todo findById(Long id) method of the TodoRepository interface returns the information of a single todo entry. We can implement this method by following these steps:

Add the findById() method to the repository class and annotate the method with the @Transactional annotation. Set the value of its readOnly attribute to true. This ensures that the SELECT statement is executed inside a read-only transaction.
Get the information of a single todo entry from the database by following these steps:

Create a new SELECT statement by calling the selectFrom(Table table) method of the DSLContext interface and specify that you want to select information from the todos table.
Specify the WHERE clause of the SELECT statement by calling the where(Collection conditions) method of the SelectWhereStep interface. Ensure that the SELECT statement returns only the todo entry whose id was given as a method parameter.
Get the TodosRecord object by calling the fetchOne() method of the ResultQuery interface.

If the returned TodosRecord object is null, it means that no todo entry was found with the given id. If this is the case, throw a new TodoNotFoundException.
Convert the TodosRecord object returned by the SELECT statement into a Todo object by calling the convertQueryResultToModelObject() method.
Return the created Todo object.

The relevant part of the JOOQTodoRepository looks as follows:

import net.petrikainulainen.spring.jooq.todo.db.tables.records.TodosRecord;
import org.jooq.DSLContext;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;

import static net.petrikainulainen.spring.jooq.todo.db.tables.Todos.TODOS;

@Repository
public class JOOQTodoRepository implements TodoRepository {

    private final DSLContext jooq;

    //The constructor is omitted for the sake of clarity.

    @Transactional(readOnly = true)
    @Override
    public Todo findById(Long id) {
        TodosRecord queryResult = jooq.selectFrom(TODOS)
                .where(TODOS.ID.equal(id))
                .fetchOne();

        if (queryResult == null) {
            throw new TodoNotFoundException("No todo entry found with id: " + id);
        }

        return convertQueryResultToModelObject(queryResult);
    }

    private Todo convertQueryResultToModelObject(TodosRecord queryResult) {
        return Todo.getBuilder(queryResult.getTitle())
                .creationTime(queryResult.getCreationTime())
                .description(queryResult.getDescription())
                .id(queryResult.getId())
                .modificationTime(queryResult.getModificationTime())
                .build();
    }
}

The section 4.3.2. The SELECT Statement of the jOOQ reference manual provides more information about selecting information from the database. Let’s find out how we can delete a todo entry from the database.

Deleting a Todo Entry

The public Todo delete(Long id) method of the TodoRepository interface is used to delete a todo entry from the database. We can implement this method by following these steps:

Add the delete() method to the repository class and annotate the method with the @Transactional annotation. This ensures that the DELETE statement is executed inside a read-write transaction.
Implement this method by following these steps:

Find the deleted Todo object by calling the findById(Long id) method. Pass the id of the deleted todo entry as a method parameter.
Delete the todo entry from the database by following these steps:

Create a new DELETE statement by calling the delete(Table table) method of the DSLContext interface and specify that you want to delete information from the todos table.
Specify the WHERE clause of the DELETE statement by calling the where(Collection conditions) method of the DeleteWhereStep interface. Ensure that the DELETE statement deletes the todo entry whose id was given as a method parameter.
Execute the DELETE statement by calling the execute() method of the Query interface.

Return the information of the deleted todo entry.

The relevant part of the JOOQTodoRepository class looks as follows:

import net.petrikainulainen.spring.jooq.todo.db.tables.records.TodosRecord;
import org.jooq.DSLContext;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;

import static net.petrikainulainen.spring.jooq.todo.db.tables.Todos.TODOS;

@Repository
public class JOOQTodoRepository implements TodoRepository {

    private final DSLContext jooq;

    //The constructor is omitted for the sake of clarity

    @Transactional
    @Override
    public Todo delete(Long id) {
        Todo deleted = findById(id);

        int deletedRecordCount = jooq.delete(TODOS)
                .where(TODOS.ID.equal(id))
                .execute();

        return deleted;
    }
}

The section 4.3.5. The DELETE Statement of the jOOQ reference manual provides additional information about deleting data from the database. Let’s move on and find out how we can update the information of an existing todo entry.

Updating an Existing Todo Entry

The public Todo update(Todo todoEntry) method of the TodoRepository interface updates the information of an existing todo entry. We can implement this method by following these steps:

Add the update() method to the repository class and annotate the method with the @Transactional annotation. This ensures that the UPDATE statement is executed inside a read-write transaction.
Get the current date and time by calling the getCurrentTimestamp() method of the DateTimeService interface.
Update the information of the todo entry by following these steps:

Create a new UPDATE statement by calling the update(Table table) method of the DSLContext interface and specify that you want to update information found from the todos table.
Set the new description, modification time, and title by calling the set(Field field, T value) method of the UpdateSetStep interface.
Specify the WHERE clause of the UPDATE statement by calling the where(Collection conditions) method of the UpdateWhereStep interface. Ensure that the UPDATE statement updates the todo entry whose id is found from the Todo object given as a method parameter.
Execute the UPDATE statement by calling the execute() method of the Query interface.

Get the information of the updated todo entry by calling the findById() method. Pass the id of the updated todo entry as a method parameter.
Return the information of the updated todo entry.

The relevant part of the JOOQTodoRepository class looks as follows:

import org.jooq.DSLContext;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;

import java.sql.Timestamp;

import static net.petrikainulainen.spring.jooq.todo.db.tables.Todos.TODOS;

@Repository
public class JOOQTodoRepository implements TodoRepository {

    private final DateTimeService dateTimeService;

    private final DSLContext jooq;

    //The constructor is omitted for the sake of clarity.

    @Transactional
    @Override
    public Todo update(Todo todoEntry) {
        Timestamp currentTime = dateTimeService.getCurrentTimestamp();
        int updatedRecordCount = jooq.update(TODOS)
                .set(TODOS.DESCRIPTION, todoEntry.getDescription())
                .set(TODOS.MODIFICATION_TIME, currentTime)
                .set(TODOS.TITLE, todoEntry.getTitle())
                .where(TODOS.ID.equal(todoEntry.getId()))
                .execute();

        return findById(todoEntry.getId());
    }
}

The section 4.3.4.
The UPDATE Statement of the jOOQ reference manual provides additional information about updating the information stored in the database. If you are using a Firebird or PostgreSQL database, you can use the RETURNING clause in the UPDATE statement (and avoid the extra SELECT).

That is all, folks. Let's summarize what we learned from this blog post.

Summary

We have now implemented CRUD operations for todo entries. This tutorial has taught us three things:

- We learned how to get the current date and time in a way that doesn't prevent us from writing automated tests for our example application.
- We learned how to ensure that all database queries executed by jOOQ run inside a transaction.
- We learned how to create INSERT, SELECT, DELETE, and UPDATE statements by using the jOOQ API.

The next part of this tutorial describes how we can add a search function, which supports sorting and pagination, to our example application.

The example application of this blog post is available at Github (the frontend is not implemented yet).

Reference: Using jOOQ with Spring: CRUD from our JCG partner Petri Kainulainen at the Petri Kainulainen blog.
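The DateTimeService mentioned above is what keeps the update() method testable: the production implementation reads the real clock, while a test can inject a constant clock and assert the exact modification time that was written. The interface name comes from the article; the two implementations below are illustrative sketches and are not necessarily the ones used in the example application.

```java
import java.sql.Timestamp;

// DateTimeService decouples the repository from the system clock so that
// modification times can be asserted in automated tests.
interface DateTimeService {
    Timestamp getCurrentTimestamp();
}

// Production-style implementation: returns the current system time.
// (Illustrative sketch; the example application's class may differ.)
class CurrentTimeDateTimeService implements DateTimeService {
    @Override
    public Timestamp getCurrentTimestamp() {
        return new Timestamp(System.currentTimeMillis());
    }
}

// Test double: always returns a fixed timestamp, so a repository test can
// verify that update() stores exactly this value in MODIFICATION_TIME.
class ConstantDateTimeService implements DateTimeService {
    private final Timestamp constant;

    ConstantDateTimeService(Timestamp constant) {
        this.constant = constant;
    }

    @Override
    public Timestamp getCurrentTimestamp() {
        return constant;
    }
}
```

In a repository test, ConstantDateTimeService can be injected in place of the production implementation, and the test can then assert that the updated row carries the known timestamp.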

ActiveMQ – Network of Brokers Explained

Objective

This 7-part blog series shows how to create a network of ActiveMQ brokers in order to achieve high availability and scalability.

Why a network of brokers?

The ActiveMQ message broker is a core component of messaging infrastructure in an enterprise. It needs to be highly available and dynamically scalable to facilitate communication between dynamic, heterogeneous distributed applications with varying capacity needs.

Scaling enterprise applications on commodity hardware is all the rage nowadays. ActiveMQ caters to this very well through its ability to create a network of brokers that share the load. Applications running across geographically distributed data centers often need to coordinate messages, and running message producers and consumers across geographic regions/data centers can be architected better using a network of brokers.

ActiveMQ uses transport connectors to communicate with message producers and consumers. To facilitate broker-to-broker communication, however, ActiveMQ uses network connectors. A network connector is a bridge between two brokers which allows on-demand message forwarding. In other words, if Broker B1 initiates a network connector to Broker B2, then messages on a channel (queue/topic) on B1 get forwarded to B2 if there is at least one consumer on B2 for the same channel. If the network connector is configured to be duplex, messages also get forwarded from B2 to B1 on demand. This is very interesting because it makes it possible for brokers to communicate with each other dynamically.
In this 7-part blog series, we will look into the following topics to gain an understanding of this very powerful ActiveMQ feature:

1. Network Connector Basics – Part 1
2. Duplex network connectors – Part 2
3. Load balancing consumers on local/remote brokers – Part 3
4. Load balancing consumers/subscribers on remote brokers:
   - Queue: load balance remote concurrent consumers – Part 4
   - Topic: load balance durable subscriptions on remote brokers – Part 5
5. Store/forward messages and consumer failover (how to prevent stuck messages) – Part 6
6. Virtual Destinations – Part 7

To give credit where it is due, the following resources helped me in creating this blog post series:

- Advanced Messaging with ActiveMQ by Dejan Bosanac [Slides 32-36]
- Understanding ActiveMQ Broker Networks by Jakub Korab

Prerequisites:

- ActiveMQ 5.8.0 – to create broker instances
- Apache Ant – to run the ActiveMQ sample producer and consumers for the demo

We will use multiple ActiveMQ broker instances on the same machine for ease of demonstration.

Network Connector Basics – Part 1

The following diagram shows how a network connector functions. It bridges two brokers and is used to forward messages from Broker-1 to Broker-2 on demand, when the connection is established by Broker-1 to Broker-2.

A network connector can be duplex, so messages could also be forwarded in the opposite direction (from Broker-2 to Broker-1) once there is a consumer on Broker-1 for a channel which exists on Broker-2.
More on this in Part 2.

Setup a network connector between broker-1 and broker-2

Create two broker instances, say broker-1 and broker-2:

Ashwinis-MacBook-Pro:bin akuntamukkala$ pwd
/Users/akuntamukkala/apache-activemq-5.8.0/bin
Ashwinis-MacBook-Pro:bin akuntamukkala$ ./activemq-admin create ../bridge-demo/broker-1
Ashwinis-MacBook-Pro:bin akuntamukkala$ ./activemq-admin create ../bridge-demo/broker-2

Since we will be running both brokers on the same machine, let's configure broker-2 so that there are no port conflicts.

Edit /Users/akuntamukkala/apache-activemq-5.8.0/bridge-demo/broker-2/conf/activemq.xml:

- Change the transport connector port from 61616 to 61626.
- Change the AMQP port from 5672 to 6672 (we won't be using it for this blog).

Edit /Users/akuntamukkala/apache-activemq-5.8.0/bridge-demo/broker-2/conf/jetty.xml:

- Change the web console port from 8161 to 9161.

Configure the network connector from broker-1 to broker-2 by adding the following XML snippet to /Users/akuntamukkala/apache-activemq-5.8.0/bridge-demo/broker-1/conf/activemq.xml:

<networkConnectors>
    <networkConnector name="T:broker1->broker2"
                      uri="static:(tcp://localhost:61626)"
                      duplex="false"
                      decreaseNetworkConsumerPriority="true"
                      networkTTL="2"
                      dynamicOnly="true">
        <excludedDestinations>
            <queue physicalName=">" />
        </excludedDestinations>
    </networkConnector>
    <networkConnector name="Q:broker1->broker2"
                      uri="static:(tcp://localhost:61626)"
                      duplex="false"
                      decreaseNetworkConsumerPriority="true"
                      networkTTL="2"
                      dynamicOnly="true">
        <excludedDestinations>
            <topic physicalName=">" />
        </excludedDestinations>
    </networkConnector>
</networkConnectors>

The above XML snippet configures two network connectors: "T:broker1->broker2" (topics only, as queues are excluded) and "Q:broker1->broker2" (queues only, as topics are excluded). This allows for a nice separation between the network connectors used for topics and queues. The name can be arbitrary, although I prefer the form [type]:[source broker]->[destination broker].
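The physicalName=">" values in the excludedDestinations elements above use ActiveMQ's destination wildcards: "." separates name segments, "*" matches exactly one segment, and ">" matches everything from that point on, so a bare ">" excludes every queue (or topic). The following is an illustrative re-implementation of that matching logic, not ActiveMQ's actual code:

```java
// Illustrative sketch of ActiveMQ destination wildcard matching (not the
// broker's real implementation): "." separates segments, "*" matches one
// segment, ">" matches one or more trailing segments.
class DestinationWildcard {
    static boolean matches(String pattern, String destination) {
        String[] p = pattern.split("\\.");
        String[] d = destination.split("\\.");
        for (int i = 0; i < p.length; i++) {
            if (p[i].equals(">")) {
                // ">" consumes at least one remaining segment
                return d.length > i;
            }
            if (i >= d.length) {
                return false; // destination ran out of segments
            }
            if (!p[i].equals("*") && !p[i].equals(d[i])) {
                return false; // literal segment mismatch
            }
        }
        // all pattern segments consumed; match only if destination is also done
        return p.length == d.length;
    }
}
```

Under this reading, physicalName=">" in the "T:broker1->broker2" connector excludes foo.bar and every other queue, which is why that connector forwards only topics.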
The uri attribute specifies how to connect to broker-2.

Start broker-2:

Ashwinis-MacBook-Pro:bin akuntamukkala$ pwd
/Users/akuntamukkala/apache-activemq-5.8.0/bridge-demo/broker-2/bin
Ashwinis-MacBook-Pro:bin akuntamukkala$ ./broker-2 console

Start broker-1:

Ashwinis-MacBook-Pro:bin akuntamukkala$ pwd
/Users/akuntamukkala/apache-activemq-5.8.0/bridge-demo/broker-1/bin
Ashwinis-MacBook-Pro:bin akuntamukkala$ ./broker-1 console

The logs on broker-1 show the two network connectors being established with broker-2:

INFO | Establishing network connection from vm://broker-1?async=false&network=true to tcp://localhost:61626
INFO | Connector vm://broker-1 Started
INFO | Establishing network connection from vm://broker-1?async=false&network=true to tcp://localhost:61626
INFO | Network connection between vm://broker-1#24 and tcp://localhost/127.0.0.1:61626@52132(broker-2) has been established.
INFO | Network connection between vm://broker-1#26 and tcp://localhost/127.0.0.1:61626@52133(broker-2) has been established.

The web console on broker-1 at http://localhost:8161/admin/connections.jsp shows the two network connectors established to broker-2. The same page on broker-2 does not show any network connectors, since no network connectors were initiated by broker-2.

Let's see this in action.

Produce 100 persistent messages on a queue called "foo.bar" on broker-1:

Ashwinis-MacBook-Pro:example akuntamukkala$ pwd
/Users/akuntamukkala/apache-activemq-5.8.0/example
Ashwinis-MacBook-Pro:example akuntamukkala$ ant producer -Durl=tcp://localhost:61616 -Dtopic=false -Ddurable=true -Dsubject=foo.bar -Dmax=100

The broker-1 web console at http://localhost:8161/admin/queues.jsp shows that 100 messages have been enqueued in the queue "foo.bar".

Now let's start a consumer on a queue called "foo.bar" on broker-2. The important thing to note here is that the destination name "foo.bar" should match exactly.
Ashwinis-MacBook-Pro:example akuntamukkala$ ant consumer -Durl=tcp://localhost:61626 -Dtopic=false -Dsubject=foo.bar

We find that all 100 messages from broker-1's foo.bar queue get forwarded to broker-2's foo.bar consumer:

- The broker-2 admin console at http://localhost:9161/admin/queues.jsp shows that the consumer we started has consumed all 100 messages, which were forwarded on demand from broker-1.
- The broker-1 admin console at http://localhost:8161/admin/queues.jsp shows that all 100 messages have been dequeued (forwarded to broker-2 via the network connector).
- The consumer details on broker-1 for the "foo.bar" queue show that the consumer is created on demand and named [name of connector]_[destination broker]_inbound_

Thus we have seen the basics of a network connector in ActiveMQ. Stay tuned for Part 2...

Reference: ActiveMQ – Network of Brokers Explained from our JCG partner Ashwini Kuntamukkala at the Ashwini Kuntamukkala – Technology Enthusiast blog.
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use
