Java Servlet Tutorial – The ULTIMATE Guide

Java Servlets are a Java-based web technology. Java Servlet technology provides web developers with a simple, consistent mechanism for extending the functionality of a web server and for accessing existing business systems. A servlet can almost be thought of as an applet that runs on the server side, without a face. Java servlets make many web applications possible, and they are a fundamental part of the Java Enterprise Edition (Java EE). Please note that Java Servlets have to be executed inside a "Servlet Container" (e.g. a web server with servlet support) in order to work. This tutorial works as a comprehensive, kick-start guide for your Java Servlet based code.

Table Of Contents

1. Introduction
1.1 Servlet Process
1.2 Merits
2. Lifecycle
3. Container
3.1 Services
3.2 Servlet Container Configurations
4. Demo: To start with
5. Filter
5.1 Interface
5.2 Example
6. Session
6.1 Session Handling
6.2 Mechanism of Session Handling
6.3 Example
7. Exception Handling
7.1 Error Code Configuration
7.2 Exception-Type Configuration
8. Debugging
8.1 Message Logging
8.2 Java Debugger
8.3 Headers
8.4 Refresh
9. Internationalization
9.1 Methods
9.2 Example
10. References
10.1 Website
10.2 Book
11. Conclusion
12. Download

1. Introduction

A servlet is a Java programming language class, part of Java Enterprise Edition (Java EE). Sun Microsystems released its first version, 1.0, in 1997; the current version is Servlet 3.1. Servlets are used for creating dynamic web applications in Java by extending the capability of a server. They can run on any web server integrated with a servlet container.

1.1 Servlet Process

The process of a servlet is shown below: a request is sent by a client to a servlet container. The container acts as a web server. The web server searches for the servlet and initializes it. The client request is processed by the servlet, which then sends its response back to the server.
The server response is then forwarded to the client.

1.2 Merits

Servlets are platform independent, as they can run on any platform. The Servlet API inherits all the features of the Java platform, and it builds on and extends the security logic for server-side extensions; servlets also inherit the security provided by the web server. A servlet does not run in a separate process: a single servlet instance serves concurrent requests, which saves memory by removing the overhead of creating a new process for each request.

2. Life Cycle

The servlet lifecycle describes how the servlet container manages the servlet object.

Load servlet class: the servlet instance is created by the web container when the servlet class is loaded.

init(): called only once, when the servlet is created. There is no need to call it again for subsequent requests.

public void init() throws ServletException { }

service(): called by the web container to handle requests from clients; this is where the actual work of the servlet is done. The web container calls this method each time a request for the servlet is received, and it dispatches to doGet(), doPost(), doTrace(), doPut(), doDelete() and the other HTTP-specific methods.

doGet():

public void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    // code
}

doPost():

public void doPost(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    // code
}

destroy(): called before the servlet instance is removed, and used to clean up resources.

public void destroy()

3. Container

The servlet container, also known as the servlet engine, manages Java Servlet components on top of a web server and dispatches to them the requests sent by clients.

3.1 Services

The servlet container provides the following services:

It manages the servlet lifecycle.
Resources like servlets, JSP pages and HTML files are managed by the servlet container.
It appends a session ID to the URL path to maintain the session.
It provides security services.
It can load a servlet class from network services or from file systems, such as a remote or local file system.

3.2 Servlet Container Configurations

The servlet container can be configured with the web server to manage servlets in three ways, listed below:

Standalone container: the servlet container also takes over the functionality of the web server; here, the container is strongly coupled with the web server.
In-process container: the container runs within the web server process.
Out-of-process container: the servlet container is configured to run outside the web server process. This is used in cases where the servlets and the servlet container need to run in a different process or on a different system.

4. Demo: To start with

Here is an example showing a demo servlet. Follow these steps to start with your first servlet application in the NetBeans IDE.

Step 1: Open NetBeans IDE -> File -> New Project -> Web Application -> set the project name as WebApplicationServletDemo.
Step 2: Now click on Next as shown above. This will create a new project with the following directory structure.
Step 3: Create a new servlet by right-clicking on the project directory -> New -> Servlet.
Step 4: Add the servlet class name as "ServletDemo" and click on Next.
Step 5: Now configure the servlet deployment by checking "Add information to deployment descriptor (web.xml)" and adding the URL pattern (the link visible in the browser) as ServletDemo. This step will generate the web.xml file in the WEB-INF folder.
Step 6: Click on Finish as shown above; this will add the ServletDemo.java servlet under the project directory.
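The servlet generated by these steps will follow the lifecycle contract described in section 2: init() once on load, service() once per request, destroy() once on removal. As a stand-alone sketch of that contract (a toy model in plain Java with illustrative names, no servlet container involved):

```java
// Toy model of the container-managed servlet lifecycle (names are illustrative).
public class LifecycleDemo {

    static class ToyServlet {
        int initCalls, serviceCalls, destroyCalls;

        void init()    { initCalls++; }    // container calls this once, on load
        void service() { serviceCalls++; } // container calls this once per request
        void destroy() { destroyCalls++; } // container calls this once, on removal
    }

    // Simulates a container serving `requests` requests against one instance.
    static int[] run(int requests) {
        ToyServlet servlet = new ToyServlet();
        servlet.init();
        for (int i = 0; i < requests; i++) {
            servlet.service();
        }
        servlet.destroy();
        return new int[] { servlet.initCalls, servlet.serviceCalls, servlet.destroyCalls };
    }

    public static void main(String[] args) {
        int[] calls = run(3);
        // init and destroy happen once regardless of how many requests arrive
        System.out.println(calls[0] + " " + calls[1] + " " + calls[2]); // prints "1 3 1"
    }
}
```

The point of the model is that a single instance serves every request: state kept in servlet fields is shared across requests, which is why init() is the place for one-time setup and destroy() for one-time cleanup.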
Check the changes under the directory structure. Here is the code for the deployment descriptor (web.xml) with the URL pattern /ServletDemo:

Listing 1: web.xml

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="3.1" xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd">
    <servlet>
        <servlet-name>ServletDemo</servlet-name>
        <servlet-class>ServletDemo</servlet-class>
    </servlet>
    <servlet-mapping>
        <servlet-name>ServletDemo</servlet-name>
        <url-pattern>/ServletDemo</url-pattern>
    </servlet-mapping>
    <session-config>
        <session-timeout>30</session-timeout>
    </session-config>
</web-app>

Here:

<servlet-name>: the name given to the servlet
<servlet-class>: the servlet class
<servlet-mapping>: maps the internal name to a URL
<url-pattern>: the link displayed when the servlet runs

The hyperlink Next points to ServletDemo. So, when the user clicks on it, the page redirects to the ServletDemo servlet, whose url-pattern is /ServletDemo:

Listing 2: index.html

<html>
<head>
    <title>Welcome</title>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
</head>
<body>
    <div><h2>Welcome</h2></div>
    <p>We're still under development stage.
    Stay tuned for our website's new design and learning content.</p>
    <a href="ServletDemo"><b>Next</b></a>
</body>
</html>

Listing 3: ServletDemo.java

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ServletDemo extends HttpServlet {

    protected void processRequest(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html;charset=UTF-8");
        try (PrintWriter out = response.getWriter()) {
            out.println("<!DOCTYPE html>");
            out.println("<html>");
            out.println("<head>");
            out.println("<title>Servlet ServletDemo</title>");
            out.println("</head>");
            out.println("<body>");
            out.println("<h1>Servlet ServletDemo at " + request.getContextPath() + "</h1>");
            out.println("</body>");
            out.println("</html>");
        }
    }

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html;charset=UTF-8");
        PrintWriter out = response.getWriter();
        try {
            out.println("<!DOCTYPE html>");
            out.println("<html>");
            out.println("<head>");
            out.println("<title>Servlets</title>");
            out.println("</head>");
            out.println("<body>");
            out.println("<br /><p><h2>First Demo Servlet application</h2><br />Here, the URL-pattern is ServletDemo in web.xml. So, the address is <i>WebApplicationServletDemo/ServletDemo</i>.</p>");
            out.println("<br /><br /><a href=\"index.html\">Previous Page</a>");
            out.println("</body>");
            out.println("</html>");
        } finally {
            out.close();
        }
    }
}

5. Filter

Filters transform the content of requests, responses, and header information from one format to another, and are written as reusable code. A filter class is declared in the deployment descriptor.
The request is processed by the filters before the servlet is called. Filters can be used in a web application for tasks like:

Validation
Compression
Verification
Internationalization

5.1 Interface

The filter API consists of three interfaces:

Filter: the basic interface, which every filter class should implement. The javax.servlet.Filter interface has the following methods:

init(FilterConfig): initializes the filter.
doFilter(ServletRequest, ServletResponse, FilterChain): encapsulates the service logic applied to the ServletRequest to generate the ServletResponse; the FilterChain is used to forward the request/response pair to the next filter.
destroy(): destroys the instance of the filter class.

FilterConfig: its object is used when the filters are initialized. The deployment descriptor (web.xml) holds the configuration information, and the FilterConfig object is used to fetch the configuration of the filter specified in web.xml. Its methods are:

getFilterName(): returns the name of the filter as given in web.xml.
getInitParameter(String): returns the value of the specified initialization parameter from web.xml.
getInitParameterNames(): returns an enumeration of all initialization parameters of the filter.
getServletContext(): returns the ServletContext object.

FilterChain: stores information about a chain of more than one filter. All filters in this chain are applied to a request before the request is processed.

5.2 Example

This is an example showing a filter application in the NetBeans IDE. Create a web application project WebApplicationFilterDemo in the same way as shown in the Demo section. A new filter can be added to the web application by right-clicking on the project directory -> New -> Filter. Configure the filter deployment by checking "Add information to deployment descriptor (web.xml)". Now, the Next button is disabled here due to an error highlighted in Figure 13.
The error "Enter at least one URL pattern" can be solved by clicking on "New". The filter is then mapped by adding a URL pattern as shown in Figure 15. After adding the new filter and clicking on OK, the error is resolved. Now, add an init-parameter with a name and value, then click Finish.

Listing 4: web.xml

The filter NewFilter is applied to every servlet, as /* is specified for its URL pattern.

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="3.1" xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd">
    <filter>
        <filter-name>NewFilter</filter-name>
        <filter-class>NewFilter</filter-class>
        <init-param>
            <param-name>newParam</param-name>
            <param-value>valueOne</param-value>
        </init-param>
    </filter>
    <filter-mapping>
        <filter-name>NewFilter</filter-name>
        <url-pattern>/*</url-pattern>
    </filter-mapping>
    <session-config>
        <session-timeout>30</session-timeout>
    </session-config>
</web-app>

Listing 5: NewFilter.java

import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;
import java.util.*;

public class NewFilter implements Filter {

    public void init(FilterConfig filterConfig) {
        // read the init parameter from web.xml
        String value = filterConfig.getInitParameter("newParam");
        // display the init parameter value
        System.out.println("The Parameter value: " + value);
    }

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        // IP address of the client machine
        String remoteAddress = request.getRemoteAddr();
        System.out.println("Remote Internet Protocol Address: " + remoteAddress);
        // pass the request/response pair on to the next filter in the chain
        chain.doFilter(request, response);
    }

    public void destroy() { }
}

6. Session

A session is a collection of HTTP requests exchanged between a client and a server. When the session expires it is destroyed, and its resources are returned to the servlet engine.
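Several of the session-tracking mechanisms covered in this section (query strings and path info in particular) carry session data as name=value pairs in the URL. As a stand-alone sketch of how such a query string decomposes (pure JDK, no servlet API; the class and method names here are my own, not part of the Servlet specification):

```java
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

public class QueryStringDemo {

    // Splits a query string such as "user=test&passwd=abcde" into key/value pairs,
    // URL-decoding each side — roughly what request.getParameter() does for you.
    static Map<String, String> parse(String query) {
        Map<String, String> params = new LinkedHashMap<>();
        for (String pair : query.split("&")) {
            int eq = pair.indexOf('=');
            if (eq < 0) {
                // a bare key with no value, e.g. "?debug"
                params.put(URLDecoder.decode(pair, StandardCharsets.UTF_8), "");
                continue;
            }
            String key = URLDecoder.decode(pair.substring(0, eq), StandardCharsets.UTF_8);
            String value = URLDecoder.decode(pair.substring(eq + 1), StandardCharsets.UTF_8);
            params.put(key, value);
        }
        return params;
    }

    public static void main(String[] args) {
        Map<String, String> params = parse("user=test&passwd=abcde");
        System.out.println(params.get("user"));   // prints "test"
        System.out.println(params.get("passwd")); // prints "abcde"
    }
}
```

Inside a servlet you would never parse this by hand; the container does it and exposes the result through request.getParameter(name). The sketch only makes visible what travels on the wire.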
6.1 Session Handling

Session handling, also known as session tracking, is a means to keep track of session data, i.e. the data transferred in a session. It is used when session data from one session may be required by a web server to complete tasks in the same or a different session.

6.2 Mechanisms of Session Handling

There are four mechanisms for session handling:

URL rewriting: the session data required in the next request is appended to the URL path used by the client to make the next request.

Query string: a string appended after the requested URI, separated from it by the '?' character. Example:
http://localhost:8080/newproject/login?user=test&passwd=abcde

Path info: session data can also be added to the path info part of the request URI. Example:
http://localhost:8080/newproject/myweb/login;user=test&passwd=abcde

Hidden form field: a type of HTML form field which remains hidden from view (other form field types are text boxes, password fields, etc.). This approach can be used with form-based requests, and simply keeps the data out of sight of the user. Example:
<input type="hidden" name="username" value="nameOne"/>

Cookies: a cookie is a small piece of information that a server sends to a client. Cookies are saved at the client side after being transmitted to the client (from the server) in an HTTP response header. Cookies are considered best when we want to reduce network traffic. Their attributes are name, value, domain, version number, path, and comment. The package javax.servlet.http contains a class named Cookie.
Some methods of the javax.servlet.http.Cookie class are listed below:

setValue(String), getValue()
getName()
setComment(String), getComment()
setVersion(int), getVersion()
setDomain(String)
setPath(String), getPath()
setSecure(boolean), getSecure()

HTTP session: provides a session management service implemented through the HttpSession object. Some HttpSession methods are listed here (taken from the official Oracle documentation):

public Object getAttribute(String name): returns the object bound with the specified name in this session, or null if no object is bound under the name.
public Enumeration getAttributeNames(): returns an Enumeration of String objects containing the names of all the objects bound to this session.
public String getId(): returns a string containing the unique identifier assigned to this session.
public long getCreationTime(): returns the time when this session was created, measured in milliseconds since midnight January 1, 1970 GMT.
public long getLastAccessedTime(): returns the last time the client sent a request associated with this session.
public int getMaxInactiveInterval(): returns the maximum time interval, in seconds, that the servlet container will keep this session open between client accesses.
public void invalidate(): invalidates this session, then unbinds any objects bound to it.
public boolean isNew(): returns true if the client does not yet know about the session or if the client chooses not to join the session.

6.3 Example

Session information like the session ID, session creation time, and last accessed time is printed in this example.
Listing 6: ServletSession.java

import java.io.IOException;
import java.io.PrintWriter;
import java.util.Date;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class ServletSession extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // create the session object (or return the existing one)
        HttpSession newSession = request.getSession(true);
        // session creation time
        Date cTime = new Date(newSession.getCreationTime());
        // the last time the client sent a request
        Date lTime = new Date(newSession.getLastAccessedTime());
        /* sets the time, in seconds, between client requests
           before the servlet container invalidates this session */
        newSession.setMaxInactiveInterval(1 * 60 * 60);
        String str = "Website | Session";

        response.setContentType("text/html");
        PrintWriter out = response.getWriter();

        String document = "<!doctype html public \"-//w3c//dtd html 4.0 "
                + "transitional//en\">\n";
        out.println(document
                + "<html>\n"
                + "<head><title>" + str + "</title></head>\n"
                + "<body bgcolor=\"#bbf5f0\">\n"
                + "<h2>Website: Displaying Session Information</h2>\n"
                + "<table border=\"2\">\n"
                + "<tr>\n"
                + " <td>Unique identifier assigned to this session</td>\n"
                + " <td>" + newSession.getId() + "</td>\n"
                + "</tr>\n"
                + "<tr>\n"
                + " <td>The time when this session was created</td>\n"
                + " <td>" + cTime + "</td>\n"
                + "</tr>\n"
                + "<tr>\n"
                + " <td>The last time the client sent a request associated with this session</td>\n"
                + " <td>" + lTime + "</td>\n"
                + "</tr>\n"
                + "<tr>\n"
                + " <td>The maximum time interval, in seconds, that the servlet container will keep this session open between client accesses</td>\n"
                + " <td>" + newSession.getMaxInactiveInterval() + "</td>\n"
                + "</tr>\n"
                + "</table>\n"
                + "</body></html>");
    }
}

7. Exception Handling

Exceptions are used to handle errors: an exception is raised as a reaction to a condition the code cannot deal with on its own. Here the deployment descriptor, web.xml, comes into play. When a servlet throws an exception, the container searches web.xml for a matching configuration: the exception-type elements are matched against the type of the thrown exception, and the error-code elements against the response status code.

7.1 Error Code Configuration

The /HandlerClass servlet is called whenever an error with status code 403 occurs, as shown below:

Listing 7: For error code 403

<error-page>
    <error-code>403</error-code>
    <location>/HandlerClass</location>
</error-page>

7.2 Exception-Type Configuration

If the application throws an IOException, then the /HandlerClass servlet is called by the container:

Listing 8: For exception type IOException

<error-page>
    <exception-type>java.io.IOException</exception-type>
    <location>/HandlerClass</location>
</error-page>

If you want to avoid the overhead of adding a separate element for every exception, use java.lang.Throwable as the exception-type:

Listing 9: For all exceptions, mention java.lang.Throwable

<error-page>
    <exception-type>java.lang.Throwable</exception-type>
    <location>/HandlerClass</location>
</error-page>

8. Debugging

Servlets involve a large number of client-server interactions, which makes errors difficult to locate. Several techniques can be used to track down warnings and errors.

8.1 Message Logging

Logs provide information about warning and error messages through a standard logging mechanism; in the Servlet API this information is generated with the log() method. With Apache Tomcat, these logs can be found in TomcatDirectory/logs.

8.2 Java Debugger

Servlets can be debugged using jdb, the Java Debugger. In this case the program being debugged is sun.servlet.http.HttpServer.
Set the debugger's class path so that it can find the following: servlet.http.HttpServer, as well as server_root/servlets and server_root/classes. With these on the class path, the debugger can set breakpoints in a servlet.

8.3 Headers

Some knowledge of the structure of HTTP headers is useful: studying the request and response headers can help you judge what is not going well and locate otherwise mysterious errors.

8.4 Refresh

Refresh your browser's web page to keep it from serving a cached previous response. Sometimes the browser shows the result of an earlier request; this is a well-known issue, but it can puzzle developers whose code works correctly yet does not seem to display the new result.

Listing 21: ServletDebugging.java

Here, servlet debugging is shown; the servlet writes its messages to the Tomcat log.

import java.io.IOException;
import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ServletDebugging extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // read the request parameter "name"
        String strpm = request.getParameter("name");
        ServletContext context = getServletContext();
        // check whether the parameter is set
        if (strpm == null || strpm.equals(""))
            context.log("No message received:",
                    new IllegalStateException("Sorry, the parameter is missing."));
        else
            context.log("Here is the visitor's message: " + strpm);
    }
}

9. Internationalization

When building a global website, some important points must be considered, including the language associated with the user's nationality. Internationalization means enabling a website to provide its content translated into different languages according to the user's nationality.
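The servlet side of internationalization builds directly on java.util.Locale, so its methods can be exercised without a container. A quick stand-alone sketch (the French locale is just an arbitrary example; in a servlet the Locale would come from request.getLocale()):

```java
import java.util.Locale;

public class LocaleDemo {
    public static void main(String[] args) {
        // In a servlet this would be request.getLocale(); here we build one by hand.
        Locale locale = new Locale("fr", "FR");
        System.out.println(locale.getLanguage());       // prints "fr"
        System.out.println(locale.getCountry());        // prints "FR"
        System.out.println(locale.getISO3Language());   // prints "fra"
        System.out.println(locale.getISO3Country());    // prints "FRA"
        // Display names depend on the locale you render them in:
        System.out.println(locale.getDisplayLanguage(Locale.ENGLISH)); // prints "French"
        System.out.println(locale.getDisplayCountry(Locale.ENGLISH));  // prints "France"
    }
}
```

Note the two families of methods: the code accessors (getCountry(), getLanguage(), and the ISO3 variants) return stable identifiers suitable for logic, while the getDisplay* methods return human-readable names that themselves vary with the locale they are rendered in.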
9.1 Methods

To find a visitor's locale (region and language), these methods are used:

String getCountry(): returns the country code.
String getDisplayCountry(): returns a displayable name for the visitor's country.
String getLanguage(): returns the language code.
String getDisplayLanguage(): returns a displayable name for the visitor's language.
String getISO3Country(): returns a three-letter abbreviation for the visitor's country.
String getISO3Language(): returns a three-letter abbreviation for the visitor's language.

9.2 Example

The example displays the current locale of a user. The following project is created in the NetBeans IDE:

Project Name: WebApplicationInternationalization
Project Location: C:\Users\Test\Documents\NetBeansProjects
Servlet: ServletLocale
URL Pattern: /ServletLocale

Listing 22: ServletLocale.java

import java.io.IOException;
import java.io.PrintWriter;
import java.util.Locale;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ServletLocale extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // get the client's locale
        Locale newloc = request.getLocale();
        String country = newloc.getCountry();

        // set the response content type
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();

        // set the page title and body content
        String title = "Finding Locale of current user";
        String docType = "<!doctype html public \"-//w3c//dtd html 4.0 "
                + "transitional//en\">\n";
        out.println(docType
                + "<html>\n"
                + "<head><title>" + title + "</title></head>\n"
                + "<body bgcolor=\"#C0C0C0\">\n"
                + "<h3>" + country + "</h3>\n"
                + "</body></html>");
    }
}

Listing 23: index.html with the hyperlink Location pointing at the URL pattern ServletLocale

<html>
<head>
    <title>User's Location</title>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
</head>
<body>
    <p>Click on the following link for finding the locale of the visitor:</p>
    <a href="ServletLocale"><b>Location</b></a>
</body>
</html>

Listing 24: web.xml with URL pattern /ServletLocale

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="3.1" xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd">
    <servlet>
        <servlet-name>ServletLocale</servlet-name>
        <servlet-class>ServletLocale</servlet-class>
    </servlet>
    <servlet-mapping>
        <servlet-name>ServletLocale</servlet-name>
        <url-pattern>/ServletLocale</url-pattern>
    </servlet-mapping>
    <session-config>
        <session-timeout>30</session-timeout>
    </session-config>
</web-app>

10. References

10.1 Websites

Official Oracle Documentation
Sun Developer Network
Free NetBeans Download
Free Apache Download
Free Java Download

10.2 Books

Head First Servlets and JSP: Passing the Sun Certified Web Component Developer Exam, by Bryan Basham, Kathy Sierra, Bert Bates
Servlet and JSP (A Tutorial), by Budi Kurniawan

11. Conclusion

Servlets are fast and easy to use compared with traditional Common Gateway Interface (CGI) programs. Through this guide you can easily learn the concepts related to Java Servlets. The project code was developed in the NetBeans IDE, so you will also get an idea of some of its user-friendly features.

12. Download

This was a tutorial on Java Servlets. You can download the full source code of this tutorial here: Servlet_Project_Code

Developers want to be heard

How often have you been in this situation? You're in a meeting with the team and you're all discussing the implementation of a new feature. The group seems to be converging on a design, but there's something about it that feels off, some sort of "smell". You point this out to the team, perhaps outlining the specific areas that make you uncomfortable. Maybe you even have an alternative solution. The team lets you have your say, but assures you their solution is The Way.

Or what about this? A tech lead asks you to fix a bug, and as you work on your implementation you bounce ideas around periodically just to make sure you're on the right track. Things seem to be OK, until it comes to getting your code merged. Now it becomes clear that your implementation is not what the lead had in mind, and there's a frustrating process of back-and-forth while you explain and defend your design decisions whilst trying to incorporate the feedback. At the end, the solution doesn't feel like your work, and you're not entirely sure what was wrong with your initial implementation – it fixed the problem, passed the tests, and met the criteria you personally think are important (readability / scalability / performance / stability / time-to-implementation, whatever it is that you value).

When you speak to women developers, you often hear "I feel like I have to work really hard to convince people about my ideas", or "it's taken me a long time to prove my worth", or "I still don't know how to be seen as a full member of the team". And you hear these a lot from women because we ask women a lot about what they don't like about their work, since we're (correctly) concerned as an industry about the lack of female developers and the alarming rate at which they leave technical roles. However, if you ask any developer you'll hear something similar. Even very senior, very experienced (very white, very male) developers have a lot of frustration trying to convince others that their ideas have value.
It's not just a Problem With Women. I've been wondering if our problem is that we don't listen. When it comes to exchanging technical ideas, I think overall we're not good at really listening to each other. At the very least, I think we're bad at making people feel heard.

Let's think about this for a bit: if we don't listen to developers, if we don't help them understand why they're wrong, or work together to incorporate all ideas into a super-idea that's the best solution, developers will become frustrated. We're knowledge workers; what we bring to the table is our brains, our ideas, our solutions. If these are persistently not valued, we could go one of two ways:

Do it our way anyway. We still think we're right; we haven't been convinced that our idea is not correct, or that someone else's is correct (maybe because we didn't listen to them? Maybe because no-one took the time to listen to us and explain why we were wrong? Maybe because we were right and no-one was listening?).

Leave. We might join a team where we feel more valued, or we might leave development altogether. At least as a business analyst, as a project manager, as a tester, people have to listen to us: by their very definition, the output of those jobs is an input to the development team.

Option one leads to rogue code in our application, often not checked by anyone else in the team, let alone understood by them, because of course we were not allowed to implement this. So it's done in secret. If it works, at worst no-one notices. And at best? You're held up as a hero for actually Getting Something Done. This can't be right: we're rewarding the rebel behaviour, not encouraging honest discussion and making people feel included.

Option two leads to the team (and maybe the industry) losing a developer. Sometimes you might argue "Good Riddance".
But there's such a skills shortage, it's so hard (and expensive) to hire developers, and you must have seen something in that developer to hire them in the first place, that surely it's cheaper, and better, to make them feel welcome, wanted, valued?

What can we do to listen to each other?

Retrospectives. Done right, these give the team a safe place to discuss things, to ask questions, to suggest improvements. It's not necessarily a place to talk about code or design, but it is a good place to raise issues like the ones above, and to suggest ways to address these problems.

Scheduled sessions for sharing technical ideas: maybe regular brown bags to help people understand the technologies or existing designs; maybe sessions where the architecture, design or principles of a particular area are explored and explained; maybe space and time for those who feel unheard to explain in more detail where they're coming from and the principles that are important to them. It's important that these sessions are developer-led, so that everyone has an opportunity to share their ideas.

Pair programming. When you're sat together, working together, there's a flow of ideas, information, designs, experience. It's not necessarily a case of a more senior person mentoring a less experienced developer; all of us have different skills and value different qualities in an implementation – for example, one of you could be obsessive about the tests, where the other really cares about readability of the code. When you implement something in a pair, you feel ownership of that code but you feel less personally attached to it – you created it, but you created it from the best of both of you; you had to listen to each other to come to a conclusion and implement it. And if an even better idea comes along, great, it just improves the code. You're constantly learning from the other people you work with, and can see the effect of them learning from you.
We should value, and coach, more skills than simply technology skills. I don't know why we still seem to have this idea that developers are just typists communing with the computer – the best developers work well in teams and communicate effectively with the business and with users; the best leaders make everyone in their team more productive. In successful organisations, sales people are trained in skills like active listening and dealing with objections. More development teams should focus on improving these sorts of communication skills as a productivity tool.

I'm sure there are loads more options; I just thought of these in ten minutes. If you read any books aimed at business people, or at growing your career, there are many tried and tested methods for making people feel heard, for playing nicely with others.

So we should work harder to listen to each other. Next time you're discussing something with your team, or with your boss, try to listen to what they're saying – ask them to clarify things you don't understand (you won't look stupid, and developers love explaining things), and repeat back what you do understand. Request the same respect in return – if you feel your ideas aren't being heard, make sure you sit down with someone to talk over your ideas or your doubts in more detail, and be firm in making sure the team or that person is hearing what you think you're saying. We may be wrong, they may be right, but we need to understand why we're wrong, or we'll never learn.

If we all start listening a bit more, maybe we'll be a bit happier.

Reference: Developers want to be heard from our JCG partner Trisha Gee at the Java Advent Calendar blog.

Docker All The Things for Java EE Developers – On Windows with Maven

Everybody seems to do Docker these days. And the whole topic gets even more traction with Microsoft committed to integrating it into Windows. As many middleware developers are running Windows, I thought I’d give it a try myself and also give some more tips along the way about how to build and run images with the least possible amount of struggle with Docker containers, hosts and guests and command line options. Arun did a very nice introduction to Docker in a recent blog post. I’m skipping this here and diving directly into it.

Installing Boot2Docker

The Docker Engine uses Linux-specific kernel features, so to run it on Windows we need to use a lightweight virtual machine. There is a helper application called Boot2Docker which makes installing and running everything pretty straightforward. The first step is to download the latest version of the binary installer and execute it. It will install Oracle VirtualBox, MSYS-git, the boot2docker Linux ISO, and the Boot2Docker management tool. The next step is to run the Boot2Docker start script (there’s a little whale icon on your desktop after the install). It will set up the Docker host and connect via ssh to it. If you want to do that again at a later stage, you can simply type: boot2docker ssh.

There is no Docker client for Windows-based systems for now. So, ssh basically is a workaround which works pretty well and is probably also a well-known way. With Microsoft’s recently announced new partnership with Docker, this might change soonish. If you want to build your own Docker client, you can find some more information in Khalid Mouss’ blog post about it. Some tips for you: You need to have the %MSYS-git_INSTALL%/bin directory in your PATH. It contains a command line ssh client. If you want to use PUTTY, make sure to connect to the Docker host using user “root” and password “tcuser”. If you are running any kind of VPN client, you will absolutely run into trouble.
Docker normally runs in Host-only mode, and an installed VPN client turns this into NAT. Go to the VirtualBox management console, open the settings for the boot2docker-vm and add a port forwarding rule for the Docker API. We will need this later.

A little warning: The default Docker API port is 2375. In my case this wasn’t true, so I had to first find out which port the Docker API is listening on. Do this with netstat on your host. So, I basically used a direct guest-to-host mapping on 2376 in this example. All done. Now you’re ready to launch the hello world example. Just type “docker run hello-world” and wait for a “Hello from Docker.”. Now you’re good to go. If you need a complete reference for Boot2Docker, this is a very helpful site.

Why Exactly Do We Do Docker?

What is the hype around Docker these days? There is a little history in it, and in the long run it might also support microservice deployments by just defining complete packages that can be deployed – including the infrastructure requirements. Think of Docker containers as application servers which can run defined images. And think of images as large Maven archives which contain not only your application but also the OS and all the parts that are needed to run your application. Like it or not, everybody is playing around with it, and at the end of the day it is a way to solve some problems. I’m not telling you that I fell in love with it instantly, but at least I want it to help me with demos and showcases. And the thought that I only have to define a bunch of dependencies and Maven plugins in my Java EE applications and everything magically just runs is something I like. But let’s look at what it takes and how to do it.

Already Available Images – e.g. WildFly

Speaking of images: There are a bunch of images ready to go. We at JBoss have a special microsite with all the Docker images that we have ready for you to run.
If you want to use any of them, you basically just pull them into your Docker host and start a container. By doing this, you can have any component running, basically like you would have it running locally on your machine. The only difference is that it runs in your “Docker Host”. If you want to start WildFly, all you have to do is issue the following Docker command:

docker run -it -p 9990:9990 jboss/wildfly

Docker automatically pulls the relevant bits (which might take a while) and starts a container from this image. The port mapping is actually between the host and the container. Remember the VPN problems from above? Make sure to add the port mapping in VirtualBox as well if you want to try that out. The outcome is pretty clear: You now have a WildFly instance running in a container. Map the needed ports and just use it as you would normally use a remote instance. If you want even more images, you can browse and search the Docker Hub. There’s plenty out there already. Using Docker like that is not exactly the idea behind it. Actually, the image should contain not only the base component but also a completely configured application.

Building Your Own Images – All The Different Ways

Therefore you need to build your own images. There are different ways of doing that. You can either update a container created from an image and commit the results to an image, or create your own Dockerfile to specify instructions to create an image, or you can use a build tool like Maven to create your image. The Dockerfile approach is very powerful but requires quite a bit of typing and vi-magic. I was looking for an easy way to create an image from Maven, because this is what I use for projects anyway.

Building A Docker Image With Maven

There are many different Maven plugins out there which offer the kind of feature I was looking for. At the end of the day, I went with what the Fabric8 team was using: the Maven-Docker-Plugin made by Roland Huß.
The plugin can build and publish images but also start and stop containers for integration testing and development. I struggled a bit with setting it up and I am still playing around with the best ways to integrate it into my applications, so this is basically a first list of my findings and solutions and not a complete user guide. Please look at the samples and the official user guide of the plugin for more details. I will build a complete example in one of my next blog posts and walk you through it.

DOCKER_HOST Environment Variable

The first thing needed for this plugin to work is obviously the DOCKER_HOST environment variable. As the whole experience on Windows is a bit clumsy for now, this variable isn’t set when you start your VM. The good news is that you already figured out everything you need to know by installing and doing the port mapping. So, you basically just set it:

set DOCKER_HOST=tcp://127.0.0.1:2376

Make sure to point the general <configuration> section in the maven-docker-plugin to the same host: <dockerHost>https://127.0.0.1:2376</dockerHost>

Certificates and HTTPS Connections

Since 1.3.0, the Docker remote API requires communication via SSL and authentication with certificates when used with boot2docker. So, you need to configure the certificates. Find them in the .boot2docker/certs folder and make sure to also add this path to your plugin configuration: <certPath>C:/Users/myfear/.boot2docker/certs/boot2docker-vm</certPath>

That’s it for now. Let me know if you also have experiences about how to work with Docker on Windows.

Reference: Docker All The Things for Java EE Developers – On Windows with Maven from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog....
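As a small addendum to this tip: the same connection settings can also be kept as client-side environment variables from an MSYS/Git Bash shell. A minimal sketch, assuming the 2376 mapping and the certificate path shown in this article — DOCKER_CERT_PATH and DOCKER_TLS_VERIFY are the conventional Docker client variables, and the exact values here are examples to adjust for your own machine:

```shell
# Sketch: client-side settings for talking to the boot2docker VM.
# Values mirror this article's examples; adjust host, port and user.
export DOCKER_HOST="tcp://127.0.0.1:2376"
export DOCKER_CERT_PATH="C:/Users/myfear/.boot2docker/certs/boot2docker-vm"
export DOCKER_TLS_VERIFY=1

echo "API endpoint: $DOCKER_HOST"
echo "Certificates: $DOCKER_CERT_PATH"
```

With these exported, command line clients and build tools started from that shell can pick up the host and certificate settings without per-command flags.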

Why do we mock?

I do Java interviews. During the interviews I ask technical questions to which I know the answer. You may think this is boring. To be honest: sometimes it is. But sometimes it is interesting to learn what misconceptions there are. I happened to ask during the interview what you can read in the title: “Why do we mock?”. The answer was interesting. I cannot quote it word for word, and I would not like to do that for editorial, ethical and many other reasons anyway. I would also like to stress that a misconception does not make the person clever or stupid or anything. It is just a misconception that comes from someone’s personal experience. Here is what she/he was saying:

We use mocks so that we can write tests for units that use heavy services that may not be available when we run the test. It is also important to mock so that the tests can run fast even when the services would make the testing slow. It may also happen in an enterprise environment that some of the services are not available when we develop, and therefore testing would be impossible if we did not use mocks.

Strictly speaking, the above statements are true. I would not argue about why you or anybody else uses mocks. But as a Java professional I would argue about what the major and first goal of using mocks is:

We use mocks to separate the modules from each other during testing so that we can tell from the test results which module is faulty and which passed the tests.

This is what unit tests were invented for in the first place. There are side effects, like those mentioned above. There are other side effects, too. For example, unit tests are great documentation. If formulated well, they explain better than any javadoc how the unit works and what interfaces it needs and provides. Especially since javadoc tends to become outdated, while JUnit tests fail during the build if they get outdated. Another side effect is that you write testable code if you craft the unit tests first, and this generally improves your coding style.
Saying it simply: unit testing is testing units. Units on their own, bare of their dependencies. And this can be done with mocks. The other things are side effects. We like them, but they are not the main reason to mock when we unit test.

Reference: Why do we mock? from our JCG partner Peter Verhas at the Java Deep blog....

Pushing Docker images to Registry

Tech Tip #57 explained how to create your own Docker images. That particular blog specifically showed how to build your own WildFly Docker images on CentOS and Ubuntu. Now you are ready to share your images with the rest of the world. That’s where Docker Hub comes in handy. Docker Hub is the “distribution component” of Docker, or a place to store and search images. From the Getting Started with Docker Hub docs …

The Docker Hub is a centralized resource for working with Docker and its components. Docker Hub helps you collaborate with colleagues and get the most out of Docker.

Getting started and pushing images to Docker Hub is pretty straightforward. Pushing images to Docker Hub requires an account. It can be created as explained here, or rather easily by using the docker login command.

wildfly-centos> docker login Username: arungupta Password: Email: arun.gupta@gmail.com Login Succeeded

Searching on WildFly shows there are 72 images:

wildfly-centos> docker search wildfly NAME DESCRIPTION STARS OFFICIAL AUTOMATED jboss/wildfly WildFly application server image 42 [OK] sewatech/wildfly Debian + WildFly 8.1.0.Final with OpenJDK ... 1 [OK] kamcord/wildfly 1 openshift/wildfly-8-centos 1 [OK] abstractj/wildfly AeroGear Wildfly Docker image 1 jsightler/wildfly_nightly Nightly build from wildfly's github master... 1 centos/wildfly CentOS based WildFly Docker image 1 aerogear/unifiedpush-wildfly 1 [OK] t0nyhays/wildfly 1 [OK] tsuckow/wildfly-propeller Dockerization of my application *Propeller... 
0 [OK] n3ziniuka5/wildfly 0 [OK] snasello/wildfly 0 [OK] jboss/keycloak-adapter-wildfly 0 [OK] emsouza/wildfly 0 [OK] sillenttroll/wildfly-java-8 WildFly container with java 8 0 [OK] jboss/switchyard-wildfly 0 [OK] n3ziniuka5/wildfly-jrebel 0 [OK] dfranssen/docker-wildfly 0 [OK] wildflyext/wildfly-camel WildFly with Camel Subsystem 0 ianblenke/wildfly 0 [OK] arcamael/docker-wildfly 0 [OK] dmartin/wildfly 0 [OK] pires/wildfly-cluster-backend 0 [OK] aerogear/push-quickstarts-wildfly-dev 0 [OK] faga/wildfly Wildfly application server with ubuntu. 0 abstractj/unifiedpush-wildfly AeroGear Wildfly Docker image 0 murad/wildfly - oficial centos image - java JDK "1.8.0_0... 0 aerogear/unifiedpush-wildfly-dev 0 [OK] ianblenke/wildfly-cluster 0 [OK] blackhm/wildfly 0 khipu/wildfly8 0 [OK] rowanto/docker-wheezy-wildfly-java8 0 [OK] ordercloud/wildfly 0 lavaliere/je-wildfly A Jenkins Enterprise demo master with a Wi... 0 adorsys/wildfly Ubuntu - Wildfly - Base Image 0 akalliya/wildfly 0 lavaliere/joc-wildfly Jenkins Operations Center master with an a... 0 tdiesler/wildfly 0 apiman/on-wildfly8 0 [OK] rowanto/docker-wheezy-wildfly-java8-ex 0 [OK] arcamael/blog-wildfly 0 lavaliere/wildfly 0 jfaerman/wildfly 0 yntelectual/wildfly 0 svenvintges/wildfly 0 dbrotsky/wildfly 0 luksa/wildfly 0 tdiesler/wildfly-camel 0 blackhm/wildfly-junixsocket 0 abstractj/unifiedpush-wildfly-dev AeroGear UnifiedPush server developer envi... 0 abstractj/push-quickstarts-wildfly-dev AeroGear UnifiedPush Quickstarts developer... 0 bn3t/wildfly-wicket-examples An image to run the wicket-examples on wil... 0 lavaliere/wildfly-1 0 munchee13/wildfly-node 0 munchee13/wildfly-manager 0 munchee13/wildfly-dandd 0 munchee13/wildfly-admin 0 bparees/wildfly-8-centos 0 lecoz/wildflysiolapie fedora latest, jdk1.8.0_25, wildfly-8.1.0.... 0 lecoz/wildflysshsiolapie wildfly 8.1.0.Final, jdk1.8.0_25, sshd, fe... 
0 wildflyext/example-camel-rest 0 pepedigital/wildfly 0 [OK] tsuckow/wildfly JBoss Wildfly 8.1.0.Final standalone mode ... 0 [OK] mihahribar/wildfly Dockerfile for Wildfly running on Ubuntu 1... 0 [OK] hpehl/wildfly-domain Dockerfiles based on "jboss/wildfly" to se... 0 [OK] raynera/wildfly 0 [OK] hpehl/wildfly-standalone Dockerfile based on jboss/wildfly to setup... 0 [OK] aerogear/wildfly 0 [OK] piegsaj/wildfly 0 [OK] wildflyext/wildfly Tagged versions JBoss WildFly 0Official images are tagged jboss/wildfly. In order to push your own image, it needs to be built as a named image otherwise you’ll get an error as shown:2014/11/26 09:59:37 You cannot push a "root" repository. Please rename your repository in <user>/<repo> (ex: arungupta/wildfly-centos)This can be easily done as shown:wildfly-centos> docker build -t="arungupta/wildfly-centos" . Sending build context to Docker daemon 4.096 kB Sending build context to Docker daemon Step 0 : FROM centos ---> ae0c2d0bdc10 Step 1 : MAINTAINER Arun Gupta ---> Using cache ---> e490dfcb3685 Step 2 : RUN yum -y update && yum clean all ---> Using cache ---> f212cb9dbcf5 Step 3 : RUN yum -y install xmlstarlet saxon augeas bsdtar unzip && yum clean all ---> Using cache ---> 28b11e6151f0 Step 4 : RUN groupadd -r jboss -g 1000 && useradd -u 1000 -r -g jboss -m -d /opt/jboss -s /sbin/nologin -c "JBoss user" jboss ---> Using cache ---> 73603eab89b7 Step 5 : WORKDIR /opt/jboss ---> Using cache ---> 9a661ae4341b Step 6 : USER jboss ---> Using cache ---> 6265153611c7 Step 7 : USER root ---> Using cache ---> 12ed28a7acb7 Step 8 : RUN yum -y install java-1.7.0-openjdk-devel && yum clean all ---> Using cache ---> 44c4bb92fa11 Step 9 : USER jboss ---> Using cache ---> 930cb2a860f7 Step 10 : ENV JAVA_HOME /usr/lib/jvm/java ---> Using cache ---> fff2c21b0a71 Step 11 : ENV WILDFLY_VERSION 8.2.0.Final ---> Using cache ---> b7b7ca7a9172 Step 12 : RUN cd $HOME && curl -O 
http://download.jboss.org/wildfly/$WILDFLY_VERSION/wildfly-$WILDFLY_VERSION.zip && unzip wildfly-$WILDFLY_VERSION.zip && mv $HOME/wildfly-$WILDFLY_VERSION $HOME/wildfly && rm wildfly-$WILDFLY_VERSION.zip ---> Using cache ---> a1bc79a43c77 Step 13 : ENV JBOSS_HOME /opt/jboss/wildfly ---> Using cache ---> d46fdd618d55 Step 14 : EXPOSE 8080 9990 ---> Running in 9c2c2a5ef41c ---> 8988c8cbc051 Removing intermediate container 9c2c2a5ef41c Step 15 : CMD /opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 ---> Running in 9e28c3449ec1 ---> d989008d1f84 Removing intermediate container 9e28c3449ec1 Successfully built d989008d1f84docker build command builds the image, -t specifies the repository name to be applied to the resulting image. Once the image is built, it can be verified as:wildfly-centos> docker images REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE arungupta/wildfly-centos latest d989008d1f84 14 hours ago 619.6 MB wildfly-ubuntu latest a2e96e76eb10 43 hours ago 749.5 MB 0281986b0ed8 44 hours ago 749.5 MB 1a5e1aeadc85 44 hours ago 607.7 MB wildfly-centos latest 97c8780a7d6a 45 hours ago 619.6 MB registry latest 7e2db37c6564 13 days ago 411.6 MB centos latest ae0c2d0bdc10 3 weeks ago 224 MB jboss/wildfly latest 365390553f92 4 weeks ago 948.7 MB ubuntu latest 5506de2b643b 4 weeks ago 199.3 MBNotice the first line shows the named image arungupta/wildfly-centos. 
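Before pushing, it can be handy to sanity-check that a tag satisfies the <user>/<repo> naming rule behind the “root repository” error shown earlier. A minimal shell sketch — the helper function name is illustrative, and the image names are the ones used in this tip:

```shell
# Sketch: Docker Hub refuses pushes to "root" repositories, so verify
# that a tag contains a "<user>/<repo>" namespace before docker push.
check_repo_name() {
  case "$1" in
    */*) echo "ok to push: $1" ;;
    *)   echo "rename required: $1 -> <user>/$1" ;;
  esac
}

check_repo_name "arungupta/wildfly-centos"   # prints: ok to push: arungupta/wildfly-centos
check_repo_name "wildfly-centos"             # prints: rename required: wildfly-centos -> <user>/wildfly-centos
```

The second case is exactly the situation the error message complains about, fixed by rebuilding with docker build -t as shown above.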
This image can then be pushed to Docker Hub as:wildfly-centos> docker push arungupta/wildfly-centos The push refers to a repository [arungupta/wildfly-centos] (len: 1) Sending image list Pushing repository arungupta/wildfly-centos (1 tags) 511136ea3c5a: Image already pushed, skipping 5b12ef8fd570: Image already pushed, skipping ae0c2d0bdc10: Image already pushed, skipping e490dfcb3685: Image successfully pushed f212cb9dbcf5: Image successfully pushed 28b11e6151f0: Image successfully pushed 73603eab89b7: Image successfully pushed 9a661ae4341b: Image successfully pushed 6265153611c7: Image successfully pushed 12ed28a7acb7: Image successfully pushed 44c4bb92fa11: Image successfully pushed 930cb2a860f7: Image successfully pushed fff2c21b0a71: Image successfully pushed b7b7ca7a9172: Image successfully pushed a1bc79a43c77: Image successfully pushed d46fdd618d55: Image successfully pushed 8988c8cbc051: Image successfully pushed d989008d1f84: Image successfully pushed Pushing tag for rev [d989008d1f84] on {https://cdn-registry-1.docker.io/v1/repositories/arungupta/wildfly-centos/tags/latest}And you can verify this by pulling the image:wildfly-centos> docker pull arungupta/wildfly-centos Pulling repository arungupta/wildfly-centos d989008d1f84: Download complete 511136ea3c5a: Download complete 5b12ef8fd570: Download complete ae0c2d0bdc10: Download complete e490dfcb3685: Download complete f212cb9dbcf5: Download complete 28b11e6151f0: Download complete 73603eab89b7: Download complete 9a661ae4341b: Download complete 6265153611c7: Download complete 12ed28a7acb7: Download complete 44c4bb92fa11: Download complete 930cb2a860f7: Download complete fff2c21b0a71: Download complete b7b7ca7a9172: Download complete a1bc79a43c77: Download complete d46fdd618d55: Download complete 8988c8cbc051: Download complete Status: Image is up to date for arungupta/wildfly-centos:latestEnjoy!Reference: Pushing Docker images to Registry from our JCG partner Arun Gupta at the Miles to go 2.0 … blog....

Remove Docker image and container with a criteria

You have installed multiple Docker images and would like to clean them up using the rmi command. So, you list all the images as:

~> docker images --no-trunc REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE mysql latest 98840bbb442c7dc3640ffe3a8ec45d3fee934c2f6d85daaaa2edf65b380485a0 25 hours ago 236 MB wildfly-centos latest fc378232f03d04bff96987f4c23969461582f73c3a7b473a7cb823ac67939f48 5 days ago 619.6 MB arungupta/wildfly-centos latest e4f1dbdff18956621aa48a83e5b05df309ee002c3668fa452f1235465d881020 6 days ago 619.6 MB wildfly-ubuntu latest a2e96e76eb10f4df87d01965ce4df5310de6f9f3927aceb7f5642393050e8752 7 days ago 749.5 MB registry latest 7e2db37c6564bf030e6c5af9725bf9f9a8196846e3a77a51e201fc97871e2e60 2 weeks ago 411.6 MB centos latest ae0c2d0bdc100993f7093400f96e9abab6ddd9a7c56b0ceba47685df5a8fe906 4 weeks ago 224 MB jboss/wildfly latest 365390553f925f96f8c00f79525ad101847de7781bb4fec23b1188f25fe99a6a 5 weeks ago 948.7 MB centos/wildfly latest 1de9304f58bbc2d401b4dcbba6fc686bdd6f6bff473fe486e7cb905c02163b1a 6 weeks ago 606.6 MB

Then you try to remove the “arungupta/wildfly-centos” image as shown below, but get an error:

~> docker rmi e4f1dbdff18956621aa48a83e5b05df309ee002c3668fa452f1235465d881020 Error response from daemon: Conflict, cannot delete e4f1dbdff189 because the container bafc2b3327a4 is using it, use -f to force 2014/12/02 12:56:53 Error: failed to remove one or more images

So you follow the recommendation of using the -f switch but get another error:

~> docker rmi -f e4f1dbdff18956621aa48a83e5b05df309ee002c3668fa452f1235465d881020 Error response from daemon: No such id: c345720579e024df4f6d28d2062fda64b7743f7dbb214136d4d2285bc3afc95b 2014/12/02 12:56:55 Error: failed to remove one or more images

What do you do? This message indicates that the image is used by one of the containers and that’s why it could not be removed. The error message is very ambiguous, and issue #9458 has been filed about it.
In the meanwhile, an easy way to solve this is to list all the containers as shown:CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES bafc2b3327a4 arungupta/wildfly-centos:latest "/opt/jboss/wildfly/ 4 days ago boring_ptolemy bfe71d92a612 arungupta/wildfly-centos:latest "/opt/jboss/wildfly/ 4 days ago agitated_einstein e1c0965d202c arungupta/wildfly-centos:latest "/opt/jboss/wildfly/ 4 days ago thirsty_blackwell ddc400c26f1a mysql:latest "/entrypoint.sh mysq 5 days ago Exited (0) 27 minutes ago 3306/tcp sample-mysql 05c741b5e22f wildfly-centos:latest "/opt/jboss/wildfly/ 5 days ago Exited (130) 5 days ago agitated_lalande ff10b83d6c17 arungupta/wildfly-centos:latest "/opt/jboss/wildfly/ 5 days ago insane_wilson b2774b17460c arungupta/wildfly-centos:latest "/opt/jboss/wildfly/ 5 days ago goofy_pasteur 2d64f4eb8fb9 arungupta/wildfly-centos:latest "/opt/jboss/wildfly/ 5 days ago focused_lalande c3f61947671a arungupta/wildfly-centos:latest "/opt/jboss/wildfly/ 5 days ago silly_ardinghelli ac6f29b92c7a arungupta/wildfly-centos:latest "/opt/jboss/wildfly/ 5 days ago stoic_leakey fc16f3f8c139 wildfly-centos:latest "/opt/jboss/wildfly/ 5 days ago desperate_babbage 4555628a5d0a wildfly-centos:latest "/opt/jboss/wildfly/ 5 days ago Exited (-1) 4 days ago sharp_bardeen 3bdae1d2527a wildfly-centos:latest "/opt/jboss/wildfly/ 5 days ago Exited (130) 5 days ago sick_lovelace 2697c769c2ee wildfly-centos:latest "/opt/jboss/wildfly/ 5 days ago thirsty_fermat f8c686d1d6be wildfly-centos:latest "/opt/jboss/wildfly/ 5 days ago Exited (130) 5 days ago cranky_fermat a1945f2ca473 wildfly-centos:latest "/opt/jboss/wildfly/ 5 days ago Exited (-1) 4 days ago suspicious_turing 31b9c4df0633 arungupta/wildfly-centos:latest "/opt/jboss/wildfly/ 5 days ago distracted_franklin cd8dad2b1e22 c345720579e0 "/bin/sh -c '#(nop) 5 days ago cocky_blackwellThere are lots of containers that are using “arungupta/wildfly-centos” image but none of them seem to be running. 
If there are any containers that are running, then you need to stop them as:

docker rm $(docker stop $(docker ps -q))

Remove the containers that are using this image as:

docker ps -a | grep arungupta/wildfly-centos | awk '{print $1}' | xargs docker rm
bafc2b3327a4 bfe71d92a612 e1c0965d202c ff10b83d6c17 b2774b17460c 2d64f4eb8fb9 ac6f29b92c7a 31b9c4df0633

The criterion here is specified as a grep pattern. The docker ps command has other options to specify criteria as well, such as only the latest created containers or containers in a particular status. For example, containers that exited with status -1 can be seen as:

~> docker ps -a -f "exited=-1" CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 68aca76aa690 wildfly-centos:latest "/opt/jboss/wildfly/ 39 minutes ago Exited (-1) 37 minutes ago insane_yonath

All containers, as opposed to those meeting a specific criterion, can be removed as:

docker rm $(docker ps -aq)

And now the image can be easily removed as:

~> docker rmi e4f1dbdff189 Untagged: arungupta/wildfly-centos:latest Deleted: e4f1dbdff18956621aa48a83e5b05df309ee002c3668fa452f1235465d881020 Deleted: ad2899e176a2e73acbcf61909426786eaa195fcea7fb0aa27061431a3aae6633

Just like removing all containers, all images can be removed as:

docker rmi $(docker images -q)

Enjoy!

Reference: Remove Docker image and container with a criteria from our JCG partner Arun Gupta at the Miles to go 2.0 … blog....
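As an addendum to this tip, the grep-and-awk pipeline can be exercised without a Docker daemon by filtering a captured listing. A minimal sketch — the two sample lines mimic the docker ps -a output shown in the tip, and the variable names are illustrative:

```shell
# Sketch: extract the container IDs for one image from a
# `docker ps -a`-style listing, as the grep | awk pipeline does.
listing='bafc2b3327a4 arungupta/wildfly-centos:latest "/opt/jboss/wildfly/ 4 days ago
ddc400c26f1a mysql:latest "/entrypoint.sh mysq 5 days ago'

# Keep only rows for the target image, then take the first column (ID)
ids=$(printf '%s\n' "$listing" | grep 'arungupta/wildfly-centos' | awk '{print $1}')
echo "$ids"   # prints: bafc2b3327a4
```

In real use, those IDs would be piped on to xargs docker rm exactly as shown above.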

Beyond the JAX-RS spec: Apache CXF search extension

In today’s post we are going to look beyond the JAX-RS 2.0 specification and explore the useful extensions which Apache CXF, one of the popular JAX-RS 2.0 implementations, is offering to the developers of REST services and APIs. In particular, we are going to talk about the search extension, using a subset of the OData 2.0 query filters. In a nutshell, the search extension simply maps a filter expression to a set of matching typed entities (instances of Java classes). The OData 2.0 query filters may be very complex; however, at the moment Apache CXF supports only a subset of them:

Operator   Description             Example
eq         Equal                   city eq ‘Redmond’
ne         Not equal               city ne ‘London’
gt         Greater than            price gt 20
ge         Greater than or equal   price ge 10
lt         Less than               price lt 20
le         Less than or equal      price le 100
and        Logical and             price le 200 and price gt 3.5
or         Logical or              price le 3.5 or price gt 200

Basically, to configure and activate the search extension for your JAX-RS services it is enough to define two properties, search.query.parameter.name and search.parser, plus one additional provider, SearchContextProvider:

@Configuration
public class AppConfig {
  @Bean( destroyMethod = "shutdown" )
  public SpringBus cxf() {
    return new SpringBus();
  }

  @Bean @DependsOn( "cxf" )
  public Server jaxRsServer() {
    final Map< String, Object > properties = new HashMap< String, Object >();
    properties.put( "search.query.parameter.name", "$filter" );
    properties.put( "search.parser", new ODataParser< Person >( Person.class ) );

    final JAXRSServerFactoryBean factory = RuntimeDelegate.getInstance().createEndpoint( jaxRsApiApplication(), JAXRSServerFactoryBean.class );
    factory.setProvider( new SearchContextProvider() );
    factory.setProvider( new JacksonJsonProvider() );
    factory.setServiceBeans( Arrays.< Object >asList( peopleRestService() ) );
    factory.setAddress( factory.getAddress() );
    factory.setProperties( properties );

    return factory.create();
  }

  @Bean
  public JaxRsApiApplication jaxRsApiApplication() {
    return new
JaxRsApiApplication();
  }

  @Bean
  public PeopleRestService peopleRestService() {
    return new PeopleRestService();
  }
}

The search.query.parameter.name defines what would be the name of the query string parameter used as a filter (we set it to be $filter), while search.parser defines the parser to be used to parse the filter expression (we set it to be ODataParser parametrized with the Person class). The ODataParser is built on top of the excellent Apache Olingo project, which currently implements the OData 2.0 protocol (support for OData 4.0 is on the way). Once the configuration is done, any JAX-RS 2.0 service is able to benefit from search capabilities by injecting the contextual parameter SearchContext. Let us take a look at that in action by defining a REST service to manage people, represented by the following class Person:

public class Person {
  private String firstName;
  private String lastName;
  private int age;

  // Setters and getters here
}

The PeopleRestService would just allow creating new persons using HTTP POST and performing the search using HTTP GET, listed under the /search endpoint:

package com.example.rs;

import java.util.ArrayList;
import java.util.Collection;

import javax.ws.rs.FormParam;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.UriInfo;

import org.apache.cxf.jaxrs.ext.search.SearchCondition;
import org.apache.cxf.jaxrs.ext.search.SearchContext;

import com.example.model.Person;

@Path( "/people" )
public class PeopleRestService {
  private final Collection< Person > people = new ArrayList<>();

  @Produces( { MediaType.APPLICATION_JSON } )
  @POST
  public Response addPerson( @Context final UriInfo uriInfo,
      @FormParam( "firstName" ) final String firstName,
      @FormParam( "lastName" ) final String lastName,
      @FormParam( "age" ) final int age ) {
    final Person person = new Person( firstName, lastName, age );
    people.add( person );
    return Response
      .created( uriInfo.getRequestUriBuilder().path( "/search" )
        .queryParam( "$filter=firstName eq '{firstName}' and lastName eq '{lastName}' and age eq {age}" )
        .build( firstName, lastName, age ) )
      .entity( person ).build();
  }

  @GET
  @Path("/search")
  @Produces( { MediaType.APPLICATION_JSON } )
  public Collection< Person > findPeople( @Context SearchContext searchContext ) {
    final SearchCondition< Person > filter = searchContext.getCondition( Person.class );
    return filter.findAll( people );
  }
}

The findPeople method is the one we are looking for. Thanks to all the heavy lifting which Apache CXF does, the method looks very simple: the SearchContext is injected and the filter expression is automatically picked up from the $filter query string parameter. The last part is to apply the filter to the data, which in our case is just a collection named people. Very clean and straightforward. Let us build the project and run it:

mvn clean package
java -jar target/cxf-search-extension-0.0.1-SNAPSHOT.jar

Using the awesome curl tool, let us issue a couple of HTTP POST requests to generate some data to run the filter queries against:

> curl http://localhost:8080/rest/api/people -X POST -d "firstName=Tom&lastName=Knocker&age=16" { "firstName": "Tom", "lastName": "Knocker", "age": 16 }

> curl http://localhost:8080/rest/api/people -X POST -d "firstName=Bob&lastName=Bobber&age=23" { "firstName": "Bob", "lastName": "Bobber", "age": 23 }

> curl http://localhost:8080/rest/api/people -X POST -d "firstName=Tim&lastName=Smith&age=50" { "firstName": "Tim", "lastName": "Smith", "age": 50 }

With sample data in place, let us go ahead and come up with a couple of different search criteria, complicated enough to show off the power of the OData 2.0 query filters:

find all persons whose first name is Bob ($filter=“firstName eq ‘Bob'”)

> curl -G -X GET http://localhost:8080/rest/api/people/search --data-urlencode $filter="firstName eq 'Bob'" [ { "firstName": "Bob", "lastName": "Bobber", "age": 23 } ]

find all persons whose last name is Bobber, or whose last name is Smith and first name is not Bob ($filter=“lastName eq ‘Bobber’ or (lastName eq ‘Smith’ and firstName ne ‘Bob’)”)

> curl -G -X GET http://localhost:8080/rest/api/people/search --data-urlencode $filter="lastName eq 'Bobber' or (lastName eq 'Smith' and firstName ne 'Bob')" [ { "firstName": "Bob", "lastName": "Bobber", "age": 23 }, { "firstName": "Tim", "lastName": "Smith", "age": 50 } ]

find all persons whose first name starts with the letter T and who are 16 or older ($filter=“firstName eq ‘T*’ and age ge 16″)

> curl -G -X GET http://localhost:8080/rest/api/people/search --data-urlencode $filter="firstName eq 'T*' and age ge 16" [ { "firstName": "Tom", "lastName": "Knocker", "age": 16 }, { "firstName": "Tim", "lastName": "Smith", "age": 50 } ]

Note: if you run these commands on a Linux-like environment, you may need to escape the $ sign using \$ instead, for example: curl -X GET -G http://localhost:8080/rest/api/people/search –data-urlencode \$filter=”firstName eq ‘Bob'”

At the moment, Apache CXF offers just basic support for OData 2.0 query filters, with many powerful expressions left aside. However, there is a commitment to push it forward once the community expresses enough interest in using this feature. It is worth mentioning that OData 2.0 query filters are not the only option available. The search extension also supports FIQL (the Feed Item Query Language), and this great article from one of the core Apache CXF developers is a great introduction to it. I think this quite useful feature of Apache CXF can save a lot of time and effort by providing simple (and not so simple) search capabilities to your JAX-RS 2.0 services. Please give it a try if it fits your application needs.

The complete project source code is available on Github.

Reference: Beyond the JAX-RS spec: Apache CXF search extension from our JCG partner Andrey Redko at the Andriy Redko {devmind} blog....

Continuous Deployment: Implementation

This article is part of the Continuous Integration, Delivery and Deployment series. The previous post described several Continuous Deployment strategies. In this one we will attempt to provide one possible solution for reliable, fast and automatic continuous deployment, with the ability to test new releases before they become available to general users. If something goes wrong we should be able to roll back easily. On top of that, we'll try to accomplish zero-downtime: no matter how many times we deploy our applications, there should never be a single moment when they are not operational. To summarize, our goals are:

to deploy on every commit or as often as needed
to be fast
to be automated
to be able to roll back
to have zero-downtime

Setting up the stage

Let's set up the technological part of the story. The application will be deployed as a Docker container. Docker is an open source platform that can be used to build, ship and run distributed applications. While Docker can be deployed on any operating system, my preference is to use CoreOS, a Linux distribution that provides features needed to run modern architecture stacks. An advantage CoreOS has over others is that it is very lightweight. It has only a few tools, and they are just those that we need for continuous deployment. We'll use Vagrant to create a virtual machine with CoreOS. Two specifically useful tools that come pre-installed on CoreOS are etcd (a key-value store for shared configuration and service discovery) and systemd (a suite of system management daemons, libraries and utilities). We'll use nginx as our reverse proxy server. Its templates will be maintained by confd, which is designed to manage application configuration files using templates and data from etcd. Finally, as an example application we'll deploy (many times) BDD Assistant. It can be used as a helper tool for BDD development and testing.
The reason for including it is that we'll need a full-fledged application that can be used to demonstrate the deployment strategy we're about to explore. I'm looking for early adopters of the application. If you're interested, please contact me and I'll provide all the help you might need.

CoreOS

If you do not already have an instance of CoreOS up and running, the continuous-deployment repository contains a Vagrantfile that can be used to bring one up. Please clone that repo or download and unpack the ZIP file. To run the OS, please install Vagrant and run the following command from the directory with the cloned (or unpacked) repository.

vagrant up

Once creation and startup of the VM is finished, we can enter CoreOS using:

vagrant ssh

From now on you should be inside CoreOS.

Docker

We'll use the BDD Assistant as an example simulation of Continuous Deployment. A container with the application is created on every commit made to the BDD Assistant repo. For now we'll run it directly with Docker. Further on we'll refine the deployment to be more resilient. Once the command below is executed it will start downloading the container images. The first run might take a while; the good news is that images are cached, so later on it will update very fast when there is a new version and run in a matter of seconds.

# Run container technologyconversationsbdd and expose port 9000
docker run --name bdd_assistant -d -p 9000:9000 vfarcic/technologyconversationsbdd

It might take a while until all Docker images are downloaded for the first time. From there on, starting and stopping the service is very fast. To see the result, open http://localhost:9000/ in your browser. That was easy. With one command we downloaded a fully operational application with an AngularJS front-end, Play! web server, REST API, etc. The container itself is self-sufficient and immutable. A new release would be a whole new container.
There's nothing to configure (except the port the application is running on) and nothing to update when a new release is made. It simply works.

etcd

Let's move on to etcd.

etcd &

From now on, we can use it to store and retrieve information we need. As an example, we can store the port BDD Assistant is running on. That way, any application that needs to be integrated with it can retrieve the port and, for example, use it to invoke the application API.

# Set value for a given key
etcdctl set /bdd-assistant/port 9000
# Retrieve stored value
etcdctl get /bdd-assistant/port

That was a very simple (and fast) way to store any key/value that we might need. It will come in handy very soon.

nginx

At the moment, our application is running on port 9000. Instead of opening localhost:9000 (or whatever port it's running on) it would be better if it would simply run on localhost. We can use an nginx reverse proxy to accomplish that. This time we won't call Docker directly but run it as a service through systemd.

# Create directories for configuration files
sudo mkdir -p /etc/nginx/{sites-enabled,certs-enabled}
# Create directories for logs
sudo mkdir -p /var/log/nginx
# Copy nginx service
sudo cp /vagrant/nginx.service /etc/systemd/system/nginx.service
# Enable nginx service
sudo systemctl enable /etc/systemd/system/nginx.service

The nginx.service file tells systemd what to do when we want to start, stop or restart some service. In our case, the service is created using the Docker nginx container. Let's start the nginx service (the first time it might take a while to pull the Docker image).

# Start nginx service
sudo systemctl start nginx.service
# Check whether nginx is running as a Docker container
docker ps

As you can see, nginx is running as a Docker container. Let's stop it.

# Stop nginx service
sudo systemctl stop nginx.service
# Check whether nginx is running as a Docker container
docker ps

Now it disappeared from Docker processes. It's as easy as that.
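For illustration only, a nginx.service unit that runs nginx as a Docker container might look roughly like this. This is a hypothetical sketch, not the actual file copied from the /vagrant directory, which may differ:

```ini
[Unit]
Description=nginx reverse proxy (Docker container)
After=docker.service
Requires=docker.service

[Service]
# "-" prefix: ignore the error if the container does not exist yet
ExecStartPre=-/usr/bin/docker rm -f nginx
# Run in the foreground so that systemd can supervise the process
ExecStart=/usr/bin/docker run --name nginx -p 80:80 \
  -v /etc/nginx/sites-enabled:/etc/nginx/sites-enabled \
  -v /var/log/nginx:/var/log/nginx \
  nginx
ExecStop=/usr/bin/docker stop nginx

[Install]
WantedBy=multi-user.target
```

The pattern of delegating ExecStart to docker run is what makes systemctl start/stop translate directly into container start/stop, as seen in the docker ps output above.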
We can start and stop any Docker container in no time (assuming that images were already downloaded). We'll need nginx up and running for the rest of the article, so let's start it up again.

sudo systemctl start nginx.service

confd

We need something to tell nginx what port to redirect to when BDD Assistant is requested. We'll use confd for that. Let's set it up.

# Download confd
wget -O confd https://github.com/kelseyhightower/confd/releases/download/v0.6.3/confd-0.6.3-linux-amd64
# Put it in the bin directory so that it is easily accessible
sudo cp confd /opt/bin/.
# Give it execution permissions
sudo chmod +x /opt/bin/confd

The next step is to configure confd to modify nginx routes and reload them every time we deploy our application.

# Create configuration and templates directories
sudo mkdir -p /etc/confd/{conf.d,templates}
# Copy configuration
sudo cp /vagrant/bdd_assistant.toml /etc/confd/conf.d/.
# Copy template
sudo cp /vagrant/bdd_assistant.conf.tmpl /etc/confd/templates/.

Both bdd_assistant.toml and bdd_assistant.conf.tmpl are in the repo you already downloaded. Let's see how it works.

sudo confd -onetime -backend etcd -node 127.0.0.1:4001
cat /etc/nginx/sites-enabled/bdd_assistant.conf
wget localhost; cat index.html

We just updated the nginx template to use the port previously set in etcd. Now you can open http://localhost:8000/ in your browser (Vagrant is set to expose the default port 80 as 8000). Even though the application is running on port 9000, we set up nginx to redirect requests from the default port 80 to port 9000. Let's stop and remove the BDD Assistant container. We'll create it again using all the tools we have seen by now.

docker stop bdd_assistant
docker rm bdd_assistant
docker ps

BDD Assistant Deployer

Now that you are familiar with the tools, it's time to tie them all together. We will practice Blue-Green Deployment. That means that we will have one release up and running (blue). When a new release (green) is deployed, it will run in parallel.
Once it's up and running, nginx will redirect all requests to it instead of to the old one. Each consecutive release will follow the same process: deploy over blue, redirect requests from green to blue, deploy over green, redirect requests from blue to green, etc. Rollbacks will be easy to do; we would just need to change the reverse proxy. There will be zero downtime, since the new release will be up and running before we start redirecting requests. Everything will be fully automated and very fast. With all that in place, we'll be able to deploy as often as we want (preferably on every commit to the repository).

sudo cp /vagrant/bdd_assistant.service /etc/systemd/system/bdd_assistant_blue@9001.service
sudo cp /vagrant/bdd_assistant.service /etc/systemd/system/bdd_assistant_green@9002.service
sudo systemctl enable /etc/systemd/system/bdd_assistant_blue@9001.service
sudo systemctl enable /etc/systemd/system/bdd_assistant_green@9002.service
# sudo systemctl daemon-reload
etcdctl set /bdd-assistant/instance none
sudo chmod 744 /vagrant/deploy_bdd_assistant.sh
sudo cp /vagrant/deploy_bdd_assistant.sh /opt/bin/.

We just created two BDD Assistant services: blue and green. Each of them will run on a different port (9001 and 9002) and store relevant information in etcd. deploy_bdd_assistant.sh is a simple script that starts the new service, updates the nginx template using confd and, finally, stops the old service. Both the BDD Assistant service and deploy_bdd_assistant.sh are available in the repo you already downloaded. Let's try it out.

sudo deploy_bdd_assistant.sh

A new release will be deployed each time we run the script deploy_bdd_assistant.sh. We can confirm that by checking what value is stored in etcd, looking at Docker processes and, finally, running the application in the browser.

docker ps
etcdctl get /bdd-assistant/port

The Docker process should change from running the blue deployment on port 9001 to running green on port 9002 and the other way around.
The port stored in etcd should be changing from 9001 to 9002 and vice versa. Whichever version is deployed, http://localhost:8000/ will always be working in your browser, no matter whether we are in the middle of a deployment or have already finished it. Repeat the execution of the script deploy_bdd_assistant.sh as many times as you like. It should always deploy the latest version. For brevity I excluded deployment verification from this article. In the "real world", after the new container is run and before the reverse proxy is set to point to it, we should run all sorts of tests (functional, integration and stress) that would validate that the changes to the code are correct.

Continuous Delivery and Deployment

The process described above should be tied to your CI/CD server (Jenkins, Bamboo, GoCD, etc). One possible Continuous Delivery procedure would be:

Commit the code to VCS (Git, SVN, etc)
Run all static analysis
Run all unit tests
Build the Docker container
Deploy to the test environment
- Run the container with the new version
- Run automated functional, integration (i.e. BDD) and stress tests
- Perform manual tests
- Change the reverse proxy to point to the new container
Deploy to the production environment
- Run the container with the new version
- Run automated functional, integration (i.e. BDD) and stress tests
- Change the reverse proxy to point to the new container

Ideally, there should be no manual tests, and in that case the manual-testing step is not necessary. We would have Continuous Deployment that automatically deploys every single commit that passed all tests to production. If manual verification is unavoidable, we have Continuous Delivery to test environments, and software would be deployed to production at the click of a button inside the CI/CD server we're using.
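The bookkeeping at the heart of such a deploy script, decide which color is live, deploy the other one, then flip, can be sketched in a few lines of Java. This is a hypothetical illustration of the blue-green toggle only, not the actual deploy_bdd_assistant.sh (which also starts the container, runs confd and stops the old service):

```java
// Sketch of blue-green deployment bookkeeping (hypothetical helper class).
public class BlueGreen {
    static final int BLUE_PORT = 9001;
    static final int GREEN_PORT = 9002;

    // The next deployment target is always the color that is NOT currently live.
    // On the very first run, etcd holds "none", so we start with blue.
    static String nextColor(String current) {
        return "blue".equals(current) ? "green" : "blue";
    }

    static int portFor(String color) {
        return "blue".equals(color) ? BLUE_PORT : GREEN_PORT;
    }

    public static void main(String[] args) {
        String live = "none"; // value initially stored under /bdd-assistant/instance
        for (int i = 0; i < 3; i++) {
            String next = nextColor(live);
            System.out.println("deploying " + next + " on port " + portFor(next));
            // ... here the real script starts the container, runs confd to
            // repoint nginx, and stops the previously live service ...
            live = next;
        }
        // alternates: blue (9001), green (9002), blue (9001)
    }
}
```

Because the decision depends only on the single value kept in etcd, any machine in the cluster can run the deployment and still flip to the correct color.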
Summary

No matter whether we choose continuous delivery or deployment, when our process is completely automated (from build through tests until deployment itself), we can spend time working on things that bring more value while letting scripts do the work for us. Time to market should decrease drastically, since we can have features available to users as soon as the code is committed to the repository. It's a very powerful and valuable concept. In case of any trouble following the exercises, you can skip them and go directly to running the deploy_bdd_assistant.sh script. Just remove the comments (#) from the Vagrantfile. If the VM is already up and running, destroy it.

vagrant destroy

Create the new VM and run the deploy_bdd_assistant.sh script.

vagrant up
vagrant ssh
sudo deploy_bdd_assistant.sh

Hopefully you can see the value in Docker. It's a game changer when compared to more traditional ways of building and deploying software. New doors have been opened for us and we should step through them. BDD Assistant and its deployment with Docker can be even better. We could split the application into smaller microservices. It could, for example, have the front-end as a separate container. The back-end could be split into smaller services (story management, story runner, etc). Those microservices can be deployed to the same or different machines and orchestrated with Fleet. Microservices will be the topic of the next article.

Reference: Continuous Deployment: Implementation from our JCG partner Viktor Farcic at the Technology conversations blog....

Do You Really Understand SQL’s GROUP BY and HAVING clauses?

There are some things in SQL that we simply take for granted without thinking about them properly. Among these are the GROUP BY and the less popular HAVING clauses. Let's look at a simple example. For this example, we'll reiterate the example database we've seen in this previous article about the awesome LEAD(), LAG(), FIRST_VALUE(), LAST_VALUE() functions:

CREATE TABLE countries (
  code CHAR(2) NOT NULL,
  year INT NOT NULL,
  gdp_per_capita DECIMAL(10, 2) NOT NULL,
  govt_debt DECIMAL(10, 2) NOT NULL
);

Before there were window functions, aggregations were made only with GROUP BY. A typical question that we could ask our database using SQL is:

What are the top 3 average government debts in percent of the GDP for those countries whose GDP per capita was over 40'000 dollars in every year in the last four years?

Whew. Some (academic) business requirements. In SQL (PostgreSQL dialect), we would write:

select code, avg(govt_debt)
from countries
where year > 2010
group by code
having min(gdp_per_capita) >= 40000
order by 2 desc
limit 3

Or, with inline comments:

-- The average government debt
select code, avg(govt_debt)

-- for those countries
from countries

-- in the last four years
where year > 2010

-- yepp, for the countries
group by code

-- whose GDP p.c. was over 40'000 in every year
having min(gdp_per_capita) >= 40000

-- The top 3
order by 2 desc
limit 3

The result being:

code     avg
------------
JP    193.00
US     91.95
DE     56.00

Remember the 10 easy steps to a complete understanding of SQL:

FROM generates the data set
WHERE reduces the generated data set
GROUP BY aggregates the reduced data set
HAVING reduces the aggregated data set
SELECT transforms the reduced aggregated data set
ORDER BY sorts the transformed data set
LIMIT .. OFFSET frames the sorted data set

… where LIMIT .. OFFSET may come in very different flavours.

The empty GROUP BY clause

A very special case of GROUP BY is the explicit or implicit empty GROUP BY clause.
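As an aside, that logical clause order can be mimicked in plain Java with streams, where each stage of the pipeline corresponds to one clause. The sketch below runs over a hypothetical in-memory Row record with made-up values chosen to land on the same result as the query above; it is an illustration of the semantics, not of how any database executes the query:

```java
import java.util.*;
import java.util.stream.*;

public class GroupByHaving {
    // Hypothetical in-memory stand-in for the countries table
    record Row(String code, int year, double gdpPerCapita, double govtDebt) {}

    static List<String> topAverageDebts(List<Row> countries) {
        return countries.stream()
            .filter(r -> r.year() > 2010)                         // WHERE reduces the data set
            .collect(Collectors.groupingBy(Row::code))            // GROUP BY aggregates it
            .entrySet().stream()
            .filter(e -> e.getValue().stream()                    // HAVING reduces the aggregated set
                .mapToDouble(Row::gdpPerCapita).min().orElse(0) >= 40000)
            .map(e -> Map.entry(e.getKey(),                       // SELECT transforms it
                e.getValue().stream().mapToDouble(Row::govtDebt).average().orElse(0)))
            .sorted(Map.Entry.<String, Double>comparingByValue().reversed()) // ORDER BY 2 DESC
            .limit(3)                                             // LIMIT frames it
            .map(e -> e.getKey() + " " + e.getValue())
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Row> countries = List.of(
            new Row("JP", 2011, 46548, 210.0), new Row("JP", 2012, 46548, 176.0),
            new Row("US", 2011, 49855, 95.0),  new Row("US", 2012, 51755, 88.9),
            new Row("DE", 2011, 44355, 55.0),  new Row("DE", 2012, 42598, 57.0),
            new Row("RU", 2011, 14091, 11.0),  new Row("RU", 2012, 14091, 12.0));
        // Prints JP, US, DE with their average debts: the same ranking as the SQL result
        topAverageDebts(countries).forEach(System.out::println);
    }
}
```

Seen this way, it is also obvious why a HAVING predicate may aggregate (min(...)) while a WHERE predicate may not: HAVING runs after the grouping, WHERE before it.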
Here's a question that we could ask our database:

Are there any countries at all with a GDP per capita of more than 50'000 dollars?

And in SQL, we'd write:

select true answer
from countries
having max(gdp_per_capita) >= 50000

The result being:

answer
------
t

You could of course have used the EXISTS clause instead (please don't use COUNT(*) in these cases):

select exists(
  select 1
  from countries
  where gdp_per_capita >= 50000
);

And we would get, again:

answer
------
t

… but let's focus on the plain HAVING clause. Not everyone knows that HAVING can be used all by itself, or what it even means to have HAVING all by itself. Already the SQL-92 standard allowed for the use of HAVING without GROUP BY, but it wasn't until the introduction of GROUPING SETS in SQL:1999 that the semantics of this syntax were retroactively, unambiguously defined:

7.10 <having clause>

<having clause> ::= HAVING <search condition>

Syntax Rules

1) Let HC be the <having clause>. Let TE be the <table expression> that immediately
   contains HC. If TE does not immediately contain a <group by clause>, then
   GROUP BY ( ) is implicit.

That's interesting. There is an implicit GROUP BY ( ) if we leave out the explicit GROUP BY clause.
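In plain Java terms, that implicit grand-total group simply means: aggregate over one group that contains every row. A small sketch with made-up values (hypothetical data, not the article's table):

```java
import java.util.List;

public class GrandTotal {
    // select max(gdp_per_capita) from countries
    // -- one aggregate over the single, implicit grand-total group
    static double grandTotalMax(List<Double> gdpPerCapita) {
        return gdpPerCapita.stream().mapToDouble(Double::doubleValue).max().orElseThrow();
    }

    public static void main(String[] args) {
        List<Double> gdpPerCapita = List.of(46999.0, 48358.0, 51791.0, 52409.0);
        double max = grandTotalMax(gdpPerCapita);
        System.out.println(max);

        // having max(gdp_per_capita) >= 50000 without GROUP BY:
        // the predicate is evaluated once, against that single group,
        // yielding either one row (true) or no rows at all
        System.out.println(max >= 50000);
    }
}
```

Note that the grand-total group exists even for an empty table (one group, zero rows), which is why select max(...) over an empty table returns a single NULL row rather than no row.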
If you're willing to delve into the SQL standard a bit more, you'll find:

<group by clause> ::= GROUP BY <grouping specification>

<grouping specification> ::=
    <grouping column reference>
  | <rollup list>
  | <cube list>
  | <grouping sets list>
  | <grand total>
  | <concatenated grouping>

<grouping set> ::=
    <ordinary grouping set>
  | <rollup list>
  | <cube list>
  | <grand total>

<grand total> ::= <left paren> <right paren>

So, GROUP BY ( ) is essentially grouping by a "grand total", which is what's intuitively happening if we just look for the highest ever GDP per capita:

select max(gdp_per_capita)
from countries;

Which yields:

max
--------
52409.00

The above query is also implicitly the same as this one (which isn't supported by PostgreSQL):

select max(gdp_per_capita)
from countries
group by ();

The awesome GROUPING SETs

In this section of the article, we'll be leaving PostgreSQL land, entering SQL Server land, as PostgreSQL shamefully doesn't implement any of the following (yet). Now, we cannot understand the grand total (the empty GROUP BY ( ) clause) without having a short look at the SQL:1999 standard GROUPING SETS. Some of you may have heard of the CUBE() or ROLLUP() grouping functions, which are just syntactic sugar for commonly used GROUPING SETS. Let's try to answer this question in a single query:

What are the highest GDP per capita values per year OR per country?

In SQL, we'll write:

select code, year, max(gdp_per_capita)
from countries
group by grouping sets ((code), (year))

Which yields two concatenated sets of records:

code year max
------------------------
NULL 2009 46999.00 <- grouped by year
NULL 2010 48358.00
NULL 2011 51791.00
NULL 2012 52409.00

CA   NULL 52409.00 <- grouped by code
DE   NULL 44355.00
FR   NULL 42578.00
GB   NULL 38927.00
IT   NULL 36988.00
JP   NULL 46548.00
RU   NULL 14091.00
US   NULL 51755.00

That's kind of nice, isn't it?
It's essentially just the same thing as this query with UNION ALL:

select code, null, max(gdp_per_capita)
from countries
group by code
union all
select null, year, max(gdp_per_capita)
from countries
group by year;

In fact, it's exactly the same thing, as the latter explicitly concatenates two sets of grouped records… i.e. two GROUPING SETS. This SQL Server documentation page also explains it very nicely. And the most powerful of them all: CUBE() Now, imagine you'd like to add the "grand total", and also the highest value per country AND year, producing four different concatenated sets. To limit the results, we'll also filter out GDPs of less than 48000 for this example:

select code, year, max(gdp_per_capita), grouping_id(code, year) grp
from countries
where gdp_per_capita >= 48000
group by grouping sets (
  (),
  (code),
  (year),
  (code, year)
)
order by grp desc;

This nice-looking query will now produce all the possible grouping combinations that we can imagine, including the grand total, in order to produce:

code year max      grp
---------------------------------
NULL NULL 52409.00 3  <- grand total

NULL 2012 52409.00 2  <- group by year
NULL 2010 48358.00 2
NULL 2011 51791.00 2

CA   NULL 52409.00 1  <- group by code
US   NULL 51755.00 1

US   2010 48358.00 0  <- group by code and year
CA   2012 52409.00 0
US   2012 51755.00 0
CA   2011 51791.00 0
US   2011 49855.00 0

And because this is quite a common operation in reporting and in OLAP, we can simply write the same by using the CUBE() function:

select code, year, max(gdp_per_capita), grouping_id(code, year) grp
from countries
where gdp_per_capita >= 48000
group by cube(code, year)
order by grp desc;

Compatibility

While the first couple of queries also worked on PostgreSQL, the ones that are using GROUPING SETS will work only on 4 out of the 17 RDBMS currently supported by jOOQ. These are:

DB2
Oracle
SQL Server
Sybase SQL Anywhere

jOOQ also fully supports the previously mentioned syntaxes.
The GROUPING SETS variant can be written as such:

// Countries is an object generated by the jOOQ
// code generator for the COUNTRIES table.
Countries c = COUNTRIES;

ctx.select(
       c.CODE,
       c.YEAR,
       max(c.GDP_PER_CAPITA),
       groupingId(c.CODE, c.YEAR).as("grp"))
   .from(c)
   .where(c.GDP_PER_CAPITA.ge(new BigDecimal("48000")))
   .groupBy(groupingSets(new Field[][] {
       {},
       { c.CODE },
       { c.YEAR },
       { c.CODE, c.YEAR }
   }))
   .orderBy(fieldByName("grp").desc())
   .fetch();

… or the CUBE() version:

ctx.select(
       c.CODE,
       c.YEAR,
       max(c.GDP_PER_CAPITA),
       groupingId(c.CODE, c.YEAR).as("grp"))
   .from(c)
   .where(c.GDP_PER_CAPITA.ge(new BigDecimal("48000")))
   .groupBy(cube(c.CODE, c.YEAR))
   .orderBy(fieldByName("grp").desc())
   .fetch();

… and in the future, we'll emulate GROUPING SETS by their equivalent UNION ALL queries in those databases that do not natively support GROUPING SETS.

Reference: Do You Really Understand SQL's GROUP BY and HAVING clauses? from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....

Manipulating JARs, WARs, and EARs on the Command Line

Although Java IDEs and numerous graphical tools make it easier than ever to view and manipulate the contents of Java archive (JAR, WAR, and EAR) files, there are times when I prefer to use the command-line jar command to accomplish these tasks. This is particularly true when I have to do something repeatedly or when I am doing it as part of a script. In this post, I look at the use of the jar command to create, view, and manipulate Java archive files. JAR files will be used primarily in this post, but the commands I demonstrate on .jar files work consistently with .war and .ear files. It's worth keeping in mind that the JAR format is based on the ZIP format, so the numerous tools available for working with ZIP files can be applied to JAR, WAR, and EAR files. The jar options also tend to mirror tar's options. For my examples, I want to jar up and work with some .class files. The next screen snapshot demonstrates compiling some Java source code files (.java files) into .class files. The actual source of these files is insignificant to this discussion and is not shown here. I have shown compiling these without an IDE to be consistent with using command-line tools in this post.

Preparing the Files to Be Used in the jar Examples

The next screen snapshot shows my .class files have been compiled and are ready to be included in a JAR.

Creating a JAR File

The "c" option provided to the jar command instructs it to create an archive. I like to use the "v" (verbose) and "f" (filename) options with all jar commands that I run so that the output will be verbose (to help see that something is happening and that it's the correct thing that's happening) and so that the applicable JAR/WAR/EAR filename can be provided as part of the command rather than input or output depending on standard input and standard output.
In the case of creating a JAR file, the options "cvf" will create a JAR file (c) with the specified name (f) and print out verbose output (v) regarding this creation. The next screen snapshot demonstrates the simplest use of jar cvf. I have changed my current directory to the "classes" directory so that creating the JAR is as simple as running jar cvf MyClasses.jar * (or jar cvf MyClasses.jar .), and all files in the current directory, all subdirectories, and all files in subdirectories will be included in the created JAR file. This process is demonstrated in the next screen snapshot.

If I don't want to explicitly change my current directory to the most appropriate directory from which to build the JAR before running jar, I can use the -C option to instruct jar to implicitly do this as part of its creation process. This is demonstrated in the next screen snapshot.

Listing Archive's Contents

Listing (or viewing) the contents of a JAR, WAR, or EAR file is probably the function I perform most with the jar command. I typically use the options "t" (list contents of archive), "v" (verbose), and "f" (filename specified on command line) for this. The next screen snapshot demonstrates running jar tvf MyClasses.jar to view the contents of my generated JAR file.

Extracting Contents of Archive File

It is sometimes desirable to extract one or many of the files contained in an archive file to work on or view the contents of these individual files. This is done using jar's "x" (for extract) option. The next screen snapshot demonstrates using jar xvf MyClasses.jar to extract all the contents of that JAR file. Note that the original JAR file is left intact, but its contents are all now available directly as well.

I often only need to view or work with one or two files of the archive file. Although I could definitely extract all of them as shown in the last example and only edit those I need to edit, I prefer to extract only the files I need if the number of them is small. This is easily done with the same jar xvf command.
By specifying the fully qualified files to extract explicitly after the archive file's name in the command, I can instruct jar to only extract those specific files. This is advantageous because I don't fill my directory up with files I don't care about, and I don't need to worry about cleaning up as much when I'm done. The next screen snapshot demonstrates running jar xvf MyClasses.jar dustin/examples/jar/GrandParent.class to extract only that single class definition for GrandParent rather than extracting all the files in that JAR.

Updating an Archive File

Previous examples have demonstrated providing the jar command with "c" to create an archive, "t" to list an archive's contents, and "x" to extract an archive's contents. Another commonly performed function is to update an existing archive's contents, and this is accomplished with jar's "u" option. The next screen snapshot demonstrates creating a text file (in DOS with the copy con command) called tempfile.txt and then using jar uvf MyClasses.jar tempfile.txt to update MyClasses.jar and add tempfile.txt to that JAR.

If I want to update a file in an existing archive, I can extract that file using jar xvf, modify the file as desired, and place it back in the original JAR with the jar uvf command. The new file will overwrite the pre-existing one of the same name. This is simulated in the next screen snapshot.

Deleting an Entry from Archive File

It is perhaps a little surprising to see no option for deleting entries from a Java archive file when reading the jar man page, the Oracle tools description of jar, or the Java Tutorials coverage of jar. One way to accomplish this is to extract the contents of a JAR, remove the files that are no longer desired, and re-create the JAR from the directories with those files removed. However, a much easier approach is to simply take advantage of the Java archive format being based on ZIP and use ZIP-based tools' deletion capabilities.
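Incidentally, one such ZIP-based capability ships with the JDK itself: the built-in ZIP file system provider can mount an archive and delete an entry in place, the programmatic equivalent of zip -d MyClasses.jar tempfile.txt. A self-contained sketch (the entry and file names mirror the article's example, but the archive here is a throwaway created by the code itself):

```java
import java.net.URI;
import java.nio.file.*;
import java.util.Map;

public class DeleteJarEntry {
    // Mount the archive as a file system and delete the entry in place.
    static void deleteEntry(Path archive, String entry) throws Exception {
        try (FileSystem zipFs = FileSystems.newFileSystem(
                URI.create("jar:" + archive.toUri()), Map.of())) {
            Files.deleteIfExists(zipFs.getPath(entry));
        }
    }

    public static void main(String[] args) throws Exception {
        // Build a throwaway JAR with a single entry so the demo is self-contained
        Path jar = Files.createTempFile("MyClasses", ".jar");
        Files.delete(jar); // "create" mode below expects the file not to exist yet
        try (FileSystem zipFs = FileSystems.newFileSystem(
                URI.create("jar:" + jar.toUri()), Map.of("create", "true"))) {
            Files.writeString(zipFs.getPath("tempfile.txt"), "temporary content");
        }

        deleteEntry(jar, "tempfile.txt");

        try (FileSystem zipFs = FileSystems.newFileSystem(
                URI.create("jar:" + jar.toUri()), Map.of())) {
            System.out.println(Files.exists(zipFs.getPath("tempfile.txt"))); // false
        }
    }
}
```

The same mounted file system can equally add or overwrite entries with Files.write, making it a handy scripting alternative to jar u as well.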
The next screen snapshot demonstrates using 7-Zip (on Windows) to delete tempfile.txt from MyClasses.jar by running the command 7z d MyClasses.jar tempfile.txt. Note that the same thing can be accomplished in Linux with zip -d MyClasses.jar tempfile.txt. Other ZIP-supporting tools have their own options.

WAR and EAR Files

All of the examples in this post have been run against JAR files, but they work equally well with WAR and EAR files. As a very simplistic example of this, the next screen snapshot demonstrates using jar uvf to update a WAR file with a new web descriptor. The content of the actual files involved does not matter for purposes of this illustration. The important observation to make is that a WAR file can be manipulated in the exact same manner as a JAR file. This also applies to EAR files.

Other jar Operations and Options

In this post, I focused on the "CRUD" operations (Create/Read/Update/Delete) and extraction that can be performed from the command line on Java archive files. I typically used the applicable "CRUD" operation command ("c", "t", "u") or extraction command ("x") in conjunction with the common options "v" (verbose) and "f" (Java archive file name explicitly specified on the command line). The jar command supports operations other than these, such as "M" (controlling Manifest file creation) and "0" (controlling compression). I also did not demonstrate using "i" to generate index information for a Java archive.

Additional Resources on Working with Java Archives

I referenced these previously, but summarize them here for convenience.

Java Tutorials: Packaging Programs in JAR Files
Oracle Tools Documentation on jar Command
jar man Page

Conclusion

The jar command is relatively easy to use and can be the quickest approach for creating, viewing, and modifying Java archive file contents in certain cases.
Familiarity with this command-line tool can pay off from time to time for the Java developer, especially when working on a highly repetitive task or one that involves scripting. IDEs and tools (especially build tools) can help a lot with Java archive file manipulation, but sometimes the “overhead” of these is much greater than what is required when using jar from the command line.Reference: Manipulating JARs, WARs, and EARs on the Command Line from our JCG partner Dustin Marx at the Inspired by Actual Events blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.