

Tips for Effective Session Submissions at Technology Conferences

Many of us go through the process of submitting talks to technology conferences. This requires thinking of a topic that you deem worthy of a presentation. Choosing a topic could be a blog post by itself, but once the topic is selected you need to create a title and an abstract that will hopefully get selected. The dreaded task of preparing the slides and demos afterwards is a secondary story; this blog will cover the dos and don'ts of an effective session submission that can improve your chances of acceptance.

What qualifies me to write this blog? I've been speaking for 15+ years in ~40 countries around the world at a variety of technology conferences. In the early years, this involved submitting a session to a conference, getting rejected or accepted, and then speaking at some of those conferences. The sessions were reviewed by a Program Committee, which is typically a group of people, expert in their domain, who help shape the conference agenda. For the past several years, I've participated in the Program Committees of several conferences, either as an individual member or leading a track with multiple members. Now, I've had my share of rejections, and I still get rejected at conferences. There are multiple reasons for that, such as too many sessions by one speaker, a more compelling abstract by another speaker, the Program Committee looking for a real-life practitioner, and others. But the key part is that these rejections never let me down, though I do miss the opportunity to talk to the attendees at that conference. For example, I was rejected by one conference three years in a row but got accepted in the fourth year, and hopefully I will be invited again this year. Let's see what I practice when writing a session title/abstract, and what I expect from other sessions when I'm part of the Program Committee!
Tips for Effective Session Submission

No product pitches – At a technology-focused conference, any product, marketing or seemingly market-ish talk is put at the bottom of the list, or simply rejected right away. Most vendors have their own product-specific conferences, and such talks are better suited there.

Catchy title – A title is typically 50-80 characters that explain what your talk is all about. Make sure your title is catchy and conveys the intention. The Program Committee will read through the entire submission, but is more likely to look at yours first if the title is catchy. Attendees are more likely to read the abstract, and possibly attend the talk, if they like the title. Some more points on titles:

Politically correct language – Don't make the title arcane, and never use foul language. Remember that the Program Committee has both male and female members and people from different cultures. Certain words may be appropriate in a particular culture but not so on a global level, so check the global political correctness of the title before picking the words.

Use numbers, if possible – Instead of saying "Tips for Java EE 7", use "50 Tips in 50 Minutes for Java EE 7". This talk got me a JavaOne 2013 Rockstar Award. That was not entirely due to the title, but I've seen a few other talks with similar titles at JavaOne 2014, so I guess the formula works. There is something about numbers and how the human brain operates: if something is quantified, you are more likely to pay attention to it!

Coherent abstract – An abstract is typically 500-1500 characters, sometimes more, that describes what you are going to do in your session. Session abstracts differ based upon what is being presented, but typically, as a submitter, I divide it into three parts: set up/define the problem space, show what will be presented (preferably with an outline), and then the lessons learned by the attendees.
I also include any demos, case studies, and customer/partner participation that will be included in the talk. As a Program Committee member, I look for similar points and at how the title/abstract will fit in the overall rhythm of the conference. Some additional points about the abstract, since that is where most of the important information lives:

WIIFM (What's In It For Me) – Prepare an abstract that will allow the attendees to connect with you. Is this something that they may care about? Something that they face in their daily life? If you were an attendee, would you be interested in attending this session after reading the abstract? Think WIIFM from the attendee's perspective.

Use all the characters – Conferences have different character limits for pitching your abstract. The reviewers may not know you or your product at all, and you get N characters to pitch your idea. Make sure to use all of them, down to the last Nth character.

Review your abstract – Read your abstract multiple times to ensure that you are giving all the relevant information. Think through your presentation and see if you are leaving out any important aspects. Also check whether the abstract has any redundant information that the reviewers won't need. You can also consider getting your abstract peer reviewed; I'm always happy to provide that service to my blog readers.

Coordinate within your team – Make sure to coordinate within your team before the submission; multiple sessions from the same team or company do not ensure that the best speaker is picked. In such cases we rely upon your "google presence" and/or the review committee's prior knowledge of the speaker. It's unfortunate if the selected speaker is not the most appropriate one. Make sure you don't write an essay here, or at least provide a TLDR version. Just pick the three most important aspects of your session and highlight them.
Hands-on labs – Hands-on labs are sessions where attendees sit for two to four hours and learn a tool, build/debug/test an application, practice some methodology, or do something else in a hands-on manner. Make sure you clearly highlight the flow of the lab, down to every 30 minutes if possible, and state the end goal, such as "Attendees will build an end-to-end Java EE 7 application using X, Y, Z" or "Attendees will learn the tools and techniques for adopting DevOps in their team". A broad outline of the content is still very important so that the Program Committee can understand the attendees' experience.

Appropriate track – Conferences typically have multiple tracks, and as a submitter you typically pick one as a primary track, and possibly another as a secondary. Give yourself time to read through the track descriptions and choose the appropriate track for your talk. In some cases the selected track may be inappropriate, by accident or for some other reason; the Program Committee will try their best to recategorize the talk to an appropriate track if needed. But please ensure that you are filing in the right track so that all the right eyeballs are looking at it. It would be really unfortunate, for the speaker and the conference, if an excellent talk got dropped because it was in the wrong track.

Use tags – Some conferences let you apply tags to a submission. Feel free to use the existing tags, or create something that is more likely to be searched for by the Program Committee. This provides a different dissection of all the submissions, and possibly some more eyes on your submission.

First-time speaker – If you are a newbie, or a first-time presenter, pay close attention to the CFP sections that give you an opportunity to toot your own horn. Make sure to include a URL of a video presentation that you have done elsewhere.
If you have never presented at a public conference, or are speaking at this conference for the first time, consider recording a technical presentation and uploading the video to YouTube or Vimeo. This will allow the Program Committee to get to know you slightly better. Links to a SlideShare profile are recommended as well in this case. Very often the Program Committee members will google the speaker, so make sure your social profiles, at least Twitter and LinkedIn, are up to date. Please don't say "call me at xxx-xxx-xxxx to find out the details".

Run a spell checker – Run a spell checker on everything you submit as part of the session. Spelling mistakes turn off some of the Program Committee members, including myself. This will generally never be the sole criterion for rejection, but it shows a lack of attention and only makes us wonder about the quality of the session.

Never give up! – If your session does not get accepted, don't give up and don't take it personally. Each conference has a limited number of session slots, and typically the number of submissions is more, sometimes way more, than that. The Program Committee tries, to the best of their ability, to pick the sessions that fit the rhythm of the conference. You've done the hard work of preparing a compelling title/abstract, so submit it at other conferences as well. At the least, try giving the talk at a local Java User Group and get feedback from the attendees there. You can always try the Virtual JUG as well for a more global audience.

Even though these tips are based upon my experience presenting and selecting sessions at technology conferences, most of them would be valid elsewhere as well. If your talk does get approved and you go through the process of creating compelling slides and sizzling demos, the attendees will always be a mixed bunch. Enjoy, good luck, and happy conferencing!
Any more tips to share?

Reference: Tips for Effective Session Submissions at Technology Conferences from our JCG partner Arun Gupta at the Miles to go 2.0 … blog.

A Vision of the Future of the Software Developer’s Platform

How will the developer’s platform change over the next three years? Will you still be using desktop-based development tools? Cloud-based software development options are getting more powerful, but will they completely replace the desktop? For a certain set of developers, cloud-based software development tools will be a natural fit and so we should expect migration to tools like Che. But the desktop will remain viable and vibrant well into the future: for many classes of problem, the desktop is the right solution. Of course, there will be grey areas. Some problems can be addressed equally well by desktop- and cloud-based solutions. For these sorts of problems, the choice of development tools may be–at least in part–a matter of developer preference. There will be other drivers, of course (the exact nature of which is difficult to speculate). For this grey area, the ability for a software developer to pick and choose the tools that are most appropriate for the job is important. Further, the ability to mix and match development tool choices across a team will be a key factor. I’ve spent a good part of the last few months working with a group of Eclipse developers to hammer out a vision for the future of the developer’s platform. Here’s what we came up with: Our vision is to build leading desktop and cloud-based development solutions, but more importantly to offer a seamless development experience across them. Our goal is to ensure that developers will have the ability to build, deploy, and manage their assets using the device, location and platform best suited for the job at hand. Eclipse projects, the community, and ecosystem will all continue to invest in and grow desktop Eclipse. Full-function cloud-based developer tools delivered in the browser will emerge and revolutionize software development. 
Continued focus on quality and performance, out-of-the-box experience, Java 9, and first-class Maven, Gradle, and JVM languages support also figure prominently in our vision of a powerful developer's platform. To paraphrase:

Desktop Eclipse will remain dominant for the foreseeable future; cloud-based developer environments like Che and Orion will revolutionize software development; developers will be able to choose the most appropriate tools and environment; projects can move from desktop to cloud and back; desktop Eclipse developer tools will gain momentum; the community will continue to invest in desktop Eclipse-based IDEs; Java™ 9 will be supported; developer environments will have great support for Maven and Gradle; support for JVM languages will continue to improve; and user experience will become a primary focus.

You've likely noticed that this is focused pretty extensively on Java development. This is not intended to exclude support for other programming languages, tools, and projects. As the expression goes, "a rising tide lifts all boats": as we make improvements and shift focus to make Java development better, those improvements will have a cascading effect on everybody else. My plan for the near future (Mars time-frame) is to get the Che project boot-strapped and latch onto that last bullet with regard to the desktop IDE: user experience. While user experience is an important consideration for most Eclipse projects, it needs to be a top focus. This vision of the future isn't going to just happen. To succeed, we need organizations and individuals to step up and contribute. I'm aware that project teams are stretched pretty thin right now and many of the things on the list will require some big effort to make happen. Our strategy, then, is to start small. I'm buoyed (in keeping with my sea metaphors) by the overwhelmingly positive response that we got when we turned line numbers on by default in some of the packages.
I'll admit that I don't quite understand the excitement (it's such an easy thing to toggle), but for many of our users this was a very big and important change. The curious thing is that, while the change was preceded by a lengthy and time-consuming discussion, making the actual change was relatively simple. My takeaway is that we can have some pretty big wins by doing some relatively small things. With this in mind, I've been poking at an informal programme that I've been calling "Every Detail Matters" (I borrowed this name from the Gnome community). Every Detail Matters will initially tackle things like names and labels, default settings for preferences, documentation, and the website/download experience (I've set up an Every Detail Matters for Mars umbrella bug to capture the issues that I believe make up the success criteria). We're also trying to tackle some relatively big things. The "installer problem" is one that I'm hopeful we'll be able to address via the Oomph project. I'm also pretty excited by the prospect of having Eclipse release bits available from the Fedora software repository on GA day. In parallel, we've launched a more formal Great Fixes for Mars skills competition, with prizes for winning contributors of fixes that improve the Java development experience. I'll set up a BoF session at EclipseCon to discuss the vision and our strategy for making it real. It'd be great to see you there! I wrote about the Platform Vision in the November Eclipse newsletter.

Reference: A Vision of the Future of the Software Developer's Platform from our JCG partner Wayne Beaton at the Eclipse Hints, Tips, and Random Musings blog.

What I’ve Learned After 15 Years as a Java Group Leader

After founding the Philadelphia Area Java Users' Group in 2000 and leading it for 15 years, I've decided to resign my post and pass on leadership to someone else. It's time. At our first meeting, at a small and long-forgotten dot com, 35 Java developers came to eat pizza and listen to a presentation on XML and JAXP. Since then we've had about 100 events (a few with 200 attendees) and a mailing list that peaked around 1,500 members. My experience running this group has revealed some patterns that may be useful for other user group leaders (or those looking to start one), ideas for speakers, and observations on the career paths of the many members I've known for an extended period of time. Some thoughts.

Members

Topic suggesters and early adopters – A group of roughly ten members regularly suggested meeting topics that were then unfamiliar to me but became widely popular a few years later. I relied on this group heavily for ideas at different times, though many of the suggestions were a bit beyond the scope of a JUG. Early on I typically rejected non-Java/JVM topic suggestions, so many of these meetings never developed. Consecutive meetings on Scala and Clojure in 2009 come to mind as an example of being timed ahead of popular adoption. These ten members included both experienced developers and even a couple who were then only computer science undergrad students. Without exception, the career paths of these specific individuals have been noticeably positive relative to the average, and more than half have been promoted from developer to CTO or equivalent. I believe all of this small group have now gone on to other languages as their primary tool, and all still code regularly.

Language migration – Of the overall membership, the vast majority (perhaps 80%+) are still using Java as their primary language by day. About 5% now focus on mobile development.
Another 5% combined currently focus on Python or Ruby, and maybe 3% work in functional languages (Scala and Clojure).

Age – Although I don't have data, it's fairly clear that the average age of a member or meeting attendee has increased over the years even as the member roster changed. The group has received far fewer membership requests over the past five years than in the past, and new members are less likely to be fresh graduates than they were in the group's early days.

Groups sense overhyped technologies – Some of our meeting topics were technologies or products that were heavily marketed through multiple channels (conferences, speaker tours, newsletters) at the time, yet failed to gain traction. Many that RSVP'd to these meetings commented on their suspicions, and some admitted to a desire to attend in order to poke some holes in the hype.

Regulars – At any given meeting, about 50% of the attendees were regulars who attended almost every event regardless of their specific interest in that evening's topic. Many of these people also regularly attend events held by other groups.

Presenters

Speaker name recognition – This should surprise no one, but our three largest events by far were all with speakers who had a fair amount of celebrity and industry credibility. These were open source advocate/author Eric 'ESR' Raymond (YouTube link), Spring framework creator Rod Johnson, and a joint meeting with Hibernate author Gavin King and JBoss founder Marc Fleury. We had Johnson, King and Fleury around the height of their products' popularity, and ESR (who is not a figure specific to Java) in 2012. Each event was standing room only, with many more in attendance than had RSVP'd. The next level of attendance was for speakers who had either founded a company or created a product/tool, but perhaps did not have top-tier name recognition. We had eleven meetings of this nature (including the three mentioned), most drawing large crowds (150).
For relatively unknown speakers without a product, the strength of the bio definitely impacted attendance. Current job title, employer name recognition, overall industry experience, past speaking experience, and even academic credentials clearly influenced our members.

Local speakers were our lifeblood – About 80% of our speakers lived within an hour's drive of our meeting space. We had four presenters who combined for fifteen meetings, and another eleven who each spoke twice. Fifteen local speakers delivered almost 40% of our presentations.

Speakers benefit from presenting – Several of our local speakers have shared anecdotes of being discovered by an employer or new client through a JUG presentation. Even though we did not allow recruiting or sales/marketing people at events, most speakers are easy to contact. Speaking allowed some members to start building a 'brand' and increased their visibility in the tech community.

The best way to sell is by not selling – Our official policy was to forbid pure product demos, but we knew that when a company flies out their 'evangelist' and buys pizza for 150 people, we're getting at least a minimal level of demo. Speakers who dove into a sales pitch early on would almost always see members start to leave, a few times in droves. The evangelists who were most effective in keeping an audience often used a similar presentation style, where the first hour defined a problem and how it can be solved, and concluded with something like "My presentation is over. You can leave now, or if you want to stay another 15 minutes I'll show you how our product solves the problem." This usually led to discussions with the speaker that lasted beyond the meeting schedule, and to sales. Making the demo optional and at the very end is a successful tactic.

Accomplished technologists aren't all great speakers – A strong biography and list of accomplishments does not always result in a strong presentation.
We were lucky that most of our speakers were quite good, but we did have at least a few disappointments from those active on the conference speaker circuit.

Sponsors

Ask everyone for something – Companies willing to spend money on sponsorship and travel costs clearly understand the value of goodwill and community visibility. There are also those that want to get that visibility and goodwill for free, and they ask leaders to announce a conference or product offering as a favor or "for the good of the community". These requests are an opportunity to add value for the members. Instead of simply complying with them, I would always request something in return. For conference announcements, I would ask for a discount code for members or a free pass to raffle off. Sponsors with a product might be asked for a license to give away, or at worst some swag. Most were willing to barter in exchange for their request being met.

Maintain control – Some sponsors clearly just want your membership roster and email addresses, which they may try to acquire by using the "fishbowl business card drawing" approach to raffles, sign-in sheets, speaker review forms, surveys, or perhaps through a bold request. Don't sell your members' private information to sponsors under any circumstances; companies will still be willing to sponsor if you deny their attempts. We allowed a business card drawing once, for an iPad, and all members were aware that if they entered that drawing they would likely be getting a call from the vendor.

Reference: What I've Learned After 15 Years as a Java Group Leader from our JCG partner Dave Fecak at the Job Tips For Geeks blog.

Openshift: Build Spring Boot application on Wildfly 8.2.0 with Java 8

The OpenShift DIY cartridge is a great way to test unsupported languages on OpenShift. But it is not scalable (you can vote for a scalable DIY cartridge here), which makes it hard to use with production-grade Spring Boot applications. But what if we deployed the Spring Boot application to WildFly Application Server? Spring Boot can run with an embedded servlet container like Tomcat or the much faster Undertow, but it can also be deployed to a standalone application server. That means it can also be deployed to the WildFly application server that is supported by OpenShift. Let's see how easy it is to get started creating a Spring Boot application from scratch and deploying it to WildFly 8.2 on OpenShift.

Note: While browsing the OpenShift documentation one might think that only WildFly 8.1 and Java 7 are supported (as of the time of writing this blog post). Fortunately this is not true anymore: WildFly 8.2 and Java 8 work fine, and are in fact the default! This was the first time I was happy about documentation being outdated.

Update: If you are looking for a quick start, without the step-by-step walkthrough, have a look here: Quick Start: Spring Boot and WildFly 8.2 on OpenShift

Prerequisite

Before you can start building the application, you need to have a free OpenShift account and the client tools (rhc) installed.

Create WildFly application

To create a WildFly application using the client tools, type the following command:

rhc create-app boot jboss-wildfly-8 --scaling

The jboss-wildfly-8 cartridge is described as WildFly Application Server 8.2.0.Final. The scaling option is used because it is impossible to set it later (vote here). When the application is created you should see the username and password for an administration user created for you. Please store these credentials to be able to log in to the WildFly administration console.

Template Application Source code

OpenShift creates a template project. The project is a standard Maven project.
You can browse through pom.xml and see that Java 8 is used by default for this project. In addition, there are two non-standard folders created: deployments, which the resulting archive is put into, and .openshift with OpenShift-specific files. Please note .openshift/config. This is where the WildFly configuration is stored.

Spring Boot dependencies

Spring IO Platform will be used for dependency management. The main advantage of using Spring IO Platform is that it simplifies dependency management by providing versions of Spring projects along with their dependencies that are tested and known to work together. Modify the pom.xml by adding:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>io.spring.platform</groupId>
            <artifactId>platform-bom</artifactId>
            <version>1.1.1.RELEASE</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

Now the Spring Boot dependencies can be added. Please note that since the application will be deployed to WildFly, we need to explicitly remove the dependency on Tomcat:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>

Configure the application

Initialize Spring Boot Application

Having all dependencies, we can add the application code. Create the Application class in the demo package.
The Application class's job is to initiate the Spring Boot application, so it must extend SpringBootServletInitializer and be annotated with @SpringBootApplication:

package demo;

import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.context.web.SpringBootServletInitializer;

@SpringBootApplication
public class Application extends SpringBootServletInitializer {
}

@Entity, @Repository, @Controller

Spring Data JPA, part of the larger Spring Data family, makes it easy to implement JPA-based repositories. For those who are not familiar with the project, please see its documentation. The domain model for this sample project is just a Person with some basic fields:

@Entity
@Table(name = "people")
public class Person {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    protected Integer id;

    @Column(name = "first_name")
    @NotEmpty
    protected String firstName;

    @Column(name = "last_name")
    @NotEmpty
    protected String lastName;

    @Column(name = "address")
    @NotEmpty
    private String address;

    @Column(name = "city")
    @NotEmpty
    private String city;

    @Column(name = "telephone")
    @NotEmpty
    @Digits(fraction = 0, integer = 10)
    private String telephone;
}

The Person needs a @Repository, so we can create a basic one using Spring Data. Spring Data repositories reduce much of the boilerplate code thanks to a simple interface definition:

@Repository
public interface PeopleRepository extends PagingAndSortingRepository<Person, Integer> {
    List<Person> findByLastName(@Param("lastName") String lastName);
}

With the domain model in place, some test data can be handy. The easiest way is to provide a data.sql file with the SQL script to be executed on application start-up. Create src/main/resources/data.sql containing initial data for the people table (see below). Spring Boot will pick up this file and run it against the configured Data Source.
Since the Data Source used connects to an H2 database, the proper SQL syntax must be used:

INSERT INTO people VALUES (1, 'George', 'Franklin', '110 W. Liberty St.', 'Madison', '6085551023');

With the Spring Data JPA repository in place, we can create a simple controller that exposes data over REST:

@RestController
@RequestMapping("people")
public class PeopleController {

    private final PeopleRepository peopleRepository;

    @Inject
    public PeopleController(PeopleRepository peopleRepository) {
        this.peopleRepository = peopleRepository;
    }

    @RequestMapping
    public Iterable<Person> findAll(@RequestParam Optional<String> lastName) {
        if (lastName.isPresent()) {
            return peopleRepository.findByLastName(lastName.get());
        }
        return peopleRepository.findAll();
    }
}

The findAll method accepts an optional lastName parameter that is bound to Java 8's java.util.Optional.

Start page

The project generated by OpenShift during setup contains a webapp folder with some static files. These files can be removed and index.html can be modified:

<!DOCTYPE html>
<html>
<head lang="en">
    <meta charset="UTF-8">
    <title>OpenShift</title>
</head>
<body>
<form role="form" action="people">
    <fieldset>
        <legend>People search</legend>
        <label for="lastName">Last name:</label>
        <input id="lastName" type="text" name="lastName" value="McFarland"/>
        <input type="submit" value="Search"/>
    </fieldset>
</form>
<p>
    ... or: <a href="people">Find all ...</a>
</p>
</body>
</html>

It is just a static page, but I noticed that the application will not start if there is no default mapping (/) or if it returns a code other than 200. Normally there will always be a default mapping.

Configuration

Create src/main/resources/ and put in the following values:

management.context-path=/manage: the Actuator's default management context path is /. This is changed to /manage, because OpenShift exposes its own /health endpoint, which covers the Actuator's /health endpoint.
spring.datasource.jndi-name=java:jboss/datasources/ExampleDS: since the application uses Spring Data JPA, we want to bind to the server's Data Source via JNDI. Please look at .openshift/config/standalone.xml for other datasources. This is important if you wish to configure MySQL or PostgreSQL to be used with your application. Read more about connecting to a JNDI Data Source in Spring Boot in the reference documentation.

spring.jpa.hibernate.ddl-auto=create-drop: create the structure of the database based on the provided entities.

Deploying to OpenShift

The application is ready to be pushed to the repository. Commit your local changes and then push to the remote:

git push

The initial deployment (build and application startup) will take some time (up to several minutes). Subsequent deployments are a bit faster. You can now browse to the application, and you should see the form. Clicking search with the default value will get the record with id = 3:

[
  {
    "id": 3,
    "firstName": "2693 Commerce St.",
    "lastName": "McFarland",
    "address": "Eduardo",
    "city": "Rodriquez",
    "telephone": "6085558763"
  }
]

Navigating to the "find all" link will return all records from the database.

Going Java 7

If you want to use Java 7 in your project instead of the default Java 8, rename .openshift/markers/java8 to .openshift/markers/java7 and change pom.xml accordingly:

<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <maven.compiler.source>1.7</maven.compiler.source>
    <maven.compiler.target>1.7</maven.compiler.target>
    <maven.compiler.fork>true</maven.compiler.fork>
</properties>

Please note that maven.compiler.executable was removed. Don't forget to change the @Controller's code to make it Java 7 compatible.

Summary

In this blog post you learned how to configure a basic Spring Boot application and run it on OpenShift with WildFly 8.2 and Java 8. OpenShift scales the application with the HAProxy web proxy, and takes care of automatically adding or removing copies of the application to serve requests as needed.
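As a side note on the controller shown earlier: its Optional-based dispatch in findAll can be reproduced in plain Java, independently of Spring. A minimal sketch, with an in-memory list of names standing in for the repository (all names here are illustrative, not part of the original project):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;

public class OptionalDispatchDemo {

    // Stand-in for the repository: filter by last name when one is present,
    // otherwise return everything, mirroring the controller's findAll method.
    static List<String> findAll(Optional<String> lastName, List<String> people) {
        if (lastName.isPresent()) {
            return people.stream()
                    .filter(p -> p.endsWith(lastName.get()))
                    .collect(Collectors.toList());
        }
        return people;
    }

    public static void main(String[] args) {
        List<String> people = Arrays.asList("George Franklin", "Eduardo McFarland");
        // With a last name: only matching entries are returned.
        System.out.println(findAll(Optional.of("McFarland"), people));
        // Without one: the full list comes back.
        System.out.println(findAll(Optional.<String>empty(), people).size());
    }
}
```

An empty Optional plays the role of an absent request parameter, which is exactly how Spring MVC binds a missing @RequestParam to Optional.empty().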
Resources – source code for this blog post. Reference: OpenShift: Build Spring Boot application on WildFly 8.2.0 with Java 8 from our JCG partner Rafal Borowiec at the blog....

JPA 2.1 criteria delete/update and temporary tables in Hibernate

Since JPA version 2.0 the EntityManager offers the method getCriteriaBuilder() to dynamically build select queries without the need of string concatenation using the Java Persistence Query Language (JPQL). With version 2.1 this CriteriaBuilder offers the two new methods createCriteriaDelete() and createCriteriaUpdate() that let us formulate delete and update queries using the criteria API. For illustration purposes let’s use a simple inheritance use case with the two entities Person and Geek: @Entity @Table(name = "T_PERSON") @Inheritance(strategy = InheritanceType.JOINED) public class Person { @Id @GeneratedValue private Long id; @Column(name = "FIRST_NAME") private String firstName; @Column(name = "LAST_NAME") private String lastName; ... }@Entity @Table(name = "T_GEEK") @Access(AccessType.PROPERTY) public class Geek extends Person { private String favouriteProgrammingLanguage; ... } To delete all geeks from our database that favour Java as their programming language, we can utilize the following code using CriteriaBuilder’s new createCriteriaDelete() method: EntityTransaction transaction = null; try { transaction = entityManager.getTransaction(); transaction.begin(); CriteriaBuilder builder = entityManager.getCriteriaBuilder(); CriteriaDelete<Geek> delete = builder.createCriteriaDelete(Geek.class); Root<Geek> geekRoot = delete.from(Geek.class); delete.where(builder.equal(geekRoot.get("favouriteProgrammingLanguage"), "Java")); int numberOfRowsUpdated = entityManager.createQuery(delete).executeUpdate(); LOGGER.info("Deleted " + numberOfRowsUpdated + " rows."); transaction.commit(); } catch (Exception e) { if (transaction != null && transaction.isActive()) { transaction.rollback(); } } Like with pure SQL we can use the method from() to specify the table the delete query should be issued against and where() to declare our predicates. This way the criteria API allows the definition of bulk deletion operations in a dynamic way, without too much string concatenation.
But what does the generated SQL look like? First of all the ORM provider has to pay attention to the fact that we are deleting from an inheritance hierarchy with the strategy JOINED, meaning that we have two tables T_PERSON and T_GEEK where the second table stores a reference to the parent table. Hibernate in version 4.3.8.Final creates the following SQL statements: insert into HT_T_GEEK select as id from T_GEEK geek0_ inner join T_PERSON geek0_1_ on where geek0_.FAV_PROG_LANG=?; delete from T_GEEK where ( id ) IN ( select id from HT_T_GEEK ); delete from T_PERSON where ( id ) IN ( select id from HT_T_GEEK ); delete from HT_T_GEEK; As we can see, Hibernate fills a temporary table with the ids of the geeks/persons that match our search criteria. Then it deletes all rows from the geek table and then all rows from the person table. Finally the temporary table gets purged. The sequence of delete statements is clear, as the table T_GEEK has a foreign key constraint on the id column of the T_PERSON table. Hence the rows in the child table have to be deleted before the rows in the parent table. The reason why Hibernate creates a temporary table is explained in this article. To summarize it, the underlying problem is that the query restricts the rows to be deleted on a column that only exists in the child table. But the rows in the child table have to be deleted before the corresponding rows in the parent table. Having deleted the rows in the child table, i.e. all geeks with FAV_PROG_LANG='Java', makes it impossible to delete afterwards all corresponding persons, as the geek rows have already been deleted. The solution to this problem is the temporary table that first collects all row ids that should be deleted. Once all ids are known, this information can be used to delete the rows first from the geek table and then from the person table. The generated SQL statements above are of course independent of the usage of the criteria API.
Using the JPQL approach leads to the same generated SQL: EntityTransaction transaction = null; try { transaction = entityManager.getTransaction(); transaction.begin(); int update = entityManager.createQuery("delete from Geek g where g.favouriteProgrammingLanguage = :lang").setParameter("lang", "Java").executeUpdate(); LOGGER.info("Deleted " + update + " rows."); transaction.commit(); } catch (Exception e) { if (transaction != null && transaction.isActive()) { transaction.rollback(); } } When we change the inheritance strategy from JOINED to SINGLE_TABLE, the generated SQL also changes to a single statement (here the discriminator column is DTYPE): delete from T_PERSON where DTYPE='Geek' and FAV_PROG_LANG=? Conclusion The new additions to the criteria API for deletion and update let you construct your SQL statements without the need of any string concatenation. But be aware that bulk deletions from an inheritance hierarchy can force the underlying ORM to use temporary tables in order to assemble the list of rows that have to be removed in advance. Reference: JPA 2.1 criteria delete/update and temporary tables in Hibernate from our JCG partner Martin Mois at the Martin’s Developer World blog....

JavaFX Tip 18: Path Clipping

I recently noticed that the PopOver control, which I committed to the ControlsFX project, does not properly clip its content. It became obvious when I was working on the accordion popover for the FlexCalendarFX framework. Whenever the last titled pane was expanded, the bottom corners were no longer rounded but square. After placing a red rectangle as the content of the titled pane it became clear to me that I had forgotten to add clipping. The following picture shows the problem. Normally clipping in JavaFX is quite easy. All it takes is an additional node and a call to setClip(node). However, normally this clip is a simple shape, like a rectangle. In the PopOver case the clip had to be a path, just like the original path that was used for the shape of the PopOver. Why a path? Because the popover, when “attached” to an owner, also features an arrow pointing at the owner. See the screenshot below. The good thing was that the original path gets constructed from a list of path elements. These are not nodes and can be reused for a second path. When I tried this, the result was a PopOver that consisted only of a border with no content at all. The reason for this was the fact that the path was not filled. Once I set a fill on the clip path, the result was what I was aiming for. Now the PopOver control clips its content correctly. The image below shows the final result. Some might say that this is just a minor detail, and they are right, but it is this attention to detail that makes an application stand out and look professional. The image below shows how the PopOver is used within FlexCalendarFX. Reference: JavaFX Tip 18: Path Clipping from our JCG partner Dirk Lemmermann at the Pixel Perfect blog....

Thou Shalt Not Name Thy Method “Equals”

(unless you really override Object.equals(), of course). I’ve stumbled upon a rather curious Stack Overflow question by user Frank: Why does Java’s Area#equals method not override Object#equals? Interestingly, there is an Area.equals(Area) method which really takes an Area argument, instead of an Object argument as declared in Object.equals(). This leads to rather nasty behaviour, as discovered by Frank: @org.junit.Test public void testEquals() { java.awt.geom.Area a = new java.awt.geom.Area(); java.awt.geom.Area b = new java.awt.geom.Area(); assertTrue(a.equals(b)); // -> truejava.lang.Object o = b; assertTrue(a.equals(o)); // -> false } Technically, it is correct for AWT’s Area to have been implemented this way (as hashCode() isn’t implemented either), but given the way Java resolves methods, and the way programmers digest code written like the above, it is really a terrible idea to overload the equals method. No static equals, either These rules also hold true for static equals() methods, such as for instance Apache Commons Lang‘s ObjectUtils.equals(Object o1, Object o2). The confusion here arises from the fact that you cannot static-import this equals method: import static org.apache.commons.lang.ObjectUtils.equals; When you now type the following: equals(obj1, obj2); You will get a compiler error: The method equals(Object) in the type Object is not applicable for the arguments (…, …) The reason for this is that methods that are in the scope of the current class and its super types will always shadow anything that you import this way. The following doesn’t work either: import static org.apache.commons.lang.ObjectUtils.defaultIfNull;public class Test { void test() { defaultIfNull(null, null); // ^^ compilation error here }void defaultIfNull() { } } Details in this Stack Overflow question. Conclusion The conclusion is simple: never overload any of the methods declared in Object (overriding is fine, of course). 
This includes: clone(), equals(), finalize(), getClass(), hashCode(), notify(), notifyAll(), toString(), wait(). Of course, it would be great if those methods weren’t declared in Object in the first place, but that ship sailed 20 years ago. Reference: Thou Shalt Not Name Thy Method “Equals” from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....
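The resolution trap from the Area example is easy to reproduce with a plain class of your own. A minimal sketch (the class name and field are made up):

```java
public class EqualsOverloadDemo {
    static class Bad {
        final int value;
        Bad(int value) { this.value = value; }
        // Overload, not override: the inherited Object.equals(Object) is untouched
        public boolean equals(Bad other) { return other != null && value == other.value; }
    }

    public static void main(String[] args) {
        Bad a = new Bad(1);
        Bad b = new Bad(1);
        Object o = b;
        System.out.println(a.equals(b)); // true  -> argument typed Bad, calls equals(Bad)
        System.out.println(a.equals(o)); // false -> argument typed Object, calls Object.equals
    }
}
```

The compiler selects the overload from the static type of the argument, which is exactly why the same object compares as equal through one reference and unequal through another.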

Using junit for something else

junit != unit test Junit is the Java unit testing framework. We usually use it for unit testing, but many times we use it to execute integration tests as well. The major difference is that unit tests test individual units, while integration tests test how the different classes work together, so integration tests cover a longer execution chain. This means that they may discover more errors than unit tests, but at the same time they usually run longer and it is harder to locate the bug if a test fails. If you, as a developer, are aware of these differences, there is nothing wrong with using junit to execute non-unit tests. I have seen examples in production code where the junit framework was used to execute system tests, where the execution chain of the test included external service calls over the network. Junit is just a tool, so if you are aware of the drawbacks there is nothing inherently wrong with it. However, in the actual case the junit tests were executed in the normal maven test phase, and once the external service went down the code failed to build. That is bad, and it clearly shows that the developer creating the code was not aware of the big picture that includes the external services and the build process. After having said all that, let me tell you a different story and join the two threads later. We speak languages… many Our programs have user interfaces, most of the time. The interface contains texts, usually in different languages. Usually in English and in the local language of the market the code is targeted at. The text literals are usually externalized and stored in “properties” files. Having multiple languages we have a separate properties file for each language, each defining a literal text for an id. 
For example we have the files and in the Java code we were accessing these via the Spring MessageSource calling String label = messageSource.getMessage("",null,"label",locale); We, programmers are kind of lazy The problems came when we did not have some of the translations of the texts. The job of specifying the actual text of the labels in different languages does not belong to the programmers. Programmers are good at speaking Java, C and other programming languages but do not really shine when it comes to natural languages. Most of us just do not speak all the languages needed. There are people whose job it is to translate the text. Different people usually for different languages. Some of them work faster, others slower, and the coding just could not wait for the translations to be ready. For the time till the final translation was available we used temporary strings. All temporary solutions become final. The temporary strings, which were just the English version, got into the release. Process and discipline: failed To avoid that we implemented a process. We opened a Jira issue for each translation. When the translation was ready it got attached to the issue. When it got edited into the properties file and committed to git the issue was closed. It was such a burden and overhead that programmers were slowed down by it, and less disciplined programmers just did not follow the process. Generally it was a bad idea. We concluded that not having a translation in the properties files is not the real big issue. The issue is not knowing that it is missing and creating a release. So we needed a process to check the correctness of the properties files before release. Lightweight process and control Checking manually would have been cumbersome. We created junit tests that compared the different language files and checked that there is no key missing from one file that is present in another, and that the values are not the same as the default English version. 
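The core of such a comparison does not need much code. Here is a standalone sketch of the logic, stripped of the junit harness (the key and method names are mine; the real tests ran against the actual properties files):

```java
import java.util.Properties;
import java.util.Set;
import java.util.TreeSet;

public class PropertyFileCheck {
    // Keys present in the reference (English) file but missing from a translation
    static Set<String> missingKeys(Properties reference, Properties translated) {
        Set<String> missing = new TreeSet<>(reference.stringPropertyNames());
        return missing;
    }

    // Keys whose translated value is identical to the English value (probably untranslated)
    static Set<String> untranslatedKeys(Properties reference, Properties translated) {
        Set<String> same = new TreeSet<>();
        for (String key : translated.stringPropertyNames()) {
            if (translated.getProperty(key).equals(reference.getProperty(key))) {
                same.add(key);
            }
        }
        return same;
    }

    public static void main(String[] args) {
        Properties en = new Properties();
        en.setProperty("label.greeting", "Hello");
        en.setProperty("label.farewell", "Goodbye");
        Properties de = new Properties();
        de.setProperty("label.greeting", "Hallo");
        System.out.println(missingKeys(en, de));      // [label.farewell]
        System.out.println(untranslatedKeys(en, de)); // []
    }
}
```

In the real test each returned set would feed an assertEquals against an empty set, so any non-empty result fails the build.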
The junit test was to be executed each time the project was to be released. Then we realized that some of the values really are the same as the English version, so we started to use the letter ‘X’ at the first position in the language files to signal a label waiting for a real translated value. At this point somebody suggested that the junit test could be replaced by a simple ‘grep’. It was almost true, except we still wanted to discover missing keys, and we wanted the test to run automatically during the release process. Summary, and take-away The Junit framework was designed to execute unit tests, but frameworks can and will be used not only for the purpose they were designed for. (Side note: this is actually true for any tool, be it as simple as a hammer or as complex as default methods in Java interfaces.) You can use junit to execute tasks during the testing phase of build and/or release:
- The tasks should execute fast, since the execution time adds to the build/release cycle.
- They should not depend on external sources, especially those that are reachable over the network, because these going down may also make the build process fail.
- When something is not acceptable for the build, use the junit api to signal failure. Do not just write warnings: nobody reads warnings.
Reference: Using junit for something else from our JCG partner Peter Verhas at the Java Deep blog....

Java 8 pitfall – Beware of Files.lines()

There’s a really nice new feature in Java 8 which allows you to get a stream of Strings from a file in a one liner: List<String> lines = Files.lines(path).collect(Collectors.toList()); You can manipulate the Stream as you would any other Stream, for example you might want to filter() or map() or limit() or skip() etc. I started using this all over my code until I was hit with this exception: Caused by: java.nio.file.FileSystemException: /tmp/date.txt: Too many open files in system at sun.nio.fs.UnixException.translateToIOException( at sun.nio.fs.UnixException.rethrowAsIOException( at sun.nio.fs.UnixException.rethrowAsIOException( at sun.nio.fs.UnixFileSystemProvider.newByteChannel( at java.nio.file.Files.newByteChannel( at java.nio.file.Files.newByteChannel( at java.nio.file.spi.FileSystemProvider.newInputStream( at java.nio.file.Files.newInputStream( at java.nio.file.Files.newBufferedReader( at java.nio.file.Files.lines( at java.nio.file.Files.lines( For some reason I had too many open files! Odd, doesn’t Files.lines() close the file? 
See the code below (run3()), where I’ve reproduced the issue: package utility;import; import; import; import; import java.nio.file.Files; import java.nio.file.Path; import java.nio.file.Paths; import java.util.Date; import;public class Test2 { public static void main(String[] args) throws IOException{ int times = 100_000;Path path = Paths.get("/tmp", "date.txt"); Test2 t2 = new Test2(); t2.setDate(path);for (int i = 0; i < times; i++) { t2.run1(path); } for (int i = 0; i < times; i++) { t2.run2(path); } for (int i = 0; i < times; i++) { t2.run3(path); //throws exception too many files open } System.out.println("finished"); }public String run1(Path path){ try(BufferedReader br = new BufferedReader(new FileReader(path.toFile()))){ return br.readLine(); } catch (IOException e) { throw new AssertionError(e); } }public String run2(Path path){ try(Stream<String> stream = Files.lines(path)) { return stream.findFirst().get(); } catch (IOException e) { throw new AssertionError(e); } }public String run3(Path path) throws IOException{ return Files.lines(path).findFirst().get(); }public void setDate(Path path) { try (FileWriter writer = new FileWriter(path.toFile())){ writer.write(new Date().toString()); writer.flush(); } catch (IOException e) { throw new AssertionError(e); } } }My code looked something like run3(), which produced the exception. I proved this by running the unix command lsof (which lists open files) and noticing many, many instances of date.txt open. To check that the problem was indeed with Files.lines() I made sure that the code ran with run1() using a BufferedReader, which it did. By reading through the source code for Files I realised that the Stream needs to be created inside a try-with-resources, as it is auto-closeable. When I implemented that in run2() the code ran fine again. In my opinion this is not particularly intuitive. It really spoils the one liner when you have to use the try-with-resources construct. 
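To show the fix from run2() in isolation, here is a self-contained sketch (the temp file name and contents are arbitrary) of Files.lines() wrapped in try-with-resources, which releases the underlying file handle:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

public class FilesLinesDemo {
    // The safe variant: closing the stream closes the open file handle with it
    static String firstLine(Path path) throws IOException {
        try (Stream<String> lines = Files.lines(path)) {
            return lines.findFirst().get();
        }
    }

    public static void main(String[] args) throws IOException {
        Path path = Files.createTempFile("demo", ".txt");
        Files.write(path, Arrays.asList("first", "second"));
        System.out.println(firstLine(path)); // prints: first
        Files.delete(path);
    }
}
```

Calling firstLine() in a tight loop will not exhaust file descriptors, unlike the run3() variant above.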
I guess the code does need a signal as to when to close the file, but it would be nice if that were hidden from us. At the very least it should be highlighted in the JavaDoc, which it is not. :-) Reference: Java 8 pitfall – Beware of Files.lines() from our JCG partner Daniel Shaya at the Rational Java blog....

Calculate PageRanks with Apache Hadoop

Currently I am following the Coursera training ‘Mining Massive Datasets‘. I have been interested in MapReduce and Apache Hadoop for some time and with this course I hope to get more insight into when and how MapReduce can help to fix some real world business problems (another way to do so I described here). This Coursera course is mainly focussing on the theory of the used algorithms and less on the coding itself. The first week is about PageRank and how Google used it to rank pages. Luckily there is a lot to find about this topic in combination with Hadoop. I ended up here and decided to have a closer look at this code. What I did was take this code (forked it) and rewrite it a little. I created unit tests for the mappers and reducers as I described here. As a test case I used the example from the course. We have three webpages linking to each other and/or themselves. This linking scheme should resolve to the following page ranking: Y 7/33, A 5/33, M 21/33. Since the MapReduce example code is expecting ‘Wiki page’ XML as input I created the following test set: <mediawiki xmlns="" xmlns:xsi="" xsi:schemaLocation="" version="0.10" xml:lang="en"> <page> <title>A</title> <id>121173</id> <revision> ... <text xml:space="preserve" bytes="6523">[[Y]] [[M]]</text> </revision> </page> <page> <title>Y</title> <id>121173</id> <revision> ... <text xml:space="preserve" bytes="6523">[[A]] [[Y]]</text> </revision> </page> <page> <title>M</title> <id>121173</id> <revision> ... <text xml:space="preserve" bytes="6523">[[M]]</text> </revision> </page> </mediawiki> The global way it works is already explained very nicely on the original page itself. I will only describe the unit tests I created. With the original explanation and my unit tests you should be able to go through the matter and understand what happens. 
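Before diving into the Hadoop code, the expected ranking Y 7/33, A 5/33, M 21/33 can be reproduced with a few lines of plain Java. This is my own sketch of the power iteration, assuming the taxation parameter β = 0.8 that the course example implies (pages indexed 0=Y, 1=A, 2=M):

```java
import java.util.Arrays;

public class PageRankDemo {
    // Power iteration with taxation: r' = (1 - beta)/n + beta * M * r
    static double[] compute(int iterations) {
        int[][] out = {{0, 1}, {0, 2}, {2}}; // Y -> {Y, A}, A -> {Y, M}, M -> {M}
        double beta = 0.8;
        int n = out.length;
        double[] rank = new double[n];
        Arrays.fill(rank, 1.0 / n); // start from the uniform distribution
        for (int iter = 0; iter < iterations; iter++) {
            double[] next = new double[n];
            Arrays.fill(next, (1 - beta) / n); // teleport / taxation share
            for (int p = 0; p < n; p++) {
                for (int q : out[p]) {
                    next[q] += beta * rank[p] / out[p].length; // spread rank over outlinks
                }
            }
            rank = next;
        }
        return rank;
    }

    public static void main(String[] args) {
        double[] r = compute(100);
        // Converges to Y = 7/33, A = 5/33, M = 21/33
        System.out.printf("Y=%.4f A=%.4f M=%.4f%n", r[0], r[1], r[2]);
    }
}
```

The taxation term keeps the spider trap M from soaking up all the rank, which is exactly the point of the course example.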
As described, the total job is divided into three parts: parsing, calculating, and ordering. In the parsing part the raw XML is taken, split into pages and mapped so that we get as output the page as a key and as value the pages it has outgoing links to. So the input for the unit test will be the three ‘Wiki’ pages XML as shown above. The expected output is the ‘title’ of the pages with the linked pages. The unit test then looks like: package net.pascalalma.hadoop.job1;...public class WikiPageLinksMapperTest {MapDriver<LongWritable, Text, Text, Text> mapDriver;String testPageA = " <page>\n" + " <title>A</title>\n" + " ..." + " <text xml:space=\"preserve\" bytes=\"6523\">[[Y]] [[M]]</text>\n" + " </revision>";String testPageY = " <page>\n" + " <title>Y</title>\n" + " ..." + " <text xml:space=\"preserve\" bytes=\"6523\">[[A]] [[Y]]</text>\n" + " </revision>\n" + " </page>"; String testPageM = " <page>\n" + " <title>M</title>\n" + " ..." + " <text xml:space=\"preserve\" bytes=\"6523\">[[M]]</text>\n" + " </revision>\n" + " </page>";@Before public void setUp() { WikiPageLinksMapper mapper = new WikiPageLinksMapper(); mapDriver = MapDriver.newMapDriver(mapper); }@Test public void testMapper() throws IOException { mapDriver.withInput(new LongWritable(1), new Text(testPageA)); mapDriver.withInput(new LongWritable(2), new Text(testPageM)); mapDriver.withInput(new LongWritable(3), new Text(testPageY)); mapDriver.withOutput(new Text("A"), new Text("Y")); mapDriver.withOutput(new Text("A"), new Text("M")); mapDriver.withOutput(new Text("Y"), new Text("A")); mapDriver.withOutput(new Text("Y"), new Text("Y")); mapDriver.withOutput(new Text("M"), new Text("M")); mapDriver.runTest(false); } } The output of the mapper will be the input for our reducer. The unit test for that one looks like: package net.pascalalma.hadoop.job1; ... 
public class WikiLinksReducerTest {ReduceDriver<Text, Text, Text, Text> reduceDriver;@Before public void setUp() { WikiLinksReducer reducer = new WikiLinksReducer(); reduceDriver = ReduceDriver.newReduceDriver(reducer); }@Test public void testReducer() throws IOException { List<Text> valuesA = new ArrayList<Text>(); valuesA.add(new Text("M")); valuesA.add(new Text("Y")); reduceDriver.withInput(new Text("A"), valuesA); reduceDriver.withOutput(new Text("A"), new Text("1.0\tM,Y"));reduceDriver.runTest(); } } As the unit test shows we expect the reducer to reduce the input to the value of an ‘initial’ page rank of 1.0 concatenated with all pages the (key) page has outgoing links to. That is the output of this phase and will be used as input for the ‘calculate’ phase. In the calculate part a recalculation of the incoming page ranks will be performed to implement the ‘power iteration‘ method. This step will be performed multiple times to obtain an acceptable page rank for the given page set. As said before the output of the previous part is the input of this step as we see in the unit test for this mapper: package net.pascalalma.hadoop.job2; ... 
public class RankCalculateMapperTest {MapDriver<LongWritable, Text, Text, Text> mapDriver;@Before public void setUp() { RankCalculateMapper mapper = new RankCalculateMapper(); mapDriver = MapDriver.newMapDriver(mapper); }@Test public void testMapper() throws IOException { mapDriver.withInput(new LongWritable(1), new Text("A\t1.0\tM,Y")); mapDriver.withInput(new LongWritable(2), new Text("M\t1.0\tM")); mapDriver.withInput(new LongWritable(3), new Text("Y\t1.0\tY,A")); mapDriver.withOutput(new Text("M"), new Text("A\t1.0\t2")); mapDriver.withOutput(new Text("A"), new Text("Y\t1.0\t2")); mapDriver.withOutput(new Text("Y"), new Text("A\t1.0\t2")); mapDriver.withOutput(new Text("A"), new Text("|M,Y")); mapDriver.withOutput(new Text("M"), new Text("M\t1.0\t1")); mapDriver.withOutput(new Text("Y"), new Text("Y\t1.0\t2")); mapDriver.withOutput(new Text("A"), new Text("!")); mapDriver.withOutput(new Text("M"), new Text("|M")); mapDriver.withOutput(new Text("M"), new Text("!")); mapDriver.withOutput(new Text("Y"), new Text("|Y,A")); mapDriver.withOutput(new Text("Y"), new Text("!")); mapDriver.runTest(false); } } The output here is explained in the source page. The ‘extra’ items with ‘!’ and ‘|’ are necessary in the reduce step for the calculations. The unit test for the reducer looks like: package net.pascalalma.hadoop.job2; ... 
public class RankCalculateReduceTest {ReduceDriver<Text, Text, Text, Text> reduceDriver;@Before public void setUp() { RankCalculateReduce reducer = new RankCalculateReduce(); reduceDriver = ReduceDriver.newReduceDriver(reducer); }@Test public void testReducer() throws IOException { List<Text> valuesM = new ArrayList<Text>(); valuesM.add(new Text("A\t1.0\t2")); valuesM.add(new Text("M\t1.0\t1")); valuesM.add(new Text("|M")); valuesM.add(new Text("!"));reduceDriver.withInput(new Text("M"), valuesM);List<Text> valuesA = new ArrayList<Text>(); valuesA.add(new Text("Y\t1.0\t2")); valuesA.add(new Text("|M,Y")); valuesA.add(new Text("!"));reduceDriver.withInput(new Text("A"), valuesA);List<Text> valuesY = new ArrayList<Text>(); valuesY.add(new Text("Y\t1.0\t2")); valuesY.add(new Text("|Y,A")); valuesY.add(new Text("!")); valuesY.add(new Text("A\t1.0\t2"));reduceDriver.withInput(new Text("Y"), valuesY);reduceDriver.withOutput(new Text("A"), new Text("0.6\tM,Y")); reduceDriver.withOutput(new Text("M"), new Text("1.4000001\tM")); reduceDriver.withOutput(new Text("Y"), new Text("1.0\tY,A"));reduceDriver.runTest(false); } } As is shown the output from the mapper is recreated as input and we check that the output of the reducer matches the first iteration of the page rank calculation. Each iteration will lead to the same output format but with possible different page rank values. Final step is the ‘ordering’ part. This is quite straightforward and so is the unit test. This part only contains a mapper which takes the output of the previous step and ‘reformats’ it to the wanted format: pagerank + page order by pagerank. The sorting by key is done by Hadoop framework when the mapper result is supplied to the reducer step so this ordering isn’t reflected in the Mapper unit test. The code for this unit test is: package net.pascalalma.hadoop.job3; ... 
public class RankingMapperTest {MapDriver<LongWritable, Text, FloatWritable, Text> mapDriver;@Before public void setUp() { RankingMapper mapper = new RankingMapper(); mapDriver = MapDriver.newMapDriver(mapper); }@Test public void testMapper() throws IOException { mapDriver.withInput(new LongWritable(1), new Text("A\t0.454545\tM,Y")); mapDriver.withInput(new LongWritable(2), new Text("M\t1.90\tM")); mapDriver.withInput(new LongWritable(3), new Text("Y\t0.68898\tY,A"));//Please note that we cannot check for ordering here because that is done by Hadoop after the Map phase mapDriver.withOutput(new FloatWritable(0.454545f), new Text("A")); mapDriver.withOutput(new FloatWritable(1.9f), new Text("M")); mapDriver.withOutput(new FloatWritable(0.68898f), new Text("Y")); mapDriver.runTest(false); } } So here we just check that the mapper takes the input and formats the output correctly. This concludes all the examples of the unit tests. With this project you should be able to test it yourself and gain a bigger insight into how the original code works. It sure helped me to understand it! The complete version of the code including unit tests can be found here. Reference: Calculate PageRanks with Apache Hadoop from our JCG partner Pascal Alma at the The Pragmatic Integrator blog....
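As a cross-check of the values asserted in RankCalculateReduceTest (0.6, 1.4000001 and 1.0), each new rank in this code base appears to work out as 0.2 + 0.8 * sum(incoming rank / outdegree), at least judging from the test data. A standalone sketch (my own) of that arithmetic in float precision:

```java
public class RankIterationCheck {
    // One iteration of the rank formula implied by the test data:
    // newRank = 0.2 + 0.8 * sum(sourceRank / sourceOutDegree), in float arithmetic
    static float newRank(float[][] incoming) {
        float sum = 0f;
        for (float[] in : incoming) {
            sum += in[0] / in[1]; // each entry: {sourceRank, sourceOutDegree}
        }
        return 0.2f + 0.8f * sum;
    }

    public static void main(String[] args) {
        // A is linked from Y only (rank 1.0, 2 outgoing links)
        System.out.println(newRank(new float[][]{{1f, 2f}}));
        // M is linked from A (1.0, 2 outlinks) and from itself (1.0, 1 outlink)
        System.out.println(newRank(new float[][]{{1f, 2f}, {1f, 1f}}));
        // Y is linked from A (1.0, 2 outlinks) and from itself (1.0, 2 outlinks)
        System.out.println(newRank(new float[][]{{1f, 2f}, {1f, 2f}}));
    }
}
```

The float rounding in the middle case is also where the reducer's slightly odd expected value 1.4000001 comes from.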
Java Code Geeks and all content copyright © 2010-2015, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.