Organizing an Agile Program: Part 2, Networks for Managing Agile Programs

In Organizing an Agile Program: Part 1, Introduction, I discussed the difference between hierarchies and networks. I used Scrum of Scrums as an example. It could be any organizing hierarchy. Remember, I like Scrum as a way to organize a project team’s work. I also like lean. I like XP. I like unbranded agile. I like anything that helps a team deliver features quickly and get feedback. I’m not religious about what any project team in the program uses. What works for small programs is going to be different from what works for medium programs, which in turn is going to be different from what works for large programs. Why? It’s the scaling problem and the communication path problem. Larger programs are not linear scales of smaller programs. That’s why I asked you how large your program was at the beginning.

Using a Network in a Medium Size Program

When you look at the small world network image here, you can see how the teams are connected. This might even be ideal for a five-team program. But what happens with a nine-team program? To me, that’s still a medium size program. And I claim that not all the teams have to be fully connected. In fact, I claim that they can’t be. No one can develop and maintain that many “intimate” connections with other people at work. Note: I realize Dunbar’s Number is about 150, maybe more, maybe less. Dunbar’s Number is the number of people you can maintain social relationships with. On a medium-size program, you have a shot at maintaining relationships with many of the people on the program. Maybe not all of them, but many of them. That helps you accomplish the work. If you have 6-person teams, and you have 9 teams, that’s only 54 people. The teams don’t have to all be connected, as in the small world network here. Some teams are more connected than other teams. Can you really track all the people on your program and know what the heck is going on with each of them? No, not when it comes to their code or tests or features.
Never mind when it comes to their human-ness. I’m suggesting you not even try. You track enough of what the other people do to be able to finish your work. Some people are well-connected with others. Some are not. This is why communities of practice help. (This is why lunch rooms help even more.) What you are able to do, however, is ask a question of your network and get the answer. Fast. You cooperate to produce the shippable product. You work with the people on your team and maintain relationships with a few other people. That’s what small world networks do. That’s why communities of practice work. That’s why the rumor mill works so well in organizations. We have some well-connected people, and a bunch of people who are not so well-connected. And, that’s okay. Here’s the key point: you don’t have to go up and down a hierarchy to accomplish work. You either know who to ask to accomplish work, or they know who to ask. Nobody needs to ask permission. There is no chief. There is no master. There is no hierarchy. There is cooperation.

What are the Risks in Programs?

Let’s back up a minute and ask what’s important to programs. Why are there standups in a team? The standups in a team are about micro-commitments to each other. They are also a way to show the status in a visible way. They help us see our risks. We can coordinate our features and see if we have too much work in progress. We ask about impediments. That’s why the Scrum of Scrums got started. All excellent reasons. If you think about what’s risky in programs, every program starts with these areas of risk:

- Coordinating the features, the backlog, among and between the teams.
- Nurturing the architecture, so it evolves at the “correct” pace: not so early that we waste time and energy on it, and not so late that we have technical architecture debt and frameworks that don’t work for us.
- Describing the status in a way that says, “Here is where we are and here is how we know when we will be done.”
- Describing our interdependent risks. Risks can be independent of features.

Your program will have other risks that are singular to your program. But managing features among teams, coordinating architecture, explaining status, and managing risk: those big four are the big issues in program management. So, how do we manage those risks if I claim that Scrum of Scrums is a hierarchy and doesn’t work so well? Let’s start with how we manage the issue of features in programs.

Programs Need Roadmaps

Programs need roadmaps so that the teams can see where they are headed. I am not saying the teams should implement ahead of this iteration’s backlog. If the teams have a roadmap, they can see where they are going with the features. This helps in multiple ways:

- They can see how this feature might interconnect with another team’s feature.
- They can see how this feature might affect the architecture.
- They can create product backlog burnup charts based on feature accomplishment.

In programs, teams need to be able to look ahead. They don’t need to implement ahead. That would be waste. No, we don’t want that. But looking ahead is quite useful. If the teams are able to look ahead, they can talk with their product owners and help the product owners see if it makes sense to implement some features together or not. Or, if it makes sense to change the order of some features. When I get to large programs, where several teams might work off the same backlog, I’ll come back to this point. I realize that several teams working off the same backlog is not restricted to large programs, but I have a backlog for writing too, and I’m not addressing this yet. A roadmap is a changing document. It is our best guess, based on the most recent demo. We expect it to change. We ask the program product owner to create and maintain the business value of the roadmap.
We ask the product owner community to create user stories from the phrases and words on the roadmap. The teams can see which release the features might occur in, and they can see which features they’re supposed to get done in this release and, most importantly now, across the program. Some of the words are not anything like stories. Some might be close to stories. The items on the roadmap close to us in time might be closer to stories. I would expect the ones farther away to be a lot less close. I would expect them to be epic in size. It’s the entire product owner community’s job to continually evaluate those phrases and ask, “Do we want these? If so, we need to define what they mean and create stories that represent what they mean.” I don’t care what approach the product owners use to create stories from the roadmap. But the roadmap is the 50,000-foot idea. Only the quarter that we are in has anything that might resemble stories. Oh, and that big black line? That’s what the teams need to complete this quarter. Anything below that would be great. As the teams complete the stories, the product owner community reassesses the remaining stories on the roadmap. Yes, they do. It’s a ton of work. Once you have a roadmap, the product owners can create user stories that make sense for their teams. The program product owner works with the teams, as needed. Since the teams are feature teams, not architecture-based teams, they can create product backlog burnup charts. Now, you can tell your status by seeing where you are in the roadmap. Note that you do not need a Gantt chart. You have finished some number of features, ranked by business value. You have some number of features remaining. You can finish the program at any time, because you are working by business value. Oh, and you don’t need ROI. You never try to predict anything. You can’t predict anything for a team, and you certainly can’t predict anything for a program.
Programs Need Architecture Nurturing

I am a huge fan of evolving the architecture in any agile program. Why? Because for all but the smallest of projects, the architecture has always changed. Now, that does not mean we shouldn’t think about the architecture as we proceed. What I like is when the project teams implement several features before they settle on a specific framework. This works especially well in small and medium-size programs. Just-in-time architecture, evolving it as you go, is a tough thing. It’s tough on the project teams. It’s tough on the architects. It’s so much easier to think about a framework (or frameworks) first, pick it, and attempt to bend the product to make it work. But that’s how we get a pile of technical debt, especially in a complex product, which is where you need a program. So, as much as I would like to pick an architecture early and stick with it, even I force myself to postpone the architecture decisions as late as we can, and keep evolving the architecture as much as possible.

What is the Most Responsible Date for Architecture?

Now, sometimes “as late as we can” is the second or third iteration. But in a medium-size program, the most responsible date is often later than that. And sometimes the architects need to work in a community, wayfinding along with the feature teams. Did you see in the roadmap in Q1, where we needed a platform before we could do any “real” features? If you have a hardware product, sometimes you need to do that. You just do. But for SaaS, you almost never do. This means I ask for architects to be embedded in the project teams. I also ask for architects to produce an updated picture of the architecture as an output of each iteration. Can you do this on your agile program? If you have tests to support your work, you can. Remember, agile is the most disciplined approach to product development. If you’re hacking and calling it agile, well, you can call it anything you want, but it’s not agile.
Explaining Status: The Standup, By Itself, is Not Sufficient

When you have a small program and you have Scrum of Scrums, the daily standup is supposed to manage all four of these issues: how the features work across the teams, how the architecture retains its integrity, what the status is, and what the risks are. In a medium program, that daily standup is supposed to do the same. Here is my question: Is the daily standup for your Scrum of Scrums working for you? Does it have everyone you need? If so, fine. You don’t need me. But for those of you who are struggling with the hierarchy that a Scrum of Scrums brings, or if you think your program is proceeding too slowly, you have other options. Or, if you need to know when your program will be done, you need agile program management. One of the problems when you have a medium program is that at some point, the number of people who participate in a Scrum of Scrums of teams starts to overwhelm any one standup meeting. The issues you have cannot be resolved in a standup. The standup is not adequate for the problems you encounter. (Remember, what was Scrum designed for? 5-7 people. What happens when you have more than 7 teams? You start to outgrow Scrum. Can you make it work? Of course you can. You can make anything work. You are a smart person, working with smart people. You can. Should you? That’s another question.) Asking “what did you complete,” or even the old “what did you do since our last standup,” is the wrong question. The question is irrelevant. As your program grows, the data overwhelms the ability of the people to take it in. Especially if the data is not useful. Are you doing continuous integration in your program? If so, you don’t need to ask that question in your program. Once you get past five teams, what you did or what you completed is not the question. You need to know what the obstacles are. You need to know what the interdependencies are. You need to know if deliverables are going to be late.
That’s program management. We’ll carry on with program management in part 3. This is long enough.   Reference: Organizing an Agile Program: Part 2, Networks for Managing Agile Programs from our JCG partner Johanna Rothman at the Managing Product Development blog. ...

Hunting down memory leaks: a case study

A week ago I was asked to fix a problematic webapp suffering from memory leaks. How hard can it be, I thought, considering that I have both seen and fixed hundreds of leaks over the past two years or so. But this one proved to be a challenge. 12 hours later I had discovered no less than five leaks in the application and had managed to fix four of them. I figured it would be an experience worth sharing. For the impatient ones, all in all I found leaks from:

- MySQL drivers launching background threads
- java.sql.DriverManager not unloaded on redeploys
- BoneCP loading resources from the wrong classloaders
- A datasource registered into a JNDI tree blocking unloading
- A connection pool using finalizers tied to Google’s implementation of a reference queue running in a separate thread

The application at hand was a simple Java web application with a few datasources connecting to relational databases, Spring in the middle to glue stuff together, and simple JSP pages rendered to the end user. No magic whatsoever. Or so I thought. Boy, was I wrong. First stop: MySQL drivers. Apparently the most common MySQL driver launches a thread in the background cleaning up your unused and unclosed connections. So far so good. But the catch is that the context classloader of this newly created thread is your web application classloader. Which means that while this thread is running and you are trying to undeploy your webapp, its classloader is left dangling behind, with all the classes loaded in it. Apparently it took from July 2012 to February 2013 to fix this after the bug was discovered. You can follow the discussion in the MySQL issue tracker. The solution finally implemented was a shutdown() method added to the API, which you as a developer are supposed to know to invoke before redeploys. Well, I didn’t. And I bet 99% of you out there didn’t, either. There is a good place for such shutdown hooks in your typical Java web application, namely the ServletContextListener’s contextDestroyed() method.
This specific method gets called each and every time the servlet context is destroyed, which most often happens during redeploys, for example. Chances are that quite a few developers are aware this place exists, but how many actually realise the need to clean up in this particular hook? Back to the application, which was still far from being fixed. My second discovery was also related to context classloaders and datasources. When you are using com.mysql.jdbc.Driver, it registers itself as a driver in the java.sql.DriverManager class. Again, this is done with good intentions. After all, this is what your application uses to figure out how to choose the right driver when connecting to a database URL. But as you might guess, there is a catch: this DriverManager is loaded in the bootstrap classloader, rather than your web application’s classloader, and so cannot be unloaded when redeploying your application. What makes things really peculiar is that there is no general way to unregister the driver by yourself. The reference to the class you are trying to unregister seems to be deliberately hidden from you. In this particular case I was lucky, and the connection pool used in the application was able to unregister the driver. Provided I remembered to ask it to. Looking back at similar cases in my past, this was the first time I saw such a feature implemented in a connection pool. Before that, I once had to enumerate through all the JDBC drivers registered with DriverManager to figure out which ones I should unregister. Not an experience I can recommend to anyone. This should be it, I thought. Two leaks in the same application is already more than one can tolerate. Wrong. The third issue staring right at me from the leak report was sun.awt.AppContext with its static field mainAppContext. What? I had no idea what this class is supposed to do, but I was pretty sure that the application at hand didn’t use AWT in any way.
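Before following the AWT trail, here is what the driver-related cleanup described above could look like as a plain helper class, typically invoked from contextDestroyed(). The class and method names are mine, not from the article; the MySQL cleanup-thread class is the one Connector/J 5.1 eventually shipped, and it is reached reflectively here so the code compiles and runs even without the driver on the classpath:

```java
import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Enumeration;

public class JdbcLeakPreventer {

    // Deregister every JDBC driver that was loaded by the given (webapp)
    // classloader, so the classloader can be garbage collected on redeploy.
    // Returns the number of drivers deregistered.
    public static int deregisterDrivers(ClassLoader webappLoader) {
        int count = 0;
        Enumeration<Driver> drivers = DriverManager.getDrivers();
        while (drivers.hasMoreElements()) {
            Driver driver = drivers.nextElement();
            if (driver.getClass().getClassLoader() == webappLoader) {
                try {
                    DriverManager.deregisterDriver(driver);
                    count++;
                } catch (SQLException ignored) {
                    // nothing sensible to do during shutdown
                }
            }
        }
        return count;
    }

    // Stop the MySQL abandoned-connection cleanup thread, if the driver
    // is present. Invoked reflectively to avoid a compile-time dependency.
    public static void shutdownMySqlCleanupThread() {
        try {
            Class.forName("com.mysql.jdbc.AbandonedConnectionCleanupThread")
                 .getMethod("shutdown")
                 .invoke(null);
        } catch (Exception ignored) {
            // driver not on the classpath, or a version without shutdown()
        }
    }
}
```

In a webapp you would call both methods from your ServletContextListener.contextDestroyed(), passing the webapp’s own classloader to deregisterDrivers().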
So I started a debugger to find out who loads this class (and why). Another surprise: it was com.sun.jmx.trace.Trace.out(). Can you think of a good reason why a com.sun.jmx class would call a sun.awt class? I certainly can’t. Nevertheless, that call stack originated from the connection pool, BoneCP. And there is absolutely no way to skip the line of code that leads to this particular memory leak. Solution? The following magic incantation in my ServletContextListener.contextInitialized():

Thread.currentThread().setContextClassLoader( null );
// Force the AppContext singleton to be created and initialized
// without holding a reference to the WebAppClassLoader
sun.awt.AppContext.getAppContext();

But I still wasn’t done: something was still leaking. In this case I found out that our application was binding its datasource into the InitialContext() JNDI tree, a good, standardized way to bind your objects for future discovery. But again, when using this nice thing you have to clean up after yourself, by unbinding the datasource from the JNDI tree in the very same contextDestroyed() method. Well, so far we had pretty logical, albeit rare and somewhat obscure, problems, which with some reasoning and google-fu were quickly fixed. My fifth and last problem was nothing like that. I still had the application crashing with OutOfMemoryError: PermGen. Both Plumbr and Eclipse MAT reported to me that the culprit, the one who had taken my classloader hostage, was a thread. “Who the hell is this guy?” was my last thought before the darkness engulfed me. A couple of hours and four coffees later I found myself staring at three lines:

emf.close();
emf = null;
ds = null;

It is hard to recollect exactly what happened during the intervening hours. I have vague memories of WeakReferences, ReferenceQueues, Finalizers, Reflection and my first time seeing a PhantomReference in the wild.
Even today I still cannot fully explain why and for what purpose the connection pool used finalizers tied to Google’s implementation of a reference queue running in a separate thread. Nor can I explain why closing the javax.persistence.EntityManagerFactory (named emf in the code above and held in a static reference in one of the application’s own classes) was not enough, so that I had to manually null this reference. The same went for a similar static reference to the data source used by that factory. I was sure that Java’s GC could cope with circular references all day long, but it seems that this magical ring of classes, static references, objects, finalizers and reference queues was too hard even for it. And so, for the first time in my long career, I had to nullify a Java reference by hand. I am a humble guy and thus cannot claim that I was the most efficient in finding the cure for all of the above in a mere 12 hours. But I have to admit I have been dealing with memory leaks almost exclusively for the past three years. And I even had my own creation, Plumbr, helping me (in fact, four out of five of those leaks were discovered by Plumbr in 30 minutes or so). But to actually solve those leaks, it took me more than a full working day in addition. Overall, something is apparently broken in the Java EE and/or classloader world. It cannot be normal that a developer must remember all those hooks and configuration tricks, because it simply isn’t possible. After all, we like to use our heads for something productive. And, as seen from the workarounds bundled with two popular servlet containers (Tomcat and Jetty), the problem is severe. Solving it, however, will require more than simply alleviating some of the symptoms; it will require curing the underlying design errors.   Reference: Hunting down memory leaks: a case study from our JCG partner Nikita Salnikov-Tarnovski at the Plumbr blog. ...

Expressive JAX-RS integration testing with Specs2 and client API 2.0

No doubt, JAX-RS is an outstanding piece of technology. And the upcoming JAX-RS 2.0 specification brings even more great features, especially concerning the client API. The topic of today’s post is integration testing of JAX-RS services. There are a bunch of excellent test frameworks, like REST-assured, to help with that, but the way I would like to present it is by using an expressive BDD style. Here is an example of what I mean by that:

Create new person with email <>
Given REST client for application deployed at http://localhost:8080
When I do POST to rest/api/people?
Then I expect HTTP code 201

Looks like the typical Given/When/Then style of modern BDD frameworks. How close can we get to this on the JVM, using a statically compiled language? It turns out, very close, thanks to the great specs2 test harness. One thing to mention: specs2 is a Scala framework. Though we are going to write a bit of Scala, we will do it in a very intuitive way, familiar to any experienced Java developer. The JAX-RS service under test is the one we developed in a previous post.
Here it is:

package com.example.rs;

import java.util.Collection;

import javax.inject.Inject;
import javax.ws.rs.DELETE;
import javax.ws.rs.DefaultValue;
import javax.ws.rs.FormParam;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.UriInfo;

import com.example.model.Person;
import com.example.services.PeopleService;

@Path( "/people" )
public class PeopleRestService {
    @Inject private PeopleService peopleService;

    @Produces( { MediaType.APPLICATION_JSON } )
    @GET
    public Collection< Person > getPeople( @QueryParam( "page" ) @DefaultValue( "1" ) final int page ) {
        return peopleService.getPeople( page, 5 );
    }

    @Produces( { MediaType.APPLICATION_JSON } )
    @Path( "/{email}" )
    @GET
    public Person getPeople( @PathParam( "email" ) final String email ) {
        return peopleService.getByEmail( email );
    }

    @Produces( { MediaType.APPLICATION_JSON } )
    @POST
    public Response addPerson( @Context final UriInfo uriInfo,
            @FormParam( "email" ) final String email,
            @FormParam( "firstName" ) final String firstName,
            @FormParam( "lastName" ) final String lastName ) {
        peopleService.addPerson( email, firstName, lastName );
        return Response.created( uriInfo.getRequestUriBuilder().path( email ).build() ).build();
    }

    @Produces( { MediaType.APPLICATION_JSON } )
    @Path( "/{email}" )
    @PUT
    public Person updatePerson( @PathParam( "email" ) final String email,
            @FormParam( "firstName" ) final String firstName,
            @FormParam( "lastName" ) final String lastName ) {
        final Person person = peopleService.getByEmail( email );

        if( firstName != null ) {
            person.setFirstName( firstName );
        }

        if( lastName != null ) {
            person.setLastName( lastName );
        }

        return person;
    }

    @Path( "/{email}" )
    @DELETE
    public Response deletePerson( @PathParam( "email" ) final String email ) {
        peopleService.removePerson( email );
        return Response.ok().build();
    }
}

A very simple JAX-RS service to manage people. All basic HTTP verbs are present and backed by a Java implementation: GET, PUT, POST and DELETE. To be complete, let me also include some methods of the service layer, as these raise the exceptions of interest to us.
package com.example.services;

import java.util.ArrayList;
import java.util.Collection;
import java.util.Iterator;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import org.springframework.stereotype.Service;

import com.example.exceptions.PersonAlreadyExistsException;
import com.example.exceptions.PersonNotFoundException;
import com.example.model.Person;

@Service
public class PeopleService {
    private final ConcurrentMap< String, Person > persons = new ConcurrentHashMap< String, Person >();

    // ...

    public Person getByEmail( final String email ) {
        final Person person = persons.get( email );

        if( person == null ) {
            throw new PersonNotFoundException( email );
        }

        return person;
    }

    public Person addPerson( final String email, final String firstName, final String lastName ) {
        final Person person = new Person( email );
        person.setFirstName( firstName );
        person.setLastName( lastName );

        if( persons.putIfAbsent( email, person ) != null ) {
            throw new PersonAlreadyExistsException( email );
        }

        return person;
    }

    public void removePerson( final String email ) {
        if( persons.remove( email ) == null ) {
            throw new PersonNotFoundException( email );
        }
    }
}

A very simple but working implementation based on a ConcurrentMap. The PersonNotFoundException is raised when a person with the requested e-mail doesn’t exist. Respectively, the PersonAlreadyExistsException is raised when a person with the requested e-mail already exists. Each of those exceptions has a counterpart among the HTTP status codes: 404 NOT FOUND and 409 CONFLICT.
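The addPerson() implementation above leans on the atomicity of ConcurrentMap.putIfAbsent, which returns the previous value for the key, or null if there was none. A tiny standalone sketch of that contract (the keys and values here are illustrative, not from the project):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class PutIfAbsentDemo {
    public static void main(String[] args) {
        ConcurrentMap<String, String> persons = new ConcurrentHashMap<String, String>();

        // First insert: no previous mapping, so putIfAbsent returns null
        String first = persons.putIfAbsent("a@b.com", "Tom");

        // Second insert with the same key: the existing value is returned
        // and the map is left untouched; this is exactly what lets the
        // service throw PersonAlreadyExistsException atomically, without
        // a separate containsKey() check that could race
        String second = persons.putIfAbsent("a@b.com", "Jerry");

        System.out.println(first);                   // null
        System.out.println(second);                  // Tom
        System.out.println(persons.get("a@b.com")); // Tom
    }
}
```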
And this is how we tell JAX-RS about that:

package com.example.exceptions;

import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.Response.Status;

public class PersonAlreadyExistsException extends WebApplicationException {
    private static final long serialVersionUID = 6817489620338221395L;

    public PersonAlreadyExistsException( final String email ) {
        super( Response
            .status( Status.CONFLICT )
            .entity( "Person already exists: " + email )
            .build() );
    }
}

package com.example.exceptions;

import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.Response.Status;

public class PersonNotFoundException extends WebApplicationException {
    private static final long serialVersionUID = -2894269137259898072L;

    public PersonNotFoundException( final String email ) {
        super( Response
            .status( Status.NOT_FOUND )
            .entity( "Person not found: " + email )
            .build() );
    }
}

The complete project is hosted on GitHub. Let’s finish with the boring part and move on to the sweet one: BDD. It is no surprise that specs2 has nice support for the Given/When/Then style, as described in its documentation. Using specs2, our test case becomes something like this:

"Create new person with email <>"                                          ^ br^
  "Given REST client for application deployed at ${http://localhost:8080}" ^ client^
  "When I do POST to ${rest/api/people}"                                   ^ post( Map( "email" -> "", "firstName" -> "Tommy", "lastName" -> "Knocker" ) )^
  "Then I expect HTTP code ${201}"                                         ^ expectResponseCode^
  "And HTTP header ${Location} to contain ${http://localhost:8080/rest/api/people/}" ^ expectResponseHeader^

Not bad, but what are those ^, br, client, post, expectResponseCode and expectResponseHeader? The ^ and br are just some sugar specs2 brings to support the Given/When/Then chain. The others, post, expectResponseCode and expectResponseHeader, are a couple of functions/variables we define to do the actual work.
For example, client is a new JAX-RS 2.0 client, which we create like this (using Scala syntax):

val client: Given[ Client ] = ( baseUrl: String ) =>
  ClientBuilder.newClient( new ClientConfig().property( "baseUrl", baseUrl ) )

The baseUrl is taken from the Given definition itself; it’s enclosed in the ${...} construct. Also, we can see that the Given definition has a strong type: Given[ Client ]. Later we will see that the same is true for When and Then; they have the respective strong types When[ T, V ] and Then[ V ]. The flow looks like this:

- start with the Given definition, which returns a Client
- continue with the When definition, which accepts the Client from Given and returns a Response
- end up with a number of Then definitions, which accept the Response from When and check the actual expectations

Here is what the post definition (which itself is a When[ Client, Response ]) looks like:

def post( values: Map[ String, Any ] ): When[ Client, Response ] = ( client: Client ) => ( url: String ) =>
  client
    .target( s"${client.getConfiguration.getProperty( "baseUrl" )}/$url" )
    .request( MediaType.APPLICATION_JSON )
    .post( Entity.form( values.foldLeft( new Form() )( ( form, param ) => form.param( param._1, param._2.toString ) ) ), classOf[ Response ] )

And finally expectResponseCode and expectResponseHeader, which are very similar and have the same type, Then[ Response ]:

val expectResponseCode: Then[ Response ] = ( response: Response ) => ( code: String ) =>
  response.getStatus() must_== code.toInt

val expectResponseHeader: Then[ Response ] = ( response: Response ) => ( header: String, value: String ) =>
  response.getHeaderString( header ) should contain( value )

Yet another example, checking the response content against a JSON payload:

"Retrieve existing person with email <>"                                   ^ br^
  "Given REST client for application deployed at ${http://localhost:8080}" ^ client^
  "When I do GET to ${rest/api/people/}"                                   ^ get^
  "Then I expect HTTP code ${200}"                                         ^ expectResponseCode^
  "And content to contain ${JSON}"                                         ^ expectResponseContent( """
    {
      "email": "",
      "firstName": "Tommy",
      "lastName": "Knocker"
    }
  """ )^

This time we are doing a GET request, using the following get implementation:

val get: When[ Client, Response ] = ( client: Client ) => ( url: String ) =>
  client
    .target( s"${client.getConfiguration.getProperty( "baseUrl" )}/$url" )
    .request( MediaType.APPLICATION_JSON )
    .get( classOf[ Response ] )

Though specs2 has a rich set of matchers to perform different checks against JSON payloads, I am using spray-json, a lightweight, clean and simple JSON implementation in Scala (it’s true!), and here is the expectResponseContent implementation:

def expectResponseContent( json: String ): Then[ Response ] = ( response: Response ) => ( format: String ) => {
  format match {
    case "JSON" => response.readEntity( classOf[ String ] ).asJson must_== json.asJson
    case _ => response.readEntity( classOf[ String ] ) must_== json
  }
}

And the last example (doing a POST for an existing e-mail):

"Create yet another person with same email <>"                             ^ br^
  "Given REST client for application deployed at ${http://localhost:8080}" ^ client^
  "When I do POST to ${rest/api/people}"                                   ^ post( Map( "email" -> "" ) )^
  "Then I expect HTTP code ${409}"                                         ^ expectResponseCode^
  "And content to contain ${Person already exists:}"                       ^ expectResponseContent^

Looks great! Nice, expressive BDD, using strong types and static compilation! For sure, JUnit integration is available and works great with Eclipse. And not to forget specs2’s own reports (generated by maven-specs2-plugin):

mvn clean test

Please look for the complete project on GitHub. Also, please note that, as I am using the latest JAX-RS 2.0 milestone (final draft), the API may change a bit when it is finally released.   Reference: Expressive JAX-RS integration testing with Specs2 and client API 2.0 from our JCG partner Andrey Redko at the Andriy Redko {devmind} blog. ...

Android Game Development with libgdx – Jumping, Gravity and improved movement, Part 3

This is the third part in the Building a Game with LibGdx series. Make sure you read the previous articles (Part 1a, Part 1b, Part 2) to build a context for this one. In the previous article we animated Bob’s movement, but the movement is quite robotic. In this article I’ll try to make Bob jump and also move in a more natural way. I will achieve this by using a little physics. I will also clean up the code a little and fix some issues that crept into the code in the previous articles.

Jumping – the physics

Jumping is the action performed by an entity (Bob in our case) which propels itself into the air and lands back onto the ground (substrate). This is achieved by applying a force big enough to overcome the force of gravity acting on the object. Identifying the objects, we have:

- Bob – entity
- Ground – substrate
- Gravity (G) – the constant force of gravity that acts on all entities in the world

To implement realistic jumping we will simply need to apply Newton’s laws of motion. If we add the necessary attributes (mass, gravity, friction) to Bob and the world, we have everything we need to implement jumping. Look at the following diagram and examine its components. The left side is when we hold down the ‘jump’ button and the right side shows Bob in a jump.

Let’s examine the forces in the different states of Bob.

1. Bob is idle and on the ground (grounded). In this case, only gravity acts on Bob. That means Bob is being pulled down with a constant force. The formula to calculate the force that pulls an object towards the ground is F = m * a, where m is the mass (think weight, although it is not weight) and a is the acceleration. We are simplifying things and consider Bob as having a mass of 1, so the force is equal to the acceleration. If we apply a constant force to an object, its velocity increases infinitely.
The formula to calculate an object’s velocity is v = u + a · t, where:
v – the final velocity
u – the initial velocity (the velocity the object had t seconds ago)
a – the acceleration
t – the time elapsed while the acceleration has been applied.
If we place Bob in the middle of the air, the starting velocity is 0. If we consider that the Earth’s gravitational acceleration is 9.8 and Bob’s weight (mass) is 1, then it’s easy to calculate his falling speed after a second: v = 0 + 9.8 · 1 = 9.8. So after a second in free fall, Bob accelerated from 0 to 9.8 meters per second, which is 35.28 kph or 21.92 mph. That is very fast. If we want to know his velocity after a further second, we use the same formula: v = 9.8 + 9.8 · 1 = 19.6 m/s, which is 70.56 kph or 43.84 mph. We can already see that the velocity grows linearly and that, under a constant force, an object will accelerate indefinitely. This is in an ideal environment where there is no friction or drag. Because the air has friction and applies its own forces to the falling object, the object will reach a terminal velocity at some point, past which it won’t accelerate. This depends on a lot of factors which we will ignore. Once the falling object hits the ground it will stop, and gravity won’t affect it any more. This is not physically accurate, but we are not building a complete physics simulator, just a game where Bob won’t get killed if he hits the ground at terminal velocity. Reformulating it: we check if Bob has hit the ground, and if so, we ignore gravity. Making Bob jump To make Bob jump, we need a force pointing opposite to gravity (upward) which not only cancels the effect of gravity but thrusts Bob into the air. If you check the diagram, that force (F) is much stronger (its magnitude or length is much greater than that of gravity’s vector). By adding the 2 vectors together (G and F) we obtain the final force that will act on Bob. To simplify things, we can get rid of vectors and work only with their Y components. On Earth, g = 9.8 m/s².
Because it is pointing down, it’s actually −9.8 m/s². When Bob jumps, he does nothing more than generate enough force to produce enough acceleration to get him to height (h) before gravity (G) takes him back to the ground. Because Bob is a human like us, he can’t maintain the acceleration once he is airborne, not without a jetpack at least. To simulate this, we could create a huge force when we press the ‘jump’ key. By applying the above formulas, the initial velocity will be high enough that, even with gravity acting on Bob, he will still climb to a point after which the free-fall sequence starts. If we implement this method we will have a really nice, realistic-looking jump. If we carefully check the original Star Guard game, the hero can jump to different heights depending on how long we press down the jump button. This is easily dealt with if we keep the upward-pointing force applied as long as we hold down the jump key and cut it off after a certain amount of time, just to make sure that Bob does not start to fly. Implement Jump That was enough physics; let’s see how we implement the jump. We will also do a little housekeeping and reorganise the code. I want to isolate the jumping and movement, so I will ignore the rest of the world. To see what has been modified in the code, scroll down to the Refactoring section. Open up BobController. This is the old WorldController, but renamed; it made sense since we control Bob with it. public class BobController {enum Keys { LEFT, RIGHT, JUMP, FIRE }private static final long LONG_JUMP_PRESS = 150l; private static final float ACCELERATION = 20f; private static final float GRAVITY = -20f; private static final float MAX_JUMP_SPEED = 7f; private static final float DAMP = 0.90f; private static final float MAX_VEL = 4f;// these are temporary private static final float WIDTH = 10f;private World world; private Bob bob; private long jumpPressedTime; private boolean jumpingPressed;// ... code omitted ... 
//public void jumpReleased() { keys.get(keys.put(Keys.JUMP, false)); jumpingPressed = false; }// ... code omitted ... // /** The main update method **/ public void update(float delta) { processInput();bob.getAcceleration().y = GRAVITY; bob.getAcceleration().mul(delta); bob.getVelocity().add(bob.getAcceleration().x, bob.getAcceleration().y); if (bob.getAcceleration().x == 0) bob.getVelocity().x *= DAMP; if (bob.getVelocity().x > MAX_VEL) { bob.getVelocity().x = MAX_VEL; } if (bob.getVelocity().x < -MAX_VEL) { bob.getVelocity().x = -MAX_VEL; }bob.update(delta); if (bob.getPosition().y < 0) { bob.getPosition().y = 0f; bob.setPosition(bob.getPosition()); if (bob.getState().equals(State.JUMPING)) { bob.setState(State.IDLE); } } if (bob.getPosition().x < 0) { bob.getPosition().x = 0; bob.setPosition(bob.getPosition()); if (!bob.getState().equals(State.JUMPING)) { bob.setState(State.IDLE); } } if (bob.getPosition().x > WIDTH - bob.getBounds().width ) { bob.getPosition().x = WIDTH - bob.getBounds().width; bob.setPosition(bob.getPosition()); if (!bob.getState().equals(State.JUMPING)) { bob.setState(State.IDLE); } } }/** Change Bob's state and parameters based on input controls **/ private boolean processInput() { if (keys.get(Keys.JUMP)) { if (!bob.getState().equals(State.JUMPING)) { jumpingPressed = true; jumpPressedTime = System.currentTimeMillis(); bob.setState(State.JUMPING); bob.getVelocity().y = MAX_JUMP_SPEED; } else { if (jumpingPressed && ((System.currentTimeMillis() - jumpPressedTime) >= LONG_JUMP_PRESS)) { jumpingPressed = false; } else { if (jumpingPressed) { bob.getVelocity().y = MAX_JUMP_SPEED; } } } } if (keys.get(Keys.LEFT)) { // left is pressed bob.setFacingLeft(true); if (!bob.getState().equals(State.JUMPING)) { bob.setState(State.WALKING); } bob.getAcceleration().x = -ACCELERATION; } else if (keys.get(Keys.RIGHT)) { // right is pressed bob.setFacingLeft(false); if (!bob.getState().equals(State.JUMPING)) { bob.setState(State.WALKING); } 
bob.getAcceleration().x = ACCELERATION; } else { if (!bob.getState().equals(State.JUMPING)) { bob.setState(State.IDLE); } bob.getAcceleration().x = 0;} return false; } } Take a bit of time to analyse what we have added to this class. The following lines are explained: #07 – #12 – constants containing values that affect the world and Bob:
LONG_JUMP_PRESS – time in milliseconds before the thrust applied to jump is cut off. Remember that we are doing high jumps, and the longer the player presses the button, the higher Bob jumps. To prevent flying we will cut off the jump propulsion after 150 ms.
ACCELERATION – this is actually used for walking/running. It is exactly the same principle as jumping, but on the horizontal X axis.
GRAVITY – this is the gravitational acceleration (G pointing down in the diagram).
MAX_JUMP_SPEED – this is the terminal velocity which we will never exceed when jumping.
DAMP – this is to smooth out movement when Bob stops, so he won’t stop that suddenly. More on this later; ignore it for the jump.
MAX_VEL – the same as MAX_JUMP_SPEED but for movement on the horizontal axis.
#15 – this is a temporary constant holding the width of the world in world units. It is used to limit Bob’s movement to the screen.
#19 – jumpPressedTime stores the time in milliseconds at which the jump button was pressed.
#20 – a boolean which is true if the jump button was pressed.
#26 – the jumpReleased() method has to set the jumpingPressed variable to false. It is just a simple state variable.
Next comes the main update method, which does most of the work for us.
#32 – calls processInput as usual to check if any keys were pressed.
Moving to processInput:
#71 – checks if the JUMP button is pressed.
#72 – #76 – in case Bob is not in the JUMPING state (meaning he is on the ground) the jumping is initiated. Bob is set to the jumping state and he is ready for take-off.
We cheat a little here: instead of applying a force pointing up, we set Bob’s vertical velocity to the maximum speed he can jump with (line #76). We also store the time in milliseconds when the jump was initiated.
#77 – #85 – this gets executed whenever Bob is in the air. If we are still pressing the jump button, we check how much time has elapsed since the jump was initiated; as long as we are inside the cut-off window (currently 150 ms), we maintain Bob’s vertical speed.
Ignore lines #87 – #107 as they are for horizontal walking. Going back to the update method we have:
#34 – Bob’s acceleration is set to GRAVITY. Gravity is a constant, so every update starts from it.
#35 – we scale the acceleration by the time spent in this cycle. Our initial values are in units/second, so we need to adjust them accordingly. If we have 60 updates per second then the delta will be 1/60. It’s all handled for you by libgdx.
#36 – Bob’s current velocity gets updated with his acceleration on both axes. Remember that we are working with vectors in Euclidean space.
#37 – This will smooth out Bob’s stopping. If we have NO acceleration on the X axis then we decrease his velocity by 10% every cycle. With many cycles in a second, Bob will come to a halt very quickly but very smoothly.
#38 – #43 – making sure Bob won’t exceed his maximum allowed speed (terminal velocity). This guards against the law that says an object will accelerate indefinitely if a constant force acts on it.
#45 – calls Bob’s update method, which does nothing else than update Bob’s position according to his velocity.
#46 – #66 – This is a very basic collision detection which prevents Bob from leaving the screen. We simply check if Bob’s position is outside the screen (using world coordinates) and if so, we just place Bob back at the edge. It is worth noting that whenever Bob hits the ground or reaches the edge of the world (screen), we set his status to Idle.
This allows us to jump again. If we run the application with the above changes, we will have the following effect: Housekeeping – refactoring We notice that in the resulting application there are no tiles and Bob is constrained only by the screen edges. There is also a different image for when Bob is in the air: one image when he is jumping and one when he is falling. We did the following:
Renamed WorldController to BobController. It made sense since we control Bob with it.
Commented out the drawBlocks() call in WorldRenderer's render() method. We don’t render the tiles now because we ignore them.
Added the setDebug() method to the WorldRenderer along with the supporting toggle function; debug rendering can now be toggled by pressing D on the keyboard in desktop mode.
WorldRenderer has new texture regions to represent the jumping and falling Bob. We still maintain just one state though. The renderer knows which one to display by checking Bob’s vertical velocity (on the Y axis): if it’s positive, Bob is jumping; if it’s negative, Bob is falling. public class WorldRenderer {// ... omitted ... //private TextureRegion bobJumpLeft; private TextureRegion bobFallLeft; private TextureRegion bobJumpRight; private TextureRegion bobFallRight;private void loadTextures() { TextureAtlas atlas = new TextureAtlas(Gdx.files.internal('images/textures/textures.pack'));// ... omitted ... //bobJumpLeft = atlas.findRegion('bob-up'); bobJumpRight = new TextureRegion(bobJumpLeft); bobJumpRight.flip(true, false); bobFallLeft = atlas.findRegion('bob-down'); bobFallRight = new TextureRegion(bobFallLeft); bobFallRight.flip(true, false); }private void drawBob() { Bob bob = world.getBob(); bobFrame = bob.isFacingLeft() ? bobIdleLeft : bobIdleRight; if(bob.getState().equals(State.WALKING)) { bobFrame = bob.isFacingLeft() ? 
walkLeftAnimation.getKeyFrame(bob.getStateTime(), true) : walkRightAnimation.getKeyFrame(bob.getStateTime(), true); } else if (bob.getState().equals(State.JUMPING)) { if (bob.getVelocity().y > 0) { bobFrame = bob.isFacingLeft() ? bobJumpLeft : bobJumpRight; } else { bobFrame = bob.isFacingLeft() ? bobFallLeft : bobFallRight; } } spriteBatch.draw(bobFrame, bob.getPosition().x * ppuX, bob.getPosition().y * ppuY, Bob.SIZE * ppuX, Bob.SIZE * ppuY); } } The above code excerpt shows the important additions. #5 – #8 – The new texture regions for jumping. We need one for left and one for right. #15 – #20 – The preparation of the assets. We need to add a few more png images to the project. Check the star-assault-android/images/ directory and there you will see bob-down.png and bob-up.png. These were added, and the texture atlas was recreated with the ImagePacker2 tool. See Part 2 on how to create it. #28 – #33 – This is the part where we determine which texture region to draw when Bob is in the air. There were also some bug fixes in the Bob class. The bounding box now has the same position as Bob, and the update method takes care of that. The setPosition method also updates the bounding box’s position. This had an impact on the drawDebug() method inside the WorldRenderer. Now we don’t need to worry about calculating the bounding boxes based on the tiles’ position, as the boxes now have the same position as the entity. This was a stupid bug which I let slip in. This will be very important when doing collision detection. This list pretty much sums up all the changes, but it should be very easy to follow through. The source code for this project can be found here; you need to check out the branch part3. To check it out with git: git clone -b part3 You can also download it as a zip file.   Reference: Android Game Development with libgdx – Jumping, Gravity and improved movement, Part 3 from our JCG partner Impaler at the Against the Grain blog. ...

Recent Java 8 News

Java 8 developments are starting to dominate the news again. Recent posts cover extending Milestone 7 of JDK 8 to ensure it’s feature complete, the Date/Time API now available in Java 8, and updates to the Java Tutorials to cover some Java 8 features. Extending JDK 8 M7 Mark Reinhold wrote in JDK 8 M6 status, and extending M7 that the ‘best results’ of an M7 ‘Developer Preview Release’ would be obtained if that M7 release was ‘feature complete.’ In particular, changes related to Project Lambda are best incorporated before the M7 release. Reinhold proposed slipping the M7 date, but did not estimate how long it needs to be extended because ‘it will take some time to determine how much of an extension is needed.’ Exploring Lambdas Speaking of lambda expressions, Dhananjay Nene’s blog post Exploring Java8 Lambdas. Part 1 provides an introductory look at Java 8 lambda expressions. The post begins with links to and quotes from fairly well-known resources on lambda expressions and moves on to greater detail and syntax focus. Another relatively recent post introducing lambdas is Java 8: The First Taste of Lambdas. Date/Time API in Java 8 In Java Time API Now In Java 8, Bienvenido David III writes: ThreeTen, the reference implementation of the JSR 310 Date and Time API, is now included in JDK 8 Early Access b75. The Java Time API for JDK 8 is under the package java.time, moving away from the javax.time package of earlier implementations. The Java Time Javadoc draft is also available. Although Lambda is obviously the largest feature of Java 8, I think it’s safe to say that many of us are excited about a new and better date/time API. Java Tutorial Updated with Java 8 Features Sharon Zakhour writes in Java Tutorial Updated! 
that ‘an update of the Java Tutorial has gone live’ and that the new 1 March 2013 edition of the tutorial ‘contains early access information on some upcoming Java SE 8 features.’ Zakhour then highlights some of the new Java 8 features already covered in the Java Tutorials such as a page on Lambda expressions (in which lambda expressions are compared to anonymous classes). The new tutorial page When To Use Nested Classes, Local Classes, Anonymous Classes, and Lambda Expressions is interesting because it focuses on when to use these different types of constructs. Other Java 8-related tutorial enhancements include a lesson devoted to annotations and a page on type annotations. All of these changes are listed under the ‘What’s New’ section header on the main tutorials page. Conclusion Enthusiasm for Java 8 is likely to be renewed as we approach its release date and begin to see more posts and articles on the features it provides.   Reference: Recent Java 8 News from our JCG partner Dustin Marx at the Inspired by Actual Events blog. ...

A Guide To Lifelong Employability For Tech Pros

There seems to be a rash of “Am I Unemployable?” posts and comments lately on sites that I frequent, and after reading the details the answer in my head is usually “Not quite, but it sounds like you are getting there”. In other words, someone will hire you for something, but many who assess themselves as unemployable are going to feel both underpaid and undervalued when they finally find work. How does a technology professional go from being consistently and happily employed for a number of years, only to find himself/herself suddenly unemployable? Better yet, what are the key differences between someone who spends months on a job search and someone who can unexpectedly lose a job Friday and start a new one the following Monday? How do certain people get job offers without even having to interview? It isn’t simply about skill, although that is obviously a factor. Even pros that are highly productive and well-regarded in some circles can encounter challenges in today’s hiring environment. It’s about creating relationships and developing your reputation/visibility. In my experience, pros that are always in demand and rarely (if ever) unemployed seem to share certain sets of habits, and while some of the material below is Career 101, there are some habits you probably never considered. As a longtime user group leader and recruiter of software engineers for the past 15 years, I see this first hand on a daily basis. Let’s start with the habits that are the least obvious and progress to some that are more widely practiced. Interview How often do you interview when you are not actively looking for a job? For most the answer is never, and I’d encourage you to take at least a couple of interviews a year. Going on the occasional interview can serve two purposes. First, they are a way to make new contacts and keep your name in the minds of potential employers, with the added benefit that these same interviewers may be working at new companies in a year or two. 
One interview could lead to an ‘in’ with four or five companies down the road. Second, it is the only way to keep interview skills sharp. Interview practice is best done live without a net, and failing the audition for a job you truly wanted is often attributed to rusty interview chops. Even a simple informational interview request (made by you) is an effective and creative way to make first contact with a potential employer. Know when to leave your job Without question, the group having the toughest time finding work is the unemployed with, say, ten+ years at the same company, and a close second would be employed workers with that same ten-year tenure. For anyone about to scream ‘ageism’, please hold that thought, as older technologists that have made smart moves do not typically have this issue. I would add that older engineers who possess the habits outlined here are the group being hired without interviews. There are always exceptions, but tech pros can stagnate more quickly than those in other industries due to the speed of change in technology. The definition of job hopping has morphed over the past fifteen years, and it is now understood that semi-regular movement is expected and accepted. Where other industries may interpret multiple employers as a symptom of disloyalty, in the software world a pattern of positive (moving to something better) job changes is often more indicative of a highly desirable candidate. Conversely, someone who has remained at a company for many years may be viewed by employers as loyal to a fault and potentially unambitious. If this person has solid skills, why has no company picked him/her up yet? Changing jobs before stagnating is critical to overall employability, and how quickly you stagnate will vary based primarily on your employer’s choices, your own ability to recognize that stagnation is happening, and your desire to not let it happen. 
Make ‘future marketability’ a primary criterion when choosing jobs or projects Carefully consider how a new position will impact your ability to find work later in your career and use that as one of your key incentives when evaluating opportunities. Details about your roles and responsibilities as well as the company’s technology choices and reputation in the industry are all potential factors. Does the company tend to use proprietary languages and frameworks that will not be useful later in your career? How will this look on my résumé? Many candidates today are choosing jobs or projects based on an opportunity to learn a new skill, and for this they are usually willing to sacrifice some other criteria. Reach out to others in the community (not coworkers) How many times have you sent an unsolicited email to someone in your field that you don’t know? “Congrats on your new release, product looks great!” or “Saw that you open sourced, look forward to checking it out” as an email or a tweet is an effective way to create a positive impression with a person or organization. Twitter is great as a public acknowledgment tool, and the character limit can actually be advantageous (no babbling). If you stumble on an article about a local company doing something interesting, there is much to be gained by a 140 character pat on the back. This is essentially networking without the investment of time. Lunch with others (again, not coworkers) You have to eat lunch anyway, so how about inviting someone you don’t know that well to lunch? Perhaps include a few people that share some common technology interest and turn it into a small roundtable discussion. Meeting with other tech pros outside an interview or meetup environment enables everyone to let their guard down, which leads to honest discussions about the experience of working at a company that you may consider in the future. It’s also an opportunity to learn about what technologies and tools are being used by other local shops. 
Public speaking This is an effective way to get attention as an authority in a subject matter, even on a local level. Preparing a presentation can be time consuming, but generally a wise investment. Even speaking to a somewhat small group once a year can help build your reputation. Attend a conference or group meeting This isn’t to be confused with going to every single meeting for every group in your area. Even getting to an event quarterly keeps you on the radar of others. Make an appearance just to show your face and say hi to a few people. Reading and writing about technology One could debate whether reading or writing has more value, but some combination of the two is likely the best formula. If you don’t know what to read, follow some peers and a few respected pros from your field on Twitter, LinkedIn or Google+, and make a point to read at least a few hours a week. As for writing, even just making comments and discussing articles has some value, with perhaps more value (for job hunting purposes) in places like Stack Overflow or Hacker News where your comments are scored and can be quantified. Creating your own body of written work should improve your understanding of a topic, demonstrate your ability to articulate that topic, and heighten your standing within the community. Build a personal code repo Many in the industry balk at this due to the time required, but having some code portfolio seems to be on the rise as an expectation hiring firms have for many senior level candidates. If the code you wrote at work is not available for demonstration during interviews, working on a personal project is more critical. Conclusion At first glance, this list may appear overwhelming, and I’m certain some readers will point to time constraints and the fact that they are working 60 hour weeks already. Some of these recommendations take considerable time, but at least a few require very little commitment. 
Employ a few of these tactics and hopefully you will never suffer through a prolonged job search again.   Reference: A Guide To Lifelong Employability For Tech Pros from our JCG partner Dave Fecak at the Job Tips For Geeks blog. ...

Advanced ListenableFuture capabilities

Last time we familiarized ourselves with ListenableFuture. I promised to introduce more advanced techniques, namely transformations and chaining. Let’s start from something straightforward. Say we have our ListenableFuture<String> which we got from some asynchronous service. We also have a simple method: Document parse(String xml) {//... We don’t need String, we need Document. One way would be to simply resolve the Future (wait for it) and do the processing on the String. But a much more elegant solution is to apply a transformation once the results are available and treat our method as if it was always returning ListenableFuture<Document>. This is pretty straightforward: final ListenableFuture<String> future = //... final ListenableFuture<Document> documentFuture = Futures.transform(future, new Function<String, Document>() { @Override public Document apply(String contents) { return parse(contents); } }); or more readable: final Function<String, Document> parseFun = new Function<String, Document>() { @Override public Document apply(String contents) { return parse(contents); } }; final ListenableFuture<String> future = //... final ListenableFuture<Document> documentFuture = Futures.transform(future, parseFun); Java syntax is a bit limiting, but please focus on what we just did. Futures.transform() doesn’t wait for the underlying ListenableFuture<String> in order to apply the parse() transformation. Instead, under the hood, it registers a callback, wishing to be notified whenever the given future finishes. This transformation is applied dynamically and transparently for us at the right moment. We still have a Future, but this time wrapping Document. So let’s go one step further. We also have an asynchronous, possibly long-running method that calculates the relevance (whatever that is in this context) of a given Document: ListenableFuture<Double> calculateRelevance(Document pageContents) {//... Can we somehow chain it with the ListenableFuture<Document> we already have? 
First attempt: final Function<Document, ListenableFuture<Double>> relevanceFun = new Function<Document, ListenableFuture<Double>>() { @Override public ListenableFuture<Double> apply(Document input) { return calculateRelevance(input); } }; final ListenableFuture<String> future = //... final ListenableFuture<Document> documentFuture = Futures.transform(future, parseFun); final ListenableFuture<ListenableFuture<Double>> relevanceFuture = Futures.transform(documentFuture, relevanceFun); Ouch! Future of future of Double, that doesn’t look good. Once we resolve the outer future we need to wait for the inner one as well. Definitely not elegant. Can we do better? final AsyncFunction<Document, Double> relevanceAsyncFun = new AsyncFunction<Document, Double>() { @Override public ListenableFuture<Double> apply(Document pageContents) throws Exception { return calculateRelevance(pageContents); } }; final ListenableFuture<String> future = //comes from ListeningExecutorService final ListenableFuture<Document> documentFuture = Futures.transform(future, parseFun); final ListenableFuture<Double> relevanceFuture = Futures.transform(documentFuture, relevanceAsyncFun); Please look very carefully at all the types and results. Notice the difference between Function and AsyncFunction. Initially we got an asynchronous method returning a future of String. Later on we transformed it to seamlessly turn the String into an XML Document. This transformation happens asynchronously, when the inner future completes. Having a future of Document, we would like to call a method that requires a Document and returns a future of Double. If we call relevanceFuture.get(), our Future object will first wait for the inner task to complete and, having its result (String -> Document), will wait for the outer task and return a Double. We can also register callbacks on relevanceFuture which will fire when the outer task (calculateRelevance()) finishes. If you are still here, there are even more crazy transformations. Remember that all this happens in a loop. 
For each web site we got a ListenableFuture<String> which we asynchronously transformed to a ListenableFuture<Double>. So in the end we work with a List<ListenableFuture<Double>>. This also means that in order to extract all the results we either have to register a listener for each and every ListenableFuture or wait for each of them, which doesn’t get us anywhere. But what if we could easily transform from List<ListenableFuture<Double>> to ListenableFuture<List<Double>>? Read carefully – from a list of futures to a future of a list. In other words, rather than having a bunch of small futures we have one future that will complete when all child futures complete – and the results are mapped one-to-one to the target list. Guess what, Guava can do this! final List<ListenableFuture<Double>> relevanceFutures = //...; final ListenableFuture<List<Double>> futureOfRelevance = Futures.allAsList(relevanceFutures); Of course there is no waiting here either. The wrapper ListenableFuture<List<Double>> will be notified every time one of its child futures completes. The moment the last child ListenableFuture<Double> completes, the outer future completes as well. Everything is event-driven and completely hidden from you. Do you think that’s it? Say we would like to compute the biggest relevance in the whole set. As you probably know by now, we won’t wait for a List<Double>. Instead we will register a transformation from List<Double> to Double! final ListenableFuture<Double> maxRelevanceFuture = Futures.transform(futureOfRelevance, new Function<List<Double>, Double>() { @Override public Double apply(List<Double> relevanceList) { return Collections.max(relevanceList); } }); Finally, we can listen for the completion event of maxRelevanceFuture and e.g. send the results (asynchronously!) using JMS. Here is the complete code if you lost track: private Document parse(String xml) { return //... 
} private final Function<String, Document> parseFun = new Function<String, Document>() { @Override public Document apply(String contents) { return parse(contents); } }; private ListenableFuture<Double> calculateRelevance(Document pageContents) { return //... } final AsyncFunction<Document, Double> relevanceAsyncFun = new AsyncFunction<Document, Double>() { @Override public ListenableFuture<Double> apply(Document pageContents) throws Exception { return calculateRelevance(pageContents); } }; //... final ListeningExecutorService pool = MoreExecutors.listeningDecorator( Executors.newFixedThreadPool(10) ); final List<ListenableFuture<Double>> relevanceFutures = new ArrayList<>(topSites.size()); for (final URL siteUrl : topSites) { final ListenableFuture<String> future = pool.submit(new Callable<String>() { @Override public String call() throws Exception { return IOUtils.toString(siteUrl, StandardCharsets.UTF_8); } }); final ListenableFuture<Document> documentFuture = Futures.transform(future, parseFun); final ListenableFuture<Double> relevanceFuture = Futures.transform(documentFuture, relevanceAsyncFun); relevanceFutures.add(relevanceFuture); } final ListenableFuture<List<Double>> futureOfRelevance = Futures.allAsList(relevanceFutures); final ListenableFuture<Double> maxRelevanceFuture = Futures.transform(futureOfRelevance, new Function<List<Double>, Double>() { @Override public Double apply(List<Double> relevanceList) { return Collections.max(relevanceList); } }); Futures.addCallback(maxRelevanceFuture, new FutureCallback<Double>() { @Override public void onSuccess(Double result) { log.debug("Result: {}", result); } @Override public void onFailure(Throwable t) { log.error("Error :-(", t); } }); Was it worth it? Yes and no. Yes, because we learned some really important constructs and primitives used together with futures/promises: chaining, mapping (transforming) and reducing. The solution is beautiful in terms of CPU utilization – no waiting, blocking, etc. 
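As a side note for readers on newer JDKs (this comparison is mine, not from the article): since Java 8 the standard library ships the same three primitives — chaining, mapping, and reducing — on CompletableFuture. The sketch below mirrors the Guava pipeline with toy stand-ins for the fetch/parse/relevance steps; all names and values are made up for illustration:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class CompletableFutureChain {
    public static void main(String[] args) {
        // One future per "site": fetch -> parse -> relevance, chained without blocking
        List<CompletableFuture<Double>> relevanceFutures = Arrays.asList("a", "bb", "ccc").stream()
                .map(site -> CompletableFuture.supplyAsync(() -> site)  // stands in for the async fetch
                        .thenApply(String::length)                      // like Futures.transform + Function
                        .thenCompose(len ->                             // like transform + AsyncFunction
                                CompletableFuture.supplyAsync(() -> len * 1.0)))
                .collect(Collectors.toList());

        // List of futures -> future of list, like Futures.allAsList
        CompletableFuture<List<Double>> all = CompletableFuture
                .allOf(relevanceFutures.toArray(new CompletableFuture[0]))
                .thenApply(v -> relevanceFutures.stream()
                        .map(CompletableFuture::join)  // join() won't block here: allOf already completed
                        .collect(Collectors.toList()));

        // Reduce: future of list -> future of the maximum, like the final transform
        CompletableFuture<Double> max = all.thenApply(Collections::max);
        System.out.println(max.join());  // prints 3.0 for these toy inputs
    }
}
```

The structure maps one-to-one: thenApply plays the role of Futures.transform with a Function, thenCompose flattens the future-of-future problem the way AsyncFunction does, and allOf plus a join-collecting thenApply replaces Futures.allAsList.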
Remember that the biggest strength of Node.js is its "no-blocking" policy. Futures are also ubiquitous in Netty. Last but not least, it feels very functional. On the other hand, mainly due to Java's syntax verbosity and lack of type inference (yes, we will jump into Scala soon), the code is hard to read, follow and maintain. To some degree this holds true for all message-driven systems. But until we invent better APIs and primitives, we must learn to live with, and take advantage of, asynchronous, highly parallel computations. If you want to experiment with ListenableFuture even more, don't forget to read the official documentation.   Reference: Advanced ListenableFuture capabilities from our JCG partner Tomasz Nurkiewicz at the NoBlogDefFound blog.
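The list-of-futures to future-of-list reduction shown above is not unique to Guava. As a comparison, here is a minimal sketch of the same chaining and reducing pattern using only the JDK's CompletableFuture (the fetchRelevance helper and the site names are invented for illustration, not part of the article's code):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class MaxRelevance {
    // Hypothetical async computation standing in for download + parse + relevance scoring.
    static CompletableFuture<Double> fetchRelevance(String site) {
        return CompletableFuture.supplyAsync(() -> (double) site.length());
    }

    public static void main(String[] args) {
        List<String> sites = Arrays.asList("a.com", "bb.com", "ccc.com");

        // One future per site -- the analogue of List<ListenableFuture<Double>>.
        List<CompletableFuture<Double>> futures = sites.stream()
                .map(MaxRelevance::fetchRelevance)
                .collect(Collectors.toList());

        // allOf() completes when every child completes -- the analogue of Futures.allAsList().
        CompletableFuture<List<Double>> futureOfList =
                CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]))
                        .thenApply(v -> futures.stream()
                                .map(CompletableFuture::join)   // already completed, no blocking
                                .collect(Collectors.toList()));

        // Reduce the list to its maximum without blocking -- like the Futures.transform() step.
        CompletableFuture<Double> max = futureOfList.thenApply(Collections::max);

        System.out.println(max.join()); // the only blocking call, at the very end
    }
}
```

The shape of the computation is identical: map each input to a future, collect the futures, collapse them into one future of a list, then transform that future again.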

Implement Bootstrap Pagination With Spring Data And Thymeleaf

Twitter Bootstrap has a very nice pagination UI, and here I will show you how to implement it with the Spring Data web pagination function and Thymeleaf conditional evaluation features.

Standard Pagination in Bootstrap

'Simple pagination inspired by Rdio, great for apps and search results. The large block is hard to miss, easily scalable, and provides large click areas.'

The original source code to display pagination from the Bootstrap documentation is very simple:

<div class="pagination">
    <ul>
        <li><a href="#">Prev</a></li>
        <li><a href="#">1</a></li>
        <li><a href="#">2</a></li>
        <li><a href="#">3</a></li>
        <li><a href="#">4</a></li>
        <li><a href="#">5</a></li>
        <li><a href="#">Next</a></li>
    </ul>
</div>

You can see this is just mock-up code; to make it display page numbers dynamically with correct hyperlink URLs, I need to make many changes to my existing code. So let's start from the bottom up: change the domain layer first, then the application service layer, then the presentation layer, and finally the configuration to glue them together.

Domain Layer Changes

The only change in the domain layer is in BlogPostRepository. Before, it had a method to retrieve a list of published BlogPost entries sorted by publishedTime:

public interface BlogPostRepository extends MongoRepository<BlogPost, String> {
    ...
    List<BlogPost> findByPublishedIsTrueOrderByPublishedTimeDesc();
    ...
}

Now we need a paginated result list. With Spring Data Page, we return Page<BlogPost> instead of List<BlogPost>, and pass a Pageable parameter:

public interface BlogPostRepository extends MongoRepository<BlogPost, String> {
    ...
    Page<BlogPost> findByPublishedIsTrueOrderByPublishedTimeDesc(Pageable pageable);
    ...
}

Application Service Layer Changes

The application service layer change is also very simple, just using the new method from BlogPostRepository.

BlogService interface:

public interface BlogService {
    ...
    Page<BlogPost> getAllPublishedPosts(Pageable pageable);
    ...
}

BlogServiceImpl class:

public class BlogServiceImpl implements BlogService {
    ...
    private final BlogPostRepository blogPostRepository;
    ...
    @Override
    public Page<BlogPost> getAllPublishedPosts(Pageable pageable) {
        Page<BlogPost> blogList = blogPostRepository.findByPublishedIsTrueOrderByPublishedTimeDesc(pageable);
        return blogList;
    }
    ...
}

Presentation Layer Changes

The Spring Data Page interface has many nice functions to get the current page number, the total number of pages, etc. But it still lacks a way to display only a partial range of the total pagination. So I created an adapter class that wraps the Spring Data Page interface with additional features.

public class PageWrapper<T> {
    public static final int MAX_PAGE_ITEM_DISPLAY = 5;
    private Page<T> page;
    private List<PageItem> items;
    private int currentNumber;
    private String url;

    public String getUrl() {
        return url;
    }

    public void setUrl(String url) {
        this.url = url;
    }

    public PageWrapper(Page<T> page, String url) {
        this.page = page;
        this.url = url;
        items = new ArrayList<PageItem>();

        currentNumber = page.getNumber() + 1; // start from 1 to match page.page

        int start, size;
        if (page.getTotalPages() <= MAX_PAGE_ITEM_DISPLAY) {
            start = 1;
            size = page.getTotalPages();
        } else {
            if (currentNumber <= MAX_PAGE_ITEM_DISPLAY - MAX_PAGE_ITEM_DISPLAY / 2) {
                start = 1;
                size = MAX_PAGE_ITEM_DISPLAY;
            } else if (currentNumber >= page.getTotalPages() - MAX_PAGE_ITEM_DISPLAY / 2) {
                start = page.getTotalPages() - MAX_PAGE_ITEM_DISPLAY + 1;
                size = MAX_PAGE_ITEM_DISPLAY;
            } else {
                start = currentNumber - MAX_PAGE_ITEM_DISPLAY / 2;
                size = MAX_PAGE_ITEM_DISPLAY;
            }
        }

        for (int i = 0; i < size; i++) {
            items.add(new PageItem(start + i, (start + i) == currentNumber));
        }
    }

    public List<PageItem> getItems() {
        return items;
    }

    public int getNumber() {
        return currentNumber;
    }

    public List<T> getContent() {
        return page.getContent();
    }

    public int getSize() {
        return page.getSize();
    }

    public int getTotalPages() {
        return page.getTotalPages();
    }

    public boolean isFirstPage() {
        return page.isFirstPage();
    }

    public boolean isLastPage() {
        return page.isLastPage();
    }

    public boolean isHasPreviousPage() {
        return page.hasPreviousPage();
    }

    public boolean isHasNextPage() {
        return page.hasNextPage();
    }

    public class PageItem {
        private int number;
        private boolean current;

        public PageItem(int number, boolean current) {
            this.number = number;
            this.current = current;
        }

        public int getNumber() {
            return this.number;
        }

        public boolean isCurrent() {
            return this.current;
        }
    }
}

With this PageWrapper, we can wrap the Page<BlogPost> returned from BlogService and put it into the Spring MVC UI model. See the controller code for the blog page:

@Controller
public class BlogController {
    ...
    @RequestMapping(value = "/blog", method = RequestMethod.GET)
    public String blog(Model uiModel, Pageable pageable) {
        PageWrapper<BlogPost> page = new PageWrapper<BlogPost>(blogService.getAllPublishedPosts(pageable), "/blog");
        uiModel.addAttribute("page", page);
        return "blog";
    }
    ...
}

The Pageable is passed in by the PageableArgumentResolver, which I will explain later. Another trick is that I also pass the view URL to PageWrapper, so it can be used to construct the Thymeleaf hyperlinks in the pagination bar. Since my PageWrapper is very generic, I created an HTML fragment for the pagination bar, so I can use it anywhere in my application pages when pagination is needed. This fragment uses the Thymeleaf th:if attribute to dynamically switch between static text and a hyperlink depending on whether the link is disabled, and th:href to construct a URL with the correct page number and page size.

<!-- Pagination Bar -->
<div th:fragment="paginationbar">
    <div class="pagination pagination-centered">
        <ul>
            <li th:class="${page.firstPage}? 'disabled' : ''">
                <span th:if="${page.firstPage}">← First</span>
                <a th:if="${not page.firstPage}" th:href="@{${page.url}(,page.size=${page.size})}">← First</a>
            </li>
            <li th:class="${page.hasPreviousPage}?
'' : 'disabled'">
                <span th:if="${not page.hasPreviousPage}">«</span>
                <a th:if="${page.hasPreviousPage}" th:href="@{${page.url}(${page.number-1},page.size=${page.size})}" title="Go to previous page">«</a>
            </li>
            <li th:each="item : ${page.items}" th:class="${item.current}? 'active' : ''">
                <span th:if="${item.current}" th:text="${item.number}">1</span>
                <a th:if="${not item.current}" th:href="@{${page.url}(${item.number},page.size=${page.size})}"><span th:text="${item.number}">1</span></a>
            </li>
            <li th:class="${page.hasNextPage}? '' : 'disabled'">
                <span th:if="${not page.hasNextPage}">»</span>
                <a th:if="${page.hasNextPage}" th:href="@{${page.url}(${page.number+1},page.size=${page.size})}" title="Go to next page">»</a>
            </li>
            <li th:class="${page.lastPage}? 'disabled' : ''">
                <span th:if="${page.lastPage}">Last →</span>
                <a th:if="${not page.lastPage}" th:href="@{${page.url}(${page.totalPages},page.size=${page.size})}">Last →</a>
            </li>
        </ul>
    </div>
</div>

Spring Configuration Change

The last step is to put it all together. Fortunately, I did some research before I updated my code. There is a very good blog post, Pagination with Spring MVC, Spring Data and Java Config, by Doug Haber. In it, Doug mentions several gotchas; in particular, the Pageable parameter needs some configuration magic:

'In order for Spring to know how to convert the parameter to a Pageable object you need to configure a HandlerMethodArgumentResolver. Spring Data provides a PageableArgumentResolver but it uses the old ArgumentResolver interface instead of the new (Spring 3.1) HandlerMethodArgumentResolver interface. The XML config can handle this discrepancy for us, but since we're using Java Config we have to do it a little more manually.
Luckily this can be easily resolved if you know the right magic incantation…' (Doug Haber)

With Doug's help, I added this argument resolver to my WebConfig class:

@Configuration
@EnableWebMvc
@ComponentScan(basePackages = "")
public class WebConfig extends WebMvcConfigurerAdapter {
    ...
    @Override
    public void addArgumentResolvers(List<HandlerMethodArgumentResolver> argumentResolvers) {
        PageableArgumentResolver resolver = new PageableArgumentResolver();
        resolver.setFallbackPagable(new PageRequest(1, 5));
        argumentResolvers.add(new ServletWebArgumentResolverAdapter(resolver));
    }
    ...
}

After all those changes, my blog list has a pagination bar at the top and bottom of the page. It always shows at most 5 page numbers, with the current number in the middle and disabled, and the bar has First and Previous links at the beginning and Next and Last links at the end. I also use it in my admin pages, for the user list and the comment list, and it works very well.   Reference: Implement Bootstrap Pagination With Spring Data And Thymeleaf from our JCG partner Yuan Ji at the Jiwhiz blog.
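The windowing calculation inside PageWrapper (show at most 5 page items, keeping the current page near the middle and pinning the window at either edge) can be isolated and checked on its own. Below is a hedged, self-contained sketch of just that calculation; PageWindow and its start method are my own names for illustration, not part of the article's code:

```java
public class PageWindow {
    static final int MAX_PAGE_ITEM_DISPLAY = 5;

    // Returns the first page number (1-based) of the visible window,
    // mirroring the branch structure of the PageWrapper constructor.
    static int start(int currentNumber, int totalPages) {
        if (totalPages <= MAX_PAGE_ITEM_DISPLAY) {
            return 1;                                         // few pages: show everything
        } else if (currentNumber <= MAX_PAGE_ITEM_DISPLAY - MAX_PAGE_ITEM_DISPLAY / 2) {
            return 1;                                         // pinned to the left edge
        } else if (currentNumber >= totalPages - MAX_PAGE_ITEM_DISPLAY / 2) {
            return totalPages - MAX_PAGE_ITEM_DISPLAY + 1;    // pinned to the right edge
        } else {
            return currentNumber - MAX_PAGE_ITEM_DISPLAY / 2; // centered on the current page
        }
    }

    public static void main(String[] args) {
        System.out.println(start(1, 20));  // 1  -> window is pages 1..5
        System.out.println(start(10, 20)); // 8  -> window is pages 8..12, 10 in the middle
        System.out.println(start(19, 20)); // 16 -> window is pages 16..20
    }
}
```

Pulling the arithmetic out like this makes it easy to unit-test the edge pinning separately from Spring Data and the view layer.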

Best Practices Ever for Software Development

1. Write programs for people, not computers.
   - a program should not require its readers to hold more than a handful of facts in memory at once
   - names should be consistent, distinctive and meaningful
   - code style and formatting should be consistent
   - all aspects of software development should be broken down into tasks roughly an hour long
2. Automate repetitive tasks.
   - rely on the computer to repeat tasks
   - save recent commands in a file for re-use
   - use a build tool to automate scientific workflows
3. Use the computer to record history.
   - tools should be used to track computational work automatically
4. Make incremental changes.
   - work in small steps with frequent feedback and course correction
5. Use version control.
   - use a version control system
   - everything that has been created manually should be put in version control
6. Don't repeat yourself (or others).
   - every piece of data must have a single authoritative representation in the system
   - code should be modularized rather than copied and pasted
   - re-use code instead of rewriting it
7. Plan for mistakes.
   - add assertions to programs to check their operation
   - use an off-the-shelf unit testing library
   - use all available oracles when testing programs
   - turn bugs into test cases
   - use a symbolic debugger
8. Optimize software only after it works correctly.
   - use a profiler to identify bottlenecks
   - write code in the highest-level language possible
9. Document design and purpose, not mechanics.
   - document interfaces and reasons, not implementations
   - refactor code instead of explaining how it works
   - embed the documentation for a piece of software in that software
10. Collaborate.
   - use pre-merge code reviews
   - use pair programming when bringing someone new up to speed and when tackling particularly tricky problems

The only extra I would have included would be:

11. Maintain and update older code.

  Reference: Best Practices Ever for Software Development from our JCG partner Andriy Andrunevchyn at the Java User Group of Lviv blog.
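Two of the "plan for mistakes" items, adding assertions and turning bugs into test cases, can be made concrete with a tiny sketch. The function, its contract and the "bug report" input below are invented for illustration only:

```java
public class MeanWithChecks {
    // "Add assertions to programs to check their operation":
    // the assertion documents and enforces the function's precondition.
    static int mean(int[] xs) {
        assert xs != null && xs.length > 0 : "mean() needs a non-empty array";
        int sum = 0;
        for (int x : xs) {
            sum += x;
        }
        return sum / xs.length;
    }

    public static void main(String[] args) {
        // "Turn bugs into test cases": a once-failing input is frozen here
        // forever, so the same regression can never silently return.
        int[] onceReportedInput = {2, 4, 6};
        if (mean(onceReportedInput) != 4) {
            throw new AssertionError("regression: mean({2,4,6}) should be 4");
        }
        System.out.println(mean(onceReportedInput));
    }
}
```

In a real project the regression check would live in an off-the-shelf unit testing library (item 7 again) rather than in main.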

Git – let’s make errors (and learn how to revert them)

It's not a secret that git is not a very easy tool to use. I am able to use it more or less, but I always feel a little scared and confused about what's happening, and I feel that I want more information. I have followed some tutorials and skimmed some books but, with too much information, I always end up with just the feeling that I could do what I want to do, without actually knowing how to do it. I want to fix this situation, so I have started to investigate more, and I am trying to stick some key concepts in my head, hoping to never forget them. Let me start by giving credit to my sources: I have found those articles interesting and helpful, the first in particular, but they already say too many things for the simplified model that I need. Let's assume that you are already a git user. You have a local repo and a remote one used to pull and push your work. You are also aware of the existence of branches and you are using them. But still, with this basic knowledge, you have the feeling of not being sure about your actions. I guess that one key to this confusion is the role of the staging area. Please note that in my discussion I am giving my understanding of it, and I could be wrong. Nevertheless, I can build my knowledge on this concept and give myself a rationale behind it that helps me gain confidence with the tool. I like to think of the staging area (the place where your modifications get tracked when you perform a git add operation) as a way to do a 'lighter commit'. What I mean by lighter commit is that, since you are not forced to give a comment upon this action, you have fewer constraints. And since you are not even saving your action yet, you are clearly encouraged to perform add much more often than commit.
Let's give a scenario for our add use case: the process of adding some new functionality to your codebase. It probably involves the creation of new files, and ideas just pop into your mind in an unordered fashion. To give an example, let's pretend that you are creating 2 files with a reference to each other – maybe a source code file and its configuration file. You create the first and start working on it. When you have finished working on it you could think of committing it, but since your logical unit of work is not complete until you also create the configuration file, you have the opportunity to perform the mentioned 'lighter commit', i.e. a simple add. After you have finished the work on the configuration file, you add it to the index and now you can commit. Or, if you prefer, you can do git commit -a to obtain the same result. Since we have given a use case for the staging area, it should become easier to figure out its role in the git workflow. It's the logical place that sits between the current untracked directory and the committed (and safe) repository. We are calling it a 'place', so we can assume that we are interested in interacting with it. The already well-known way to put things into it is the command:

git add

and it has 2 companion commands that you will use very often:

git diff

As the name suggests, it lists differences. But which ones? In its form without parameters, it lists differences between your current folder and the staging area.

touch test.txt
echo 'text' >> test.txt
git add test.txt
echo 'added1' >> test.txt
git diff

returns this output:

gittest$ git diff
diff --git a/test.txt b/test.txt
index 8e27be7..2db9057 100644
--- a/test.txt
+++ b/test.txt
@@ -1 +1,2 @@
 text
+added1

OK, we are now able to see the differences between our working folder and the tracking context. We can obviously track the new modification with an add command, but we also want the opportunity to throw away our modification. This is obtained with git checkout .
git checkout, without parameters (other than the 'dot' representing the current folder), will throw away the modifications to your files and revert the status to the one tracked in the staging area with the previous add commands.

gittest$ git status
# On branch master
# Changes to be committed:
#   (use 'git reset HEAD <file>...' to unstage)
#
#       new file:   test.txt
#
# Changes not staged for commit:
#   (use 'git add <file>...' to update what will be committed)
#   (use 'git checkout -- <file>...' to discard changes in working directory)
#
#       modified:   test.txt
#

gittest$ git checkout .

gittest$ git status
# On branch master
# Changes to be committed:
#   (use 'git reset HEAD <file>...' to unstage)
#
#       new file:   test.txt
#

We have given a meaning to the staging area. We can also think of it as the very first 'environment' we are facing, since every command without specific parameters works on staging. Let's move on. We are now able to add changes to, or discard changes from, the staging area. We also know how to persistently store the changes, via git commit. What we do not yet know is how to discard our staging area completely. In parallel with what we just did before, discarding staging is performed with:

git checkout HEAD .

This technically means that we are reverting to a specific commit point, the last one (HEAD). Before testing this we have to perform a couple of interactions, since git's somewhat inconsistent behaviour doesn't allow us to execute the test right away. The reason is that our file was a 'new' file and not a 'modified' one. This breaks the symmetry, but let me come back to this concept later.

@pantinor gittest$ git status
# On branch master
# Changes to be committed:
#   (use 'git reset HEAD <file>...'
to unstage)
#
#       new file:   test.txt
#

pantinor@pantinor gittest$ git commit -m 'added new file'
[master f331e52] added new file
 1 file changed, 1 insertion(+)
 create mode 100644 test.txt
pantinor@pantinor gittest$ git status
# On branch master
nothing to commit (working directory clean)
pantinor@pantinor gittest$ echo 'added' >> test.txt
pantinor@pantinor gittest$ git status
# On branch master
# Changes not staged for commit:
#   (use 'git add <file>...' to update what will be committed)
#   (use 'git checkout -- <file>...' to discard changes in working directory)
#
#       modified:   test.txt
#
no changes added to commit (use 'git add' and/or 'git commit -a')
pantinor@pantinor gittest$ git add test.txt
pantinor@pantinor gittest$ git status
# On branch master
# Changes to be committed:
#   (use 'git reset HEAD <file>...' to unstage)
#
#       modified:   test.txt
#
pantinor@pantinor gittest$ git checkout HEAD .
pantinor@pantinor gittest$ git status
# On branch master
nothing to commit (working directory clean)

We have just learnt how to revert to a clean situation, and we are now much less scared of the staging area. But we are still bad git users: we always forget to branch before starting to modify a working folder, as suggested here. In my case it often goes like this: I have a stable situation, then I start to tweak something. But the tweaking is not linear, and after some minutes I have lots of modified files. Yes, I could stage them all and commit them, but I do not trust myself and I do not want to pollute the master branch. It would have been much better if I had been on a dev branch from the beginning of my modifications. What can I do now? We can create a branch on the fly and switch to it.

pantinor@pantinor gittest$ echo something >> test.txt
pantinor@pantinor gittest$ git status
# On branch master
# Changes not staged for commit:
#   (use 'git add <file>...' to update what will be committed)
#   (use 'git checkout -- <file>...'
to discard changes in working directory)
#
#       modified:   test.txt
#
no changes added to commit (use 'git add' and/or 'git commit -a')
pantinor@pantinor gittest$ git checkout -b dev
M	test.txt
Switched to a new branch 'dev'

On this new branch we are still accessing the shared staging area, as you can see from my output:

pantinor@pantinor gittest$ git status
# On branch dev
# Changes not staged for commit:
#   (use 'git add <file>...' to update what will be committed)
#   (use 'git checkout -- <file>...' to discard changes in working directory)
#
#       modified:   test.txt
#
no changes added to commit (use 'git add' and/or 'git commit -a')

What we want to do now is to add the working situation to the staging area and commit it, so as to flush the shared staging area.

pantinor@pantinor gittest$ git add .
pantinor@pantinor gittest$ git commit -m unstable
[dev 5d597b2] unstable
 1 file changed, 1 insertion(+)
pantinor@pantinor gittest$ git status
# On branch dev
nothing to commit (working directory clean)

pantinor@pantinor gittest$ cat test.txt
text
something

And then, when we go back to our master, we find it free of all the experimental modifications that were not mature enough for the master branch:

pantinor@pantinor gittest$ git checkout master
Switched to branch 'master'
pantinor@pantinor gittest$ git status
# On branch master
nothing to commit (working directory clean)
pantinor@pantinor gittest$ cat test.txt
text

Great. By keeping our commands relatively simple and free of parameters and flags, we are able to recover from all the errors that we are inevitably going to make anyway. Let's now introduce another pattern to cope with our other typical errors. The situation is similar to the one just described, but a little worse. Again, we haven't branched before starting to play with the code, but this time we have also committed a couple of times before realizing that what we have committed is not as good as we thought.
What we want to do this time is to keep our unstable situation, but move it away (hard reset) from the current branch. Let's do a couple of commits:

pantinor@pantinor gittest$ git status
# On branch master
nothing to commit (working directory clean)
pantinor@pantinor gittest$ cat test.txt
text
pantinor@pantinor gittest$ echo 'modification1' >> test.txt
pantinor@pantinor gittest$ git commit -a -m 'first commit'
[master 9ad2aa8] first commit
 1 file changed, 1 insertion(+)
pantinor@pantinor gittest$ echo 'modification2' >> test.txt
pantinor@pantinor gittest$ git commit -a -m 'second commit'
[master 7005a92] second commit
 1 file changed, 1 insertion(+)
pantinor@pantinor gittest$ cat test.txt
text
modification1
modification2
pantinor@pantinor gittest$ git log
commit 7005a92a3ceee37255dc7143239d55c7c3467551
Author: Paolo Antinori <pantinor''>
Date:   Sun Dec 16 21:05:48 2012 +0000

    second commit

commit 9ad2aa8fae1cbd844f34da2701e80d2c6e39320e
Author: Paolo Antinori <pantinor''>
Date:   Sun Dec 16 21:05:23 2012 +0000

    first commit

commit f331e52f41a862d727869b52e2e42787aa4cb57f
Author: Paolo Antinori <pantinor''>
Date:   Sun Dec 16 20:20:15 2012 +0000

    added new file

At this point we want to move the last 2 commits to a different branch:

git branch unstable

We created a new branch, but we haven't switched to it. The newly created branch obviously has everything that was present at the time of its creation, i.e. the 2 commits that we want to remove. So we can revert our current branch to a previous commit, completely discarding the recent ones, which will remain available on the unstable branch.
To see which commit we want to revert to:

git log

We need to read the hash associated with that commit to be able to perform our rollback (hard reset):

pantinor@pantinor gittest$ git reset --hard f331e52f41a862d727869b52e2e42787aa4cb57f
HEAD is now at f331e52 added new file

If you now execute git status or git log, you will see no trace of the unstable commits; they are instead accessible on the unstable branch. On the current branch:

pantinor@pantinor gittest$ cat test.txt
text
pantinor@pantinor gittest$ git log
commit f331e52f41a862d727869b52e2e42787aa4cb57f
Author: Paolo Antinori <pantinor''>
Date:   Sun Dec 16 20:20:15 2012 +0000

    added new file

On the unstable branch:

pantinor@pantinor gittest$ git checkout unstable
Switched to branch 'unstable'
pantinor@pantinor gittest$ cat test.txt
text
modification1
modification2
pantinor@pantinor gittest$ git log
commit 7005a92a3ceee37255dc7143239d55c7c3467551
Author: Paolo Antinori <pantinor''>
Date:   Sun Dec 16 21:05:48 2012 +0000

    second commit

commit 9ad2aa8fae1cbd844f34da2701e80d2c6e39320e
Author: Paolo Antinori <pantinor''>
Date:   Sun Dec 16 21:05:23 2012 +0000

    first commit

commit f331e52f41a862d727869b52e2e42787aa4cb57f
Author: Paolo Antinori <pantinor''>
Date:   Sun Dec 16 20:20:15 2012 +0000

    added new file

  Reference: Git – let's make errors (and learn how to revert them) from our JCG partner Paolo Antinori at the Someday Never Comes blog.
Java Code Geeks and all content copyright © 2010-2015, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.