

What I Learnt about JavaFX Today

In case you haven't heard, JavaFX 2 is the new desktop / web / client framework for Java. It's had a considerable overhaul since JavaFX 1 (which was frankly not that impressive). Out has gone the custom scripting language; instead you write standard Java plus an XML-based language for the actual UI presentation. So today, a friend and I got together at one of our places to teach ourselves a bit of JavaFX. Here's what we learned, starting with some of the yak-shaving we had to do:

- First of all, install the JavaFX developer preview – get it here. You have to unzip it, and place the resulting directory somewhere sensible, chown'd to root. I put it in /usr/local/javafx-sdk2.1.0-beta/
- Next, you'll want an IDE to go with that. NetBeans is the IDE which is the most advanced and usable with JavaFX 2; you want NetBeans 7.1 RC2.
- To get this to install on a Mac, you need JavaForMacOSX10.7.dmg – no lower version of official Apple Java will do, and an OpenJDK build won't work either (even if it's the correct version or higher). Once it's installed, NetBeans will work fine with other JREs (I was mostly running it against the Java 7 Developer Preview).
- To start new JavaFX projects, you need to tell NetBeans where to find JavaFX. For this, you need to create a new JavaSE platform profile and add the JavaFX dependencies in manually.

Once it was all installed, we started working with JavaFX properly. Our project for the day was to try to replicate some of Victor Grazi's concurrency animations in JavaFX – both to teach ourselves the JavaFX technology, and to create some teaching tools as outputs. JavaFX uses Application as the main class to subclass. The API docs are here. If you've done any Flex development, JavaFX will seem very natural.
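For a flavour of that XML-based UI language, here is a rough sketch of what a minimal FXML view might look like (the class and member names here are made up for illustration; the binding mechanics are explained below):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?import javafx.scene.control.*?>
<?import javafx.scene.layout.*?>

<!-- fx:controller names the Controller class for this View -->
<VBox xmlns:fx="http://javafx.com/fxml" fx:controller="demo.FutureController">
    <children>
        <!-- fx:id binds this Label to an @FXML-annotated member of the controller -->
        <Label fx:id="statusLabel" text="Waiting..."/>
        <!-- #isFutureDone names the controller method invoked on button press -->
        <Button text="Check future" onAction="#isFutureDone"/>
    </children>
</VBox>
```

The controller class then declares a matching @FXML-annotated statusLabel field and an isFutureDone method, and the framework wires them together at load time.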
For example:

- The FXML file provides the UI and layout.
- The top-level FXML element has an fx:controller attribute, which defines the Controller for this View.
- FXML elements are bound to members of the controller class which have been annotated with the @FXML annotation. The fx:id property defines the name of the member that is being bound to the FXML element.
- Binding also occurs to methods. E.g. buttons use an onAction handler, like this: onAction="#isFutureDone". The #methodName syntax says which method should be called when the button is pressed.

From this it's very easy to get started building up a basic application. Some things that we found:

- The UI thread can be quite easy to tie up. Don't ever call a blocking method directly from the Controller object, as triggering this code path on the UI thread will cause the display to hang.
- Be careful of exception swallowing.
- If you have a method in an object which is updating a UI element, but which is not annotated with @FXML, then you seem to need to call requestLayout() on the UI element after updating it. We're not sure we got to the bottom of why – please enlighten us if you know.
- The framework seems to use custom classloading to transform the FXML file into a "scene graph" of objects, seemingly a bit like how Spring does it.

On the whole, we were quite impressed with our short hack session. The APIs seem clean, and the overall design of the framework seems sound. There were a few stability issues, but this is bleeding-edge tech on Mac – both the JDK and the JavaFX runtime are Developer Previews. We'll definitely be back to do some more with JavaFX, and look forward to seeing it mature and become a fully-supported OSS framework for client development in Java.

Reference: What I Learnt about JavaFX Today from our JCG partner Martijn Verburg at the Java 7 Developer Blog.
Related Articles:
- Migrating from JavaFX 1.3 to JavaFX 2.0
- JavaFX 2.0 beta sample application and after thoughts
- JavaOne is Rebuilding Momentum
- Sometimes in Java, One Layout Manager Is Not Enough

Simplifying RESTful Search

Overview

The REST architectural pattern is based around two basic principles:

- Resources as URLs: A resource is something like an entity or a noun in modelling lingo. Anything on the web is identified as a resource, and each unique resource is identified by a unique URL.
- Operations as HTTP methods: REST leverages the existing HTTP methods, particularly GET, PUT, POST and DELETE, which map to a resource's read, create, modify and removal operations respectively.

Any action performed by a client over HTTP contains a URL and an HTTP method. The URL represents the resource, and the HTTP method represents the action which needs to be performed on that resource. Being a broad architectural style, REST is always open to different interpretations. The ambiguity is exacerbated by the fact that there aren't nearly enough HTTP methods to support common operations. One of the most common examples is the lack of a 'search' method. Search is one of the most extensively used features across different applications, yet there has never been a standard for implementing it, so different people tend to design search in different ways. Given that REST aims to unify service architecture, any ambiguity must be seen as weakening the argument for REST. In the rest of this document, we shall discuss how search over REST can be simplified. We are not aiming at developing standards for RESTful search; rather, we shall discuss how this problem can be approached.

Search Requirements

Search, being the most widely used feature across web applications, tends to support a similar feature set in each application.
Below is a list of some common constituents of search features:

- Search based on one or more criteria at a time: search red-coloured cars of type hatchback – color=red && type=hatchback
- Relational and conditional operator support: search red or black cars with mileage greater than 10 – color=red|black && mileage > 10
- Wild card search: search cars from a company whose name starts with M – company=M*
- Pagination: list all cars, but fetch 100 results at a time – upperLimit=200 && lowerLimit=101
- Range searches: get all the cars launched between 2000 and 2010 – launch year between (2000, 2010)

When we support search with such features, the search interface design itself becomes complex. And when implemented in a REST framework, meeting all these requirements (while still conforming to REST!) is challenging. Coming back to the basic REST principles, we are now left with two questions:

- Which HTTP method to use for "search"?
- How to create an effective resource URL for search – query parameters versus embedded URLs, and how to model the filter criteria?

HTTP Method Selection

Effectively, REST categorizes operations by their nature and associates well-defined semantics with these categories. The idempotent operations are GET, PUT and DELETE (GET for read-only, PUT for update, DELETE for removal), while the POST method is used for non-idempotent procedures like create. By definition, search is a read-only operation which requests a collection of resources filtered on some criteria, so the GET HTTP method is the obvious choice. However, with GET we are constrained with respect to URL size if we add complex criteria to the URL.

URL Representation

Let's discuss this using an example: a user wishes to search for four-doored sedan cars of blue colour; what should the resource URL for this request look like?
The two URLs below are syntactically different but semantically the same:

/cars/?color=blue&type=sedan&doors=4
/cars/color:blue/type:sedan/doors:4

Both of the above URLs conform to the RESTful way of representing a resource query, but they are represented differently. The first uses URL query criteria to add the filtering details, while the latter goes for an embedded-URL approach. The embedded-URL approach is more readable and can take advantage of the native caching mechanisms that exist on the web server for HTTP traffic. But this approach forces the user to provide the parameters in a specific order; wrong parameter positions will cause an error or unwanted behaviour. The two URLs below look the same but may not give you the same results:

/cars/color:red/type:sedan
/cars/type:sedan/color:red

Also, since there is no standard for embedding criteria, people tend to devise their own representations. So we favour the query-criteria approach over the embedded-URL approach, even though the representation is a bit more complex and less readable.

Modelling Filter Criteria

A search-results page is fundamentally RESTful even though its URL identifies a query. The URL shall be able to incorporate SQL-like elements. While SQL is meant to filter data fetched from relational data, the new modelling language shall be able to filter data from a hierarchical set of resources. This language shall help in devising a mechanism to communicate complex search requirements over URLs. Two such styles are discussed in detail below.

Feed Item Query Language (FIQL)

The Feed Item Query Language (FIQL, pronounced "fickle") is a simple but flexible, URI-friendly syntax for expressing filters across the entries in a syndicated feed. These filter expressions can be mapped onto any RESTful service and can help in modelling complex filters.
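Going back to the query-criteria versus embedded-URL comparison above, the order-independence of query parameters can be illustrated with a small plain-Java sketch (the class and method names here are our own, purely for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class QueryCriteria {
    // Parse "color=blue&type=sedan&doors=4" into a map. Because lookup is
    // by key, "?color=red&type=sedan" and "?type=sedan&color=red" yield the
    // same criteria -- unlike embedded path segments, where order matters.
    static Map<String, String> parse(String query) {
        Map<String, String> criteria = new HashMap<>();
        for (String pair : query.split("&")) {
            String[] kv = pair.split("=", 2);
            criteria.put(kv[0], kv.length > 1 ? kv[1] : "");
        }
        return criteria;
    }

    public static void main(String[] args) {
        Map<String, String> a = parse("color=red&type=sedan");
        Map<String, String> b = parse("type=sedan&color=red");
        System.out.println(a.equals(b)); // true: query criteria are order-independent
    }
}
```

A real service would of course use its framework's query-parameter binding rather than hand-rolled parsing; the point is only that the criteria form an unordered set.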
Below are some samples of such web URLs against their respective SQL:

- select * from actors where firstname='PENELOPE' and lastname='GUINESS' → /actors?_s=firstname==PENELOPE;lastname==GUINESS
- select * from actors where lastname like 'PEN%' → /actors?_s=lastname==PEN*
- select * from films where filmid=1 and rentalduration <> 0 → /films?_s=filmid==1;rentalduration!=0
- select * from films where filmid >= 995 → /films?_s=filmid=ge=995
- select * from films where releasedate <= '27/05/2005' → /film?_s=releasedate=le=2005-05-27T00:00:00.000%2B00:00

Resource Query Language (RQL)

Resource Query Language (RQL) defines a syntactically simple query language for querying and retrieving resources. RQL is designed to be URI-friendly, particularly as the query component of a URI, and highly extensible. RQL is a superset of HTML's URL encoding of form values, and a superset of the Feed Item Query Language (FIQL). RQL basically consists of a set of nestable named operators, each of which has a set of arguments and operates on a collection of resources.

Case study: Apache CXF advanced search features

To support advanced search capabilities, Apache CXF introduced FIQL support in its JAX-RS implementation with the 2.3.0 release. With this feature, users can express complex search expressions using a URI. Here is how to use it: to work with FIQL queries, a SearchContext needs to be injected into the application code and used to retrieve a SearchCondition representing the current FIQL query. This SearchCondition can then be used in a number of ways to find the matching data.
@Path("books") public class Books { private Map books; @Context private SearchContext context;@GET public List getBook() {SearchCondition sc = searchContext.getCondition(Book.class); //SearchCondition is method can also be used to build a list of// matching beans iterate over all the values in the books map and // return a collection of         matching beans List found = sc.findAll(books.values()); return found; } } SearchCondition can also be used to get to all the search requirements (originally expressed in FIQL) and do some manual comparison against the local data. For example, SearchCondition provides a utility toSQL(String tableName, String… columnNames) method which internally introspects all the search expressions constituting a current query and converts them into an SQL expression: // find all conditions with names starting from 'ami' // and levels greater than 10 : // ?_s="name==ami*;level=gt=10" SearchCondition sc = searchContext.getCondition(Book.class); assertEquals("SELECT * FROM table WHERE name LIKE 'ami%' AND level > '10'", sq.toSQL("table")); Conclusion Data querying is a critical component of most applications. With the advance of rich client-driven Ajax applications and document oriented databases, new querying techniques are needed; these techniques must be simple but extensible, designed to work within URIs and query for collections of resources. The NoSQL movement is opening the way for a more modular approach to databases, and separating out modelling, validation, and querying concerns from storage concerns, but we need new querying approaches to match more modern architectural design.   Reference: Guava’s Strings Class from our JCG partner Dustin Marx at the Inspired by Actual Events blog. ...

ZK Web Framework Thoughts

I've been asked several times to present some of my opinions about ZK. So, based on my experience of 4 years as a ZK user, here are some thoughts.

Overall developer experience, the community and documentation

"It just works"

Most of the stuff that ZK offers works very well, and the functionality is usually very intuitive to use if you have developed any desktop Java applications before. In 2007 I did a comparison of RIA technologies that included Echo2, ZK, GWT, OpenLaszlo and Flex. Echo2 and OpenLaszlo felt incomplete and buggy and didn't seem to have proper Maven artifacts anywhere. GWT seemed more of a technical experiment than a good platform to build on. Flex was dropped because some important Maven artifacts were missing and Flash was an unrealistic requirement for the application. On the other hand, ZK felt the most "natural" and I was able to get productive with it quickly. During my 4-year journey with ZK, I've had plenty of those "wow" moments as I've learned more and more of ZK and improved my architectural understanding of the framework. Nowadays I've got a pretty good understanding of what in ZK works, what doesn't, and what has problems and what doesn't. But still, after gaining all this good and bad insight, I consider ZK to be a very impressive product out of the box. The downside of this is of course the fact that the framework hides a lot of things from newcomers in order to be easy to use, and some of these things will bite you later on, especially if your application has lots of users.

It's very, very, very flexible

ZK is very flexible and has plenty of integrations. Do you want to use declarative markup to build component trees? Use ZUL files. Do you want to stick to plain Java? Use richlets. You can also integrate JSP, JSF and Spring, and use plenty of languages in zscript. The core framework is also pretty flexible, and you can override a lot of stuff if you run into problems.
The downside is that there are very many ways of doing things correctly, and even more ways of screwing up. Flexibility itself is not a negative point, but I think that the ZK documentation doesn't guide users enough towards the best practices of ZK. What are the best practices anyway? Many tutorials use zscript, but the docs also recommend avoiding it for performance reasons.

The forum is quite active

I think that the ZK forum is one of the best places to learn about ZK. It's pretty active and the threads vary from beginner level to deep technical stuff. I read the forums myself almost every day and sometimes help people with their problems. There's one thing that troubles me a bit: the English in the forums usually isn't very good, and people often ask overly broad questions. I know it's not fair to criticize the writing of non-native English speakers, especially when I'm not a native speaker myself. Regardless, I think that such a barrier exists. For example, take 5 random threads from the ZK forum and the Spring Web forum. The threads in the Spring forums are typically more detailed and focused, and people clearly spend some time formulating good and detailed questions, instead of the "I'm a newbie and I need to create application x with tons of features, please tell me how to do everything" type of threads you see in the ZK forums. You'll find that you have to spend a bit more time in the ZK forum in order to understand the threads. It's not anybody's fault, nor a bad thing; this is just an observation. Unfortunately for me it means that some of the limited time I have for the ZK community is spent just trying to understand what people are saying. Usually I answer a thread only when I know the answer right away, or when the thread concerns some deep technical stuff.

There's plenty of documentation

In the past the ZK documentation was scattered and out of date, and some of the more important stuff was completely missing.
In recent years the docs have improved a lot, and there are now separate comprehensive references for ZK configuration, client-side ZK, and styling. I think the documentation is very good today, and most basic questions can easily be answered by reading the docs.

As I mentioned above, ZK has a tendency to "just work". The overall technical quality is impressive and on par with most Java web frameworks, but I believe there are some parts of ZK that are less impressive.

Stuck on Java 1.4

ZK is built with Java 1.4, which greatly limits the flexibility of its API and its internal code quality. Negative effects on ZK internal code:

- ThreadLocals are not removed with remove() (calling set(null) does prevent leaking the contained object, but does not properly remove the ThreadLocal)!
- Lots of custom synchronization code where simple java.util.concurrent data structures or objects would work (ConcurrentHashMap, Semaphore, Atomic*, etc.)
- StringBuffer is used where StringBuilder would be appropriate

No annotations

Personally I'm not a fan of annotation-heavy frameworks, because annotations are an extralinguistic feature and you usually end up with annotations whose string-based values have no type safety. However, I know that some people would be overjoyed to have an API based on them.

No enums

There are many places in the ZK API where proper enums would be much better than the hacks that are used at the moment. The worst offender is Messagebox. Just look at this signature:

public static int show(String message, String title, int buttons, java.lang.String icon, int focus)

Ugh.. the magic integers remind me of SWT (which is a great library with an awful API). Let's imagine an alternative version with enums and generics:

public static Messagebox.Button show(String message, String title, Set<Messagebox.Button> buttons, Messagebox.Icon icon, Messagebox.Button focus)

Much, much better and more typesafe. No more bitwise-OR magic. I could code this into ZK in 10 minutes if it used Java 1.5.
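To make the imagined enum-based signature concrete, here is a small self-contained sketch (this is our own hypothetical class, not ZK's actual API; the dialog rendering is stubbed out):

```java
import java.util.EnumSet;
import java.util.Set;

// A hypothetical enum-based Messagebox API, as imagined above.
public class MessageboxSketch {
    public enum Button { OK, CANCEL, YES, NO }
    public enum Icon { QUESTION, EXCLAMATION, INFORMATION, ERROR, NONE }

    // Type-safe replacement for int flags combined with bitwise OR.
    public static Button show(String message, String title,
                              Set<Button> buttons, Icon icon, Button focus) {
        // A real implementation would render the dialog and wait for input;
        // here we just validate the arguments and return the focused button.
        if (!buttons.contains(focus)) {
            throw new IllegalArgumentException("focus must be one of the shown buttons");
        }
        return focus;
    }

    public static void main(String[] args) {
        // EnumSet.of(...) replaces the YES | NO magic-int combination,
        // and the compiler rejects anything that isn't a Button.
        Button pressed = show("Delete file?", "Confirm",
                EnumSet.of(Button.YES, Button.NO), Icon.QUESTION, Button.YES);
        System.out.println(pressed);
    }
}
```

Note how the invalid combinations that magic ints silently allow (e.g. focusing a button that isn't shown) can now be rejected explicitly.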
No generics

This is the worst part of being stuck on Java 1.4. I'll just list some of the places where I'd like to see generics.

Collection values in API signatures. Example in org.zkoss.zk.ui.util.Initiator:

    void doInit(Page page, Map args);
vs
    void doInit(Page page, Map<String, Object> args);

Example in org.zkoss.zk.ui.Component:

    List getChildren();
vs
    List<Component> getChildren();

Collection-like classes. Example in ListModel:

    public interface ListModel {
        ...
        Object getElementAt(int index);
        ...
    }
vs
    public interface ListModel<T> {
        ...
        T getElementAt(int index);
        ...
    }

All ListModel* classes should also be generic (most extend java.util.Collection).

org.zkoss.zk.ui.event.EventListener:

    public interface EventListener {
        public void onEvent(Event event);
    }
vs
    public interface EventListener<T extends Event> {
        public void onEvent(T event);
    }

org.zkoss.zk.ui.util.GenericAutowireComposer:

    public class GenericAutowireComposer {
        protected Component self;
        ...
    }
vs
    public class GenericAutowireComposer<T extends Component> {
        protected T self;
        ...
    }

All *Renderer classes. Example in org.zkoss.zul.RowRenderer:

    public interface RowRenderer {
        void render(Row row, Object data);
    }
vs
    public interface RowRenderer<T> {
        void render(Row row, T data);
    }

Unimpressive server push implementations

The default PollingServerPush has latency and will absolutely kill your application server if there are many active users. CometServerPush is better, but it does not use non-blocking IO and will block servlet threads in your servlet container. Let's put this into perspective: Tomcat 7.0's default configuration sets the connector's max threads to 200. This means that if you have 200 comet-enabled desktops, Tomcat will stop responding to other requests, because all the threads are in use by comet. If the implementation used Servlet 3.0 or container-specific async APIs instead, you could run Tomcat even with one thread. It would of course be slow, but it would not stop working!
Also, CometServerPush requires ZK EE, so regular users are stuck with PollingServerPush. I'd say that's a pretty big limitation considering how server push is marketed. However, it's not surprising: proper non-blocking comet is hard to implement and requires non-blocking components in all parts of the pathway from the browser to the servlet code.

Zscript

I don't like zscript. It might have been a good feature many years ago, but I believe that today it should not be used at all. Why, oh why would someone want to replace typesafe compiled Java code with non-typechecked zscript mixed into ZUL templates?

- "I can use Python/Ruby/…". This might be a valid point for some people, but you'll end up with unmaintainable code mangled inside ZUL templates.
- "Changes are visible when you save the file". True, but I would never sacrifice so much just for this feature. And besides, you can get a similar effect with JRebel.

So, if you put "Java code" (= BeanShell code) in zscript, you might want to rethink that.

Reliance on reflection

Many useful features rely on reflection, which limits what the compiler can check for you. This is a very typical thing in many Java libraries/frameworks, so it's not really ZK-specific. As a Scala user I can see how the limitations of Java have guided most frameworks onto the path of reflection/annotations. Reflection cannot always be avoided, but I think it's a bad sign if most of the useful features rely on it. Here are some features in ZK that use reflection:

- Any kind of event listening that does not use component.addEventListener.
This includes any classes that extend GenericEventListener (such as all ZK-provided Composer classes except MultiComposer).
- Data binding
- EL expressions in ZUL templates

Reference: Thoughts about the ZK Web Framework: Overall experience & Thoughts about the ZK Web Framework: Technical stuff from our JCG partner Joonas Javanainen at the Jawsy Solutions technical blog.

Related Articles:
- Getting Started with SmartGWT for awesome GWT interfaces
- Advanced SmartGWT Tutorial, Part 1
- Securing GWT apps with Spring Security
- GWT EJB3 Maven JBoss 5.1 integration tutorial
- Spring MVC3 Hibernate CRUD Sample Application
- Spring MVC Development – Quick Tutorial

Top 10 JavaCodeGeeks posts for 2011

2011 is coming to its end, and like last year, we have created a compilation of the Top 10 Java Code Geeks posts for this year. This compilation serves as a reminder of our best moments for the year that is ending. The posts ranking was performed based on the absolute number of page views per post, not necessarily unique. It includes only articles published in 2011. So, let's see, in ascending order, the top posts for 2011. 10) The top 9+7 things every programmer or architect should know This article comprises a nice compilation of thoughts and topics about software development from very experienced authors. Hints and tips on things that programmers and architects should know. Closely related to an older article of ours, Things Every Programmer Should Know. 9) Hate Java? You're fighting the wrong battle. An article that explains why hating Java is futile. The author provides some reasons why Java is looked down on by some developers nowadays and claims that those "accusations" are quite irrelevant. 8) Java Fork/Join for Parallel Programming This tutorial is a nice introduction to parallel programming with Java's Fork/Join framework, published before the launch of Java 7 (which integrates those features into the JDK). It provides some examples of how to use the Fork/Join framework, which is designed to make divide-and-conquer algorithms easy to parallelize. 7) Android Quick Preferences Tutorial An Android development tutorial. It explains how to use the native preferences framework in order to show, save and manipulate a user's preferences very easily. Don't forget to check out the Android snippets section in our Java Examples and Code Snippets site. 6) RESTful Web Services with RESTeasy JAX-RS on Tomcat 7 – Eclipse and Maven project The use of RESTful web services is constantly rising, and this tutorial explains how to implement RESTful services using JBoss RESTeasy. The application is built with Maven and gets deployed on a Tomcat instance.
A nice tutorial to get you started with REST. 5) Spring MVC Development – Quick Tutorial This tutorial will kickstart your web applications, allowing you to leverage Spring's web framework. Spring MVC enables easy web application development with a framework based on the Model View Controller (MVC) architecture pattern. 4) Android JSON Parsing with Gson Tutorial The existence of this tutorial in the Top 10 was quite a surprise to me, especially since it rose to such a high position. It shows that Android developers look for efficient ways to handle JSON data, and Google Gson is an elegant solution for this. 3) 10 Tips for Proper Application Logging A compilation of tips on logging and how to properly use it in your applications, highly recommended. The topic of logging is enormous, so make sure to also check out The Java Logging Mess and Configure LogBack Logging with Spring. 2) Android Google Maps Tutorial Another Android hit. This tutorial shows how to integrate Google Maps into your Android application. The well-established Google Maps API is used under the hood in order to bring the power of Google Maps to your Android applications. After this, also check out Android Location Based Services Application. 1) Funny Source Code Comments The most popular Java Code Geeks post for 2011 and, all in all, second only to our all-time classic GWT 2 Spring 3 JPA 2 Hibernate 3.5 Tutorial. It is a collection of funny source code comments, provided by developers all over the world. Take a look at it; it could definitely make your day. That's it guys. Our top posts for 2011. I hope you have enjoyed our blog during the past year and that you will continue to provide your support in the year to come. Happy new year everyone! From the whole Java Code Geeks team, our best wishes! Ilias Tsagklis

Configure Java EE applications or “Putting Bien into practice”

A lot has been said about application configuration in the past. I don't know who kicked off the debate, but the most fundamental reading (with a look at the future Java EE 7 and beyond) is Antonio Goncalves' posting [Debate] – And what about configuration in Java EE 7. Fact is, with vanilla Java EE we all do application configuration day by day, without having any special mechanism in place. Having seen Adam's latest post from yesterday, I would like to share a slight add-on to it, which I feel could fit most of the projects out there.

Why this post?

The basics shown by Adam are really smart. You simply @Inject int yourConfigVariable; and you are done. You don't have to care about properties or other configuration classes. But looking into it, you see that you somehow need to fill your configuration from somewhere. And looking back at Antonio's post, you see that you have a lot of options for doing this. The one we are most comfortable with is probably Java's Properties mechanism. Using this in combination with the code presented by Adam, you end up having a Configuration.properties file with an endless list of single-word keys. That's not what I would call maintainable. So basically this is why the post has the title "Putting Bien into practice" ..oO(sorry for that, Adam!) :-) Here are my approaches to the problem.

Fill your configuration from a properties file

The most basic part is to add a Configuration.properties file to your application (default package). Now we are going to modify the configuration holder a bit to make it a Properties type, and modify Adam's fetchConfiguration() method to load it.
private Properties configData;

@PostConstruct
public void fetchConfiguration() {
    String fileName = "Configuration.properties";
    configData = loadPropertiesFromClasspath(fileName);
}

/**
 * Load properties file from classpath with Java 7 :-)
 * @param fileName
 * @return properties
 */
public static Properties loadPropertiesFromClasspath(String fileName) {
    Properties props = new Properties();
    try (InputStream in = Thread.currentThread().getContextClassLoader()
            .getResourceAsStream(fileName)) {
        if (in != null) {
            props.load(in);
        }
    } catch (IOException ioe) {
        log.debug("Can't load properties.", ioe);
    }
    return props;
}

Now you have to modify the @Produces methods accordingly. I'm only showing the getString() method here to show you the concept:

/**
 * Get a String property
 * @param point
 * @return String
 */
@Produces
public String getString(InjectionPoint point) {
    String propertyPath = point.getMember().getDeclaringClass().getName() + ".";
    String propertyName = point.getMember().getName();
    String propertyValue = configData.getProperty(propertyPath + propertyName);
    return (propertyValue == null) ? "" : propertyValue;
}

For convenience reasons I added the name of the declaring class as a propertyPath, in order to have a bit more order within your property file. You use the producer methods as Adam has shown:

package net.eisele.configuration;

public class HitsFlushTimer {
    @Inject
    private String hitsFlushRate;
}

In this case you end up accessing a property with the key net.eisele.configuration.HitsFlushTimer.hitsFlushRate in your Configuration.properties file. One quick warning: if it happens that you have to package separate ejb and war modules within an ear, you probably need the javax.annotation.security.PermitAll annotation on your Configuration singleton.

Then you end up with lots of duplicates

That's probably true. If you have the same configuration over and over again (e.g. httpProxy), this would force you to have the same value under different keys in your property file.
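Before tackling that duplication problem, here is a tiny standalone sketch of the class-qualified key lookup the producer above performs (plain Java, no CDI container; the key and value are taken from the example in the text):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class ConfigKeyDemo {
    public static void main(String[] args) throws IOException {
        // Simulated Configuration.properties content: the key is prefixed
        // with the declaring class name, exactly as the producer builds it.
        Properties configData = new Properties();
        configData.load(new StringReader(
                "net.eisele.configuration.HitsFlushTimer.hitsFlushRate=42\n"));

        // What the producer computes for @Inject String hitsFlushRate
        // inside HitsFlushTimer:
        String propertyPath = "net.eisele.configuration.HitsFlushTimer" + ".";
        String propertyName = "hitsFlushRate";
        String propertyValue = configData.getProperty(propertyPath + propertyName);
        System.out.println((propertyValue == null) ? "" : propertyValue); // 42
    }
}
```

In the container, the class and field names come from InjectionPoint.getMember() instead of being hard-coded, but the lookup itself is this simple.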
The solution seems straightforward: we need our own qualifier for that. Let's go:

@Retention(RUNTIME)
@Target({FIELD, METHOD})
@Qualifier
public @interface AppProperty {
    @Nonbinding
    public String value();
}

Now we have our own qualifier. Next is to change the @Produces method accordingly:

@Produces
@AppProperty("")
public String getString(InjectionPoint point) {
    String property = point.getAnnotated().getAnnotation(AppProperty.class).value();
    String valueForFieldName = configData.getProperty(property);
    return (valueForFieldName == null) ? "" : valueForFieldName;
}

That's it. Now you can use something like this wherever you like:

@Inject
@AppProperty("net.eisele.configuration.test2")
String test2;

I know, this isn't half as elegant as Adam's single @Inject annotation. But you don't have to guess a lot to see what's happening and where your value is coming from. I consider this a big pro in projects with more than one developer.

Yeah. Still not very maintainable. Ok. I know. You are still talking about refactoring property names, right? What is left to do? You could think about using a CKey enum which encapsulates all your property keys, and use it instead of the plain keys. But I would prefer to simply use the plain String keys within my code. Happy configuring! How are you configuring your applications? Let me know! Happy to receive comments :)

Reference: Configure Java EE applications or "Putting Bien into practice" from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog.

Related Articles:
- From Spring to Java EE 6
- Configuration Management in Java EE
- Java EE Past, Present, & Cloud 7
- Java EE6 Events: A lightweight alternative to JMS
- Java EE6 Decorators: Decorating classes at injection time

Android Game Postmortem – ArkDroid Development

Hello guys! As you might have noticed, we have recently delved into the world of mobile game programming. This happened after creating JCG Studios, an independent mobile game studio based in Athens, Greece. JCG here stands for Just Cool Games; it is our other acronym, besides Java Code Geeks of course. Our platform of choice is Android, and our first attempt at developing a game from scratch resulted in the creation of ArkDroid. As its name suggests, ArkDroid is an Arkanoid clone for Android. It is what we would like to call "Brick Breaker Evolved". ArkDroid features a cinematic story line, appealing visual effects, deep-space music themes, campaign and free-play modes, and a sophisticated weaponry system, among others. We would love it if you took a look at it and let us know what you think about it. Having finished the development of ArkDroid a while ago, I would like to share some of the experiences I gathered while being a member of the development team. Consider this a basic game postmortem, similar to the ones you might have read if you have some experience in game programming. Let's begin with a short overview of the game's creation. The development team consisted of two people: Byron and me. The development lasted about 4 months, each of us dedicating about 15 hours per week during that period. It was actually a side project running alongside our regular jobs and Java Code Geeks. We followed the usual approach of creating both a full version and a lite version. Following are some insights regarding game programming in general, and more specifically programming for the Android platform.

R.T.F.M. (Read The Fine Manual)

Do yourself a favor and make sure you have a good grasp of the Android fundamentals before embarking on the journey of making an Android game. Check out the developer's guide and make sure to bookmark the Javadoc pages. These are also available in the Android SDK for offline browsing.
It would be quite easy to get started if you have previous experience in making games, but make sure you dedicate the appropriate time before delving into the magic world of Android game programming. Our Android tutorials could be of help for this. We also recently introduced Android Game Development Tutorials. Collaborate Efficiently Collaboration is critical when a distributed team is involved. Don’t worry, nothing fancy is required. For our online communication, we used Skype and for loose “project management” we played with Google Docs. A text document and a spreadsheet should be more than enough for two people to track bugs and assign new features. Regarding the code itself, a versioning system goes without saying. We decided to go with good old Subversion, since this is what we have the most experience with, but any modern source management tool would be fine. There are also a bunch of sites that provide private repositories, so do your search. Rev up your engine Resist falling victim to the “Not Invented Here” syndrome and embrace the power that a game engine can give you. When starting development, we evaluated some of the available Android game engines and decided to go with the very nice AndEngine. AndEngine is quite mature, provides an abundance of features and shortcuts, and I guarantee that it will help you kick-start your project in no time. It should definitely be noted that libgdx was a close second; we skipped it because it is a bit low level for us. We decided in favour of the lower development time that AndEngine ensures. That being said, if performance is critical to your game’s success, libgdx is a no-brainer. As an added bonus, libgdx is cross-platform, meaning that it can be used to write both desktop and mobile (Android) games. Know thy engine As stated above, AndEngine was used to provide the core framework for building our game. 
Before writing a single line of code however, we made sure to go through all the available tutorials. Unfortunately, there are no Javadocs (weird), so you will have to rely on the examples and the public forums. And hey, you have the source code available, right? Hit that device One of the biggest issues we had during development was the ridiculously slow time that applications take to deploy to the SDK’s emulator. I mean come on, I have a cutting edge laptop and it takes forever (ok, about a minute or so) to redeploy the game on it. For this reason, Byron and I made sure to purchase real devices for ourselves and perform the majority of debugging and testing on them instead of the emulator. Test, test, test And yes, as you might have guessed, I mean test on the real device. You have to test the game in every possible way. Act as a non-experienced mobile user. Provide random input and witness how your game reacts to it. Make sure that all corners are covered. Be warned: due to the nature of games, it might be difficult to even reproduce a bug, let alone debug the game and fix it. Cover the spectrum Android fragmentation is a real thing and it can be a major PITA. Different resolutions, screen sizes, CPU powers, and the list goes on. So, if you are thinking of Android game programming in a more professional way, you will have to get more than one device in order to cover all bases, from low-end to high-end devices. GC is your enemy Good games provide, first of all, a smooth experience to their players. You have to avoid hiccups during game play, and as you might have guessed, hiccups are caused by garbage collections in Android. It is really a “stop the world” procedure which unfortunately gets noticed by the users. The best advice on this is to make absolutely sure that you do not create unnecessary objects. Your game actually consists of a main loop that is executed multiple times per second, which makes things worse. 
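The "no unnecessary objects in the main loop" advice is usually implemented with object pooling: allocate up front, reuse afterwards. A minimal sketch follows; the Bullet class and its fields are made up for illustration, not taken from ArkDroid or AndEngine.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical game entity; in a real game this would be a sprite, particle, etc.
class Bullet {
    float x, y;
    void reset() { x = 0; y = 0; }
}

// Hands out recycled instances so the main loop allocates nothing once it has
// warmed up, which keeps the garbage collector quiet during gameplay.
class BulletPool {
    private final Deque<Bullet> free = new ArrayDeque<Bullet>();

    Bullet obtain() {
        Bullet b = free.poll();
        return (b != null) ? b : new Bullet(); // allocate only on a pool miss
    }

    void recycle(Bullet b) {
        b.reset(); // scrub state before reuse
        free.push(b);
    }
}

public class PoolDemo {
    public static void main(String[] args) {
        BulletPool pool = new BulletPool();
        Bullet first = pool.obtain();   // allocated
        pool.recycle(first);
        Bullet second = pool.obtain();  // reused: same instance, no allocation
        System.out.println(first == second);
    }
}
```

Most Android game engines ship a pool class along these lines; the point is simply that `obtain()`/`recycle()` replace `new` inside the loop.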
The resources are rather scarce in the mobile world, so proceed with great caution. Be Patient You have probably heard all those stories about creating a “game” in a weekend or in a “few hours”. Yeah, right… Let me warn you, creating a non-trivial game requires a great amount of effort and, above all, patience. Assets? What is this? Ok, Byron and I are programmers at heart. Hardcore. Creating “art” sounds a bit bizarre to us. I believe that the term programmer’s art is a very apt and representative one. If you can afford to hire someone to create your graphics and sound, do it without thinking twice. Otherwise, you will have to make the most out of the available tools. We used GIMP for our image editing needs and I think that we actually managed to create some decent art work (mainly Byron). Beta testers wanted When near the end of development, make sure to beta test your game. Find some friends or relatives and let them experiment with your game. Ideally, you should be around when they play with it. Watch them and see what they liked, what troubled them and what felt weird to them. This feedback is invaluable, trust me on that. We were lucky enough to have some great beta testers who helped us track some bugs and gave helpful advice (thank you Pelagia, Eleftheria, Ben). Just ship It is quite common among developers to find unfinished projects. Applications that could have changed the world and games that could have written history, but instead they gather dust, forgotten inside old hard disks. Fight all excuses and take the dive: - I cannot ship yet, it is not done… - Just ship… - Hey, this function does not perform as well… - Just ship! - But I haven’t finished feature X… - Just ship!! - Yes, but my competition has feature Y… - Just ship!! - With only 100 levels?… - Just ship!!! Just ship the thing, get your users’ feedback and build on that. Don’t quit your day job just yet This is very common advice but I would like to repeat it here. 
Don’t rush into making quick decisions like quitting your day job. The mobile market is a very strange beast and while it could make you a millionaire, it could also leave you scratching your head as to why your super-duper game does not make a damned sale. Game development is life changing I personally come from a hardcore enterprise programming background (JEE to the max). You probably know the drill: persist some data, commit that transaction, generate some reports, move the data again, etc. This can get boring after a while. Game dev however is a totally refreshing change, a brave new world that I am happy to have found. Enjoy the game Finally, make sure that you make something you actually like yourself. If you get bored by your very own game, it is almost certain that you will drop out midway… or that you will end up with a horrible game. We decided to start our endeavor with an all-time classic. Who does not love Arkanoid? So, that’s all folks… Don’t forget to share! And of course, do not forget to check out our new Android Game, ArkDroid. Your feedback will be more than helpful! Related Articles :JCG Studios – ArkDroid official launch Android Game Development Tutorials...

The Architecture Spike Kata

Do you know how to apply coding practices to the technology stack that you use on a daily basis? Do you know how the technology stack works? For many programmers, it’s easy enough to use test-driven development with a trivial example, but it can be very hard to know how to apply it to the problems you face every day in your job. Java web+database applications are usually filled to the brim with technologies. Many of these are hard to test and many of these may not add value. In order to explore TDD and Java applications, I practiced the Java EE Spike Kata in 2010. Here’s a video of me and Anders Karlsen doing this kata at JavaZone 2010. A similar approach is likely useful for programmers using any technology. Therefore, I give you: the rules of the Architecture Spike Kata. The problem Create a web application that lets users register a Person with names and search for people. The Person objects should be saved in a data store that is similar to the technology you use daily (probably a relational database). The goal is to get a spike working as quickly as possible, so in the first iteration, the Person entity should probably only contain one field. You can add more fields and refactor the application later. The rules The most important rules are Robert Martin‘s three rules of test-driven development: No code without test (that is, the code should never do something that isn’t required in order to get a test to pass). Only enough test to get to red (that is, the tests should run, give an error message, and that error message should be correct). Only enough code to get to green (that is, the tests should run and not give an error). (My addition: Refactor on green without adding functionality.) Secondly, the application should be driven from the outside in. That is, your first test should be a top-level acceptance test that tests through http and html. It’s okay to comment out or @Ignore this test after it has run red for the first time. 
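A minimal sketch of such an outside-in acceptance test, in the spirit of the kata's bare-minimum stack. To keep the example self-contained and runnable, the JDK's built-in com.sun.net.httpserver stands in for the application under test; in the kata you would start your own servlet-based application instead, and the /person endpoint and response body here are made up.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class AcceptanceTestSketch {
    public static void main(String[] args) throws Exception {
        // Stand-in application: one endpoint that returns the "saved" person as HTML.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/person", exchange -> {
            byte[] body = "<html><body>Johannes</body></html>".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();

        try {
            // The acceptance test: drive the app over plain HTTP and assert on the HTML.
            URL url = new URL("http://localhost:" + server.getAddress().getPort() + "/person");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            try (InputStream in = conn.getInputStream()) {
                String html = new String(in.readAllBytes(), StandardCharsets.UTF_8);
                if (conn.getResponseCode() != 200 || !html.contains("Johannes")) {
                    throw new AssertionError("Acceptance test failed");
                }
            }
            System.out.println("acceptance test green");
        } finally {
            server.stop(0);
        }
    }
}
```

Run against an empty application, the test goes red at the connection step; that red test is what then drives the first servlet into existence.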
Lastly, you should not introduce any technology before the pain of not doing so is blinding. The first time you do the kata in a language, don’t use a web framework beyond the language minimum (in Java, this means Servlets, in node.js it’s require('http'), in Ruby it means Rack). Don’t use an Object-Relational Mapping framework. Don’t use a dependency injection framework. Most definitely don’t use an application generator like Rails scaffold, Spring Roo or Lift. These frameworks can be real time savers, but this kata is about understanding how the underlying technology works. As a second iteration, use the technologies you use on a daily basis, but this time set them up from scratch. For example, if your project uses Hibernate, try configuring the session factory by hand. By using frameworks in the simplest way possible, you’ll both learn more about what they bring to the table and how to use them properly. For complex technology like Hibernate, there’s no substitute for deeper understanding. What to expect So far, I’ve only done the Architecture Spike Kata in Java. But on the other hand, I’ve done it around 50 times together with more than ten other developers. I’ve written about how to get started with the Java EE Spike Kata (in Norwegian) on my blog before. This is what I’ve learned about working with web applications in Java: Most Java web frameworks seem to harm more than they help. Hibernate is a bitch to set up, but once it’s working, it saves a lot of hassle. Using TDD with Hibernate helped me understand how to use Hibernate more effectively. I’ve stopped using dependency injection frameworks (but kept on using dependency injection as a pattern). I have learned several ways to test web applications and database access, independently and integrated. I no longer have to expend mental energy to write tests for full stack applications. The first time I do this kata with another developer, it takes around 3 to 5 hours, depending on the experience level of my pair. 
After running through it a few times, most developers can complete the task in less than an hour. We get better through practice, and the Architecture Spike Kata is a way to practice TDD with the technologies that you use daily and get a better understanding of what’s going on. Reference: The Architecture Spike Kata from our JCG partner Johannes Brodwall at the Thinking Inside a Bigger Box blog. Related Articles :How to start a Coding Dojo Iterationless Development – the latest New New Thing You can’t be Agile in Maintenance? (Part 1) Even Backlogs Need Grooming Agile software development recommendations for users and new adopters...

Red Hat Openshift: Getting started – Java EE6 in the Cloud

For a while now I’ve been looking into ‘the cloud’: looking into its features, what it can do, why we should switch to ‘the cloud’, going to talks, talking to people like @maartenballiauw, who is a cloud specialist at RealDolmen. I’ve already deployed an application on Google App Engine (for Java) and I really liked the experience. Some new concepts come into play, like distributed data and so on. But in the recent chain of events, being more interested in the future of Java EE, I looked into OpenShift. OpenShift is a PaaS offered by Red Hat. The basic idea is to run Java EE 6 in the cloud, and that is exactly what we want to do. I’m using Ubuntu for this, so all my commands are based upon the Ubuntu distro. Be sure to register for an account on openshift.redhat.com; you will need it to create a domain and application. Starting off, we have to install Ruby gems. The Ruby gems are the interface to manage our cloud domain. So first we install the gems. $ sudo apt-get install git ruby rubygems ruby1.8-dev We need git to checkout the code; the Ruby packages are needed to install the gems. Now we install the gems. $ sudo gem install rhc The rhc (Red Hat Cloud, I presume) gem is the base for all the commands that will be used to manipulate our OpenShift domain. The gems are installed by default in the /var/lib/gems/1.8/gems/bin folder. We’d best add it to our $PATH variable for easy access. Now everything is ready to start working with OpenShift. First we want to create a domain. The domain is your work directory on OpenShift. Choose something unique and you will be able to access your applications via http://projectname-domainname.rhcloud.com. To create your domain we need the ‘rhc-create-domain’ command. $ ./rhc-create-domain -n domainname -l loginid Now you will be prompted for your password; just type it and you are done. Your domain is created. With your domain set up, we now want to create an application. 
$ ./rhc-create-app -a applicationName -t jbossas-7.0 The -t parameter indicates we will be running the application on JBoss AS 7.0. The cool thing about creating an application on OpenShift is that we now have a fully set up git repository. When we push, the application is pushed to OpenShift. To start off, I forked the seambooking example on github (https://github.com/openshift/seambooking-example). I did not really need to fork it, but it offers a good basic setup for an OpenShift project. Once I added the code to my OpenShift git repository, I could simply do a git push. $ git push The sample app is running, running in the cloud… More information on http://openshift.redhat.com and https://github.com/openshift/seambooking-example Reference:  Red Hat Openshift: Getting started – Java EE6 in the Cloud from our JCG partner Jelle Victoor at the Styled Ideas blog. Related Articles :From Spring to Java EE 6 Java EE6 CDI, Named Components and Qualifiers Oracle WebLogic Java Cloud Service – Behind the scenes. Java EE Past, Present, & Cloud 7 Developing and Testing in the Cloud...

JAXB, SAX, DOM Performance

This post investigates the performance of unmarshalling an XML document to Java objects using a number of different approaches. The XML document is very simple. It contains a collection of Person entities.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<persons>
    <person>
        <id>person0</id>
        <name>name0</name>
    </person>
    <person>
        <id>person1</id>
        <name>name1</name>
    </person>
    ...

There is a corresponding Person Java object for the Person entity in the XML:

@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "", propOrder = { "id", "name" })
public class Person {
    private String id;
    private String name;

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String value) { this.name = value; }
}

and a PersonList object to represent a collection of Persons:

@XmlAccessorType(XmlAccessType.FIELD)
@XmlRootElement(name = "persons")
public class PersonList {
    @XmlElement(name = "person")
    private List<Person> personList = new ArrayList<Person>();

    public List<Person> getPersons() { return personList; }
    public void setPersons(List<Person> persons) { this.personList = persons; }
}

The approaches investigated were: various flavours of JAXB, SAX, and DOM. In all cases, the objective was to get the entities in the XML document into the corresponding Java objects. The JAXB annotations on the Person and PersonList POJOs are used in the JAXB tests. The same classes can be used in the SAX and DOM tests (the annotations will just be ignored). Initially the reference implementations for JAXB, SAX and DOM were used. The Woodstox StAX parser was then used; this would have been called in some of the JAXB unmarshalling tests. The tests were carried out on my Dell laptop, a Pentium Dual-Core CPU, 2.1 GHz running Windows 7. Test 1 – Using JAXB to unmarshall a Java File. 
@Test
public void testUnMarshallUsingJAXB() throws Exception {
    JAXBContext jc = JAXBContext.newInstance(PersonList.class);
    Unmarshaller unmarshaller = jc.createUnmarshaller();
    PersonList obj = (PersonList) unmarshaller.unmarshal(new File(filename));
}

Test 1 illustrates how simple the programming model for JAXB is. It is very easy to go from an XML file to Java objects. There is no need to get involved with the nitty gritty details of marshalling and parsing. Test 2 – Using JAXB to unmarshall a StreamSource Test 2 is similar to Test 1, except this time a StreamSource object wraps around a File object. The StreamSource object gives a hint to the JAXB implementation to stream the file.

@Test
public void testUnMarshallUsingJAXBStreamSource() throws Exception {
    JAXBContext jc = JAXBContext.newInstance(PersonList.class);
    Unmarshaller unmarshaller = jc.createUnmarshaller();
    StreamSource source = new StreamSource(new File(filename));
    PersonList obj = (PersonList) unmarshaller.unmarshal(source);
}

Test 3 – Using JAXB to unmarshall a StAX XMLStreamReader Again similar to Test 1, except this time an XMLStreamReader instance wraps a FileReader instance which is unmarshalled by JAXB.

@Test
public void testUnMarshallingWithStAX() throws Exception {
    FileReader fr = new FileReader(filename);
    JAXBContext jc = JAXBContext.newInstance(PersonList.class);
    Unmarshaller unmarshaller = jc.createUnmarshaller();
    XMLInputFactory xmlif = XMLInputFactory.newInstance();
    XMLStreamReader xmler = xmlif.createXMLStreamReader(fr);
    PersonList obj = (PersonList) unmarshaller.unmarshal(xmler);
}

Test 4 – Just use DOM This test uses no JAXB and instead just uses the JAXP DOM approach. This means straight away more code is required than any JAXB approach. 
@Test
public void testParsingWithDom() throws Exception {
    DocumentBuilderFactory domFactory = DocumentBuilderFactory.newInstance();
    DocumentBuilder builder = domFactory.newDocumentBuilder();
    Document doc = builder.parse(filename);
    List<Person> personsAsList = new ArrayList<Person>();
    // Select the person elements ("persons" in the original listing only matches the root element).
    NodeList persons = doc.getElementsByTagName("person");
    for (int i = 0; i < persons.getLength(); i++) {
        Element person = (Element) persons.item(i);
        NodeList children = person.getChildNodes();
        Person newperson = new Person();
        for (int j = 0; j < children.getLength(); j++) {
            Node child = children.item(j); // was item(i) in the original listing
            // getTextContent() reads the element's text; getNodeValue() on an element returns null.
            if (child.getNodeName().equalsIgnoreCase("id")) {
                newperson.setId(child.getTextContent());
            } else if (child.getNodeName().equalsIgnoreCase("name")) {
                newperson.setName(child.getTextContent());
            }
        }
        personsAsList.add(newperson);
    }
}

Test 5 – Just use SAX Test 5 uses no JAXB and uses SAX to parse the XML document. The SAX approach involves more code and more complexity than any JAXB approach. The developer has to get involved with the parsing of the document. 
@Test
public void testParsingWithSAX() throws Exception {
    SAXParserFactory factory = SAXParserFactory.newInstance();
    SAXParser saxParser = factory.newSAXParser();
    final List<Person> persons = new ArrayList<Person>();
    DefaultHandler handler = new DefaultHandler() {
        boolean bpersonId = false;
        boolean bpersonName = false;

        public void startElement(String uri, String localName, String qName, Attributes attributes) throws SAXException {
            if (qName.equalsIgnoreCase("id")) {
                bpersonId = true;
                Person person = new Person();
                persons.add(person);
            } else if (qName.equalsIgnoreCase("name")) {
                bpersonName = true;
            }
        }

        public void endElement(String uri, String localName, String qName) throws SAXException {
        }

        public void characters(char ch[], int start, int length) throws SAXException {
            if (bpersonId) {
                String personID = new String(ch, start, length);
                bpersonId = false;
                Person person = persons.get(persons.size() - 1);
                person.setId(personID);
            } else if (bpersonName) {
                String name = new String(ch, start, length);
                bpersonName = false;
                Person person = persons.get(persons.size() - 1);
                person.setName(name);
            }
        }
    };
    saxParser.parse(filename, handler);
}

The tests were run 5 times for each of 3 files containing a collection of Person entities. The first file contained 100 Person entities and was 5K in size. The second contained 10,000 entities and was 500K in size, and the third contained 250,000 Person entities and was 15 MB in size. In no case was any XSD used, or any validation performed. The results are given in result tables where the times for the different runs are comma separated. 
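The measurement loop itself is not shown in the post; a sketch of the kind of harness implied by "run 5 times per file" follows. The Approach interface, the no-op stand-in, and the file name are made up for illustration; in the real tests each of the five unmarshalling methods above would plug in.

```java
import java.util.ArrayList;
import java.util.List;

public class TimingHarness {
    // Stand-in for any of the five unmarshalling approaches under test.
    interface Approach {
        void unmarshal(String filename) throws Exception;
    }

    // Run the approach the given number of times, recording wall-clock time per run.
    static List<Long> time(Approach approach, String filename, int runs) throws Exception {
        List<Long> times = new ArrayList<Long>();
        for (int i = 0; i < runs; i++) {
            long start = System.currentTimeMillis();
            approach.unmarshal(filename);
            times.add(System.currentTimeMillis() - start);
        }
        // With a real parser, the first entry is typically the slowest
        // (class loading, JIT warm-up), matching the tables below.
        return times;
    }

    public static void main(String[] args) throws Exception {
        List<Long> t = time(f -> { /* no-op stand-in */ }, "persons-100.xml", 5);
        System.out.println(t.size());
    }
}
```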
TEST RESULTS The tests were first run using JDK 1.6.26, 32 bit, with the reference implementations for SAX, DOM and JAXB shipped with the JDK.

Unmarshall Type     | 100 Persons (ms) | 10K Persons (ms)     | 250K Persons (ms)
JAXB (Default)      | 48, 13, 5, 4, 4  | 78, 52, 47, 50, 50   | 1522, 1457, 1353, 1308, 1317
JAXB (StreamSource) | 11, 6, 3, 3, 2   | 44, 44, 48, 45, 43   | 1191, 1364, 1144, 1142, 1136
JAXB (StAX)         | 18, 2, 1, 1, 1   | 111, 136, 89, 91, 92 | 2693, 3058, 2495, 2472, 2481
DOM                 | 16, 2, 2, 2, 2   | 89, 50, 55, 53, 50   | 1992, 2198, 1845, 1776, 1773
SAX                 | 4, 2, 1, 1, 1    | 29, 34, 23, 26, 26   | 704, 669, 605, 589, 591

JDK 1.6.26 test comments: The first time unmarshalling happens is usually the longest. The memory usage for JAXB and SAX is similar: about 2 MB for the file with 10,000 persons and 36–38 MB for the file with 250,000. DOM memory usage is far higher: 6 MB for the 10,000 persons file, and greater than 130 MB for the 250,000 persons file. The performance times for pure SAX are better, particularly for very large files.

The exact same tests were run again, using the same JDK (1.6.26) but this time with the Woodstox implementation of StAX parsing.

Unmarshall Type     | 100 Persons (ms) | 10K Persons (ms)    | 250K Persons (ms)
JAXB (Default)      | 168, 3, 5, 8, 3  | 294, 43, 46, 43, 42 | 2055, 1354, 1328, 1319, 1319
JAXB (StreamSource) | 11, 3, 3, 3, 4   | 43, 42, 47, 44, 42  | 1147, 1149, 1176, 1173, 1159
JAXB (StAX)         | 30, 0, 1, 1, 0   | 67, 37, 40, 37, 37  | 1301, 1236, 1223, 1336, 1297
DOM                 | 103, 1, 1, 1, 2  | 136, 52, 49, 49, 50 | 1882, 1883, 1821, 1835, 1822
SAX                 | 4, 2, 2, 1, 1    | 31, 25, 25, 38, 25  | 613, 609, 607, 595, 613

JDK 1.6.26 + Woodstox test comments: Again, the first unmarshalling run is usually proportionally longer. Again, memory usage for SAX and JAXB is very similar; both are far better than DOM. The results are very similar to the first test run, except that the JAXB (StAX) times have improved considerably, thanks to the Woodstox implementation of StAX parsing being used. The performance times for pure SAX are still the best. 
Particularly for large files. The exact same tests were run again, but this time using JDK 1.7.02 and the Woodstox implementation of StAX parsing.

Unmarshall Type     | 100 Persons (ms) | 10,000 Persons (ms) | 250,000 Persons (ms)
JAXB (Default)      | 165, 5, 3, 3, 5  | 611, 23, 24, 46, 28 | 578, 539, 511, 511, 519
JAXB (StreamSource) | 13, 4, 3, 4, 3   | 43, 24, 21, 26, 22  | 678, 520, 509, 504, 627
JAXB (StAX)         | 21, 1, 0, 0, 0   | 300, 69, 20, 16, 16 | 637, 487, 422, 435, 458
DOM                 | 22, 2, 2, 2, 2   | 420, 25, 24, 23, 24 | 1304, 807, 867, 747, 1189
SAX                 | 7, 2, 2, 1, 1    | 169, 15, 15, 19, 14 | 366, 364, 363, 360, 358

JDK 7 + Woodstox test comments: The performance times for JDK 7 are much better overall. There are some anomalies: the first time the 100 persons file and the 10,000 persons file are parsed. The memory usage is slightly higher. For SAX and JAXB it is 2–4 MB for the 10,000 persons file and 45–49 MB for the 250,000 persons file. For DOM it is higher again: 5–7.5 MB for the 10,000 persons file and 136–143 MB for the 250,000 persons file.

Note (w.r.t. all tests): No memory analysis was done for the 100 persons file; the memory usage was just too small for the figures to be meaningful. The first initialisation of a JAXB context can take up to 0.5 seconds. This was not included in the test results as it only happens the very first time; after that, the JVM initialises the context very quickly (consistently < 5 ms). If you notice this behaviour with whatever JAXB implementation you are using, consider initialising at start up. These tests use a very simple XML file. In reality there would be more object types and more complex XML. However, these tests should still provide guidance.

Conclusions: The performance times for pure SAX are slightly better than JAXB, but only for very large files. Unless you are using very large files, the performance differences are not worth worrying about. The programming model advantages of JAXB win out over the complexity of the SAX programming model.  
Don’t forget JAXB also provides random access like DOM does; SAX does not provide this. Performance times look a lot better with Woodstox, if JAXB / StAX is being used. Performance times with the 64 bit JDK 7 look a lot better; memory usage looks slightly higher. Reference: JAXB, SAX, DOM Performance from our JCG partner Alex Staveley at Dublin’s Tech Blog. Related Articles :Using JAXB to generate XML from XSD Mapping Objects to Multiple XML Schemas – Weather Example Develop Restful web services using Spring MVC Android XML Binding with Simple Framework Tutorial Boost your Android XML parsing with XML Pull...

Spring MVC and REST at Google App Engine

Some time ago I wrote about how to implement your Restful Web API using Spring MVC. Read my previous post to know about it. In that post a simple Rest example was developed. For testing the application, the file was copied into a web server (Tomcat for example), and then by accessing http://localhost:8080/RestServer/characters/1 the information for character 1 was returned. In this post I am going to explain how to transform that application for Google App Engine and deploy it into Google’s infrastructure using Maven. Of course in this case we are going to deploy a Rest Spring MVC application, but the same approach can be used for migrating a Spring MVC web application (or any other application developed with another web framework) to GAE. First of all, you should obviously create a Google Account and register a new application (remember the name because it will be used in the next step). After that you can start the migration. Three changes are required: create appengine-web.xml defining the application name; add a server tag to settings.xml with Google account information; and modify pom.xml to add the GAE plugin and its dependencies. Let’s start with appengine-web.xml. This file is used by GAE to configure the application and is created in the WEB-INF directory (at the same level as web.xml).

<?xml version="1.0" encoding="utf-8"?>
<appengine-web-app xmlns="http://appengine.google.com/ns/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://appengine.google.com/ns/1.0 http://googleappengine.googlecode.com/svn/branches/1.2.1/java/docs/appengine-web.xsd">
    <application>alexsotoblog</application>
    <version>1</version>
    <system-properties>
        <property name="java.util.logging.config.file" value="WEB-INF/classes/logging.properties"/>
    </system-properties>
    <precompilation-enabled>false</precompilation-enabled>
    <sessions-enabled>true</sessions-enabled>
</appengine-web-app>

The most important field is the application tag. 
This tag contains the name of our application (defined when you registered the new Google application). Other tags are version, system properties and environment variables, and misc configuration like whether you want precompilation to enhance performance or whether your application requires sessions. Your project should not be modified any further; from now on only Maven files will be touched. In settings.xml, account information should be added:

<settings xmlns="http://maven.apache.org/SETTINGS/1.1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.1.0 http://maven.apache.org/xsd/settings-1.1.0.xsd">
    <localRepository>/media/share/maven_repo</localRepository>
    <servers>
        <server>
            <id>appengine.google.com</id>
            <username>my_account@gmail.com</username>
            <password>my_password</password>
        </server>
    </servers>
</settings>

See that it is as easy as registering any other server in Maven. And finally the most tedious part: modifying pom.xml. The first thing is adding new properties:

<gae.home>/media/share/maven_repo/com/google/appengine/appengine-java-sdk/1.5.5/appengine-java-sdk-1.5.5</gae.home>
<gaeApplicationName>alexsotoblog</gaeApplicationName>
<gaePluginVersion>0.9.0</gaePluginVersion>
<gae.version>1.5.5</gae.version>
<!-- Upload to http://test.latest.<applicationName>.appspot.com by default -->
<gae.application.version>test</gae.application.version>

In the first line we are defining the Appengine Java SDK location. If you have already installed it, insert the location in this tag; if not, copy the same location as this pom and simply change the Maven repository directory, in my case /media/share/maven_repo, to yours. Typically your Maven repository location will be /home/user/.m2/repository. Maven will download the SDK for you at deploy time. The next step is adding the Maven GAE repository. 
<repositories>
    <repository>
        <id>maven-gae-plugin-repo</id>
        <url>http://maven-gae-plugin.googlecode.com/svn/repository</url>
        <name>maven-gae-plugin repository</name>
    </repository>
</repositories>

<pluginRepositories>
    <pluginRepository>
        <id>maven-gae-plugin-repo</id>
        <name>Maven Google App Engine Repository</name>
        <url>http://maven-gae-plugin.googlecode.com/svn/repository/</url>
    </pluginRepository>
</pluginRepositories>

Because our project is a dummy project, DataNucleus is not used. For more complex projects where database access is required using, for example, JDO, the following dependencies should be added:

<dependency>
    <groupId>javax.jdo</groupId>
    <artifactId>jdo2-api</artifactId>
    <version>2.3-eb</version>
    <exclusions>
        <exclusion>
            <groupId>javax.transaction</groupId>
            <artifactId>transaction-api</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>com.google.appengine.orm</groupId>
    <artifactId>datanucleus-appengine</artifactId>
    <version>1.0.6.final</version>
</dependency>
<dependency>
    <groupId>org.datanucleus</groupId>
    <artifactId>datanucleus-core</artifactId>
    <version>1.1.5</version>
    <scope>runtime</scope>
    <exclusions>
        <exclusion>
            <groupId>javax.transaction</groupId>
            <artifactId>transaction-api</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>com.google.appengine</groupId>
    <artifactId>geronimo-jta_1.1_spec</artifactId>
    <version>1.1.1</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>com.google.appengine</groupId>
    <artifactId>geronimo-jpa_3.0_spec</artifactId>
    <version>1.1.1</version>
    <scope>runtime</scope>
</dependency>

And in case you are using DataNucleus, the maven-datanucleus-plugin should be registered. Take care to configure it properly depending on your project.

<plugin>
    <groupId>org.datanucleus</groupId>
    <artifactId>maven-datanucleus-plugin</artifactId>
    <version>1.1.4</version>
    <configuration>
        <!-- Make sure this path contains your persistent classes! 
-->
        <mappingIncludes>**/model/*.class</mappingIncludes>
        <verbose>true</verbose>
        <enhancerName>ASM</enhancerName>
        <api>JDO</api>
    </configuration>
    <executions>
        <execution>
            <phase>compile</phase>
            <goals>
                <goal>enhance</goal>
            </goals>
        </execution>
    </executions>
    <dependencies>
        <dependency>
            <groupId>org.datanucleus</groupId>
            <artifactId>datanucleus-core</artifactId>
            <version>1.1.5</version>
            <exclusions>
                <exclusion>
                    <groupId>javax.transaction</groupId>
                    <artifactId>transaction-api</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.datanucleus</groupId>
            <artifactId>datanucleus-rdbms</artifactId>
            <version>1.1.5</version>
        </dependency>
        <dependency>
            <groupId>org.datanucleus</groupId>
            <artifactId>datanucleus-enhancer</artifactId>
            <version>1.1.5</version>
        </dependency>
    </dependencies>
</plugin>

Now the Google App Engine dependencies are added:

<dependency>
    <groupId>com.google.appengine</groupId>
    <artifactId>appengine-api-1.0-sdk</artifactId>
    <version>${gae.version}</version>
</dependency>
<dependency>
    <groupId>com.google.appengine</groupId>
    <artifactId>appengine-tools-api</artifactId>
    <version>1.3.7</version>
</dependency>

Then if you want to test GAE functionalities (not used in our dummy project), the following GAE libraries are added:

<dependency>
    <groupId>com.google.appengine</groupId>
    <artifactId>appengine-api-labs</artifactId>
    <version>${gae.version}</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>com.google.appengine</groupId>
    <artifactId>appengine-api-stubs</artifactId>
    <version>${gae.version}</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>com.google.appengine</groupId>
    <artifactId>appengine-testing</artifactId>
    <version>${gae.version}</version>
    <scope>test</scope>
</dependency>

The next change is a modification of maven-war-plugin to include appengine-web.xml in the generated package:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-war-plugin</artifactId>
    <configuration>
        <webResources>
            <resource>
<directory>src/main/webapp</directory> <filtering>true</filtering> <includes> <include>**/appengine-web.xml</include> </includes> </resource> </webResources> </configuration> </plugin>And finally adding maven-gae-plugin and configuring it to upload application to appspot. <plugin> <groupId>net.kindleit</groupId> <artifactId>maven-gae-plugin</artifactId> <version>${gaePluginVersion}</version> <configuration> <serverId>appengine.google.com</serverId> </configuration> <dependencies> <dependency> <groupId>net.kindleit</groupId> <artifactId>gae-runtime</artifactId> <version>${gae.version}</version> <type>pom</type> </dependency> </dependencies> </plugin>See that <serviceId> tag contains the server name defined previously in settings.xml file. Also if you are using maven-release-plugin you can upload application to the appspot automatically, during release:perform goal: <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-release-plugin</artifactId> <version>2.2.1</version> <configuration> <goals>gae:deploy</goals> </configuration> </plugin>Now run gae:deploy goal. If you have already installed Appengine Java SDK, then your application will be uploaded to your GAE site. But if it is the first time you run the plugin, you will receive an error. Do not panic, this error occurs because Maven plugin does not find Appengine SDK into directory you specified in <gae.home> tag. But if you have configured gae.home location into your local Maven repository, simply run gae:unpack goal, and SDK will be installed correctly so when you rerun gae:deploy your application will be uploaded into Google infrastructure. In post example you can go to http://alexsotoblog.appspot.com/characters/1http://alexsotoblog.appspot.com/characters/1 and character information in JSON format is displayed into your browser. As I have noted at the beginning of the post, the same process can be used for any web application, not only for Spring Rest MVC. 
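For reference, the <serverId> used by the maven-gae-plugin must match a <server> entry holding your Google account credentials in your ~/.m2/settings.xml. A minimal sketch (the username and password values here are placeholders, not taken from this post):

```xml
<settings>
  <servers>
    <!-- The id must match the <serverId> configured for maven-gae-plugin -->
    <server>
      <id>appengine.google.com</id>
      <!-- Placeholder credentials: substitute your own Google account -->
      <username>your-account@gmail.com</username>
      <password>your-password</password>
    </server>
  </servers>
</settings>
```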
For teaching purposes, all modifications have been made in the application pom. My advice is to create a parent pom with the GAE-related tags, so that each project that must be uploaded to Google App Engine extends the same pom file. I hope you have found this post useful. This week I am at Devoxx, meet me there ;) I will be speaking on Thursday 17th at 13:00 about Speeding Up JavaScript & CSS Download Times With Aggregation and Minification. Full pom file:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.springframework</groupId>
  <artifactId>rest</artifactId>
  <name>Rest</name>
  <packaging>war</packaging>
  <version>1.0.0-BUILD-SNAPSHOT</version>

  <properties>
    <java-version>1.6</java-version>
    <org.springframework-version>3.0.4.RELEASE</org.springframework-version>
    <org.aspectj-version>1.6.9</org.aspectj-version>
    <org.slf4j-version>1.5.10</org.slf4j-version>
    <!-- Specify AppEngine version for your project. It should match the SDK version
         pointed to by the ${gae.home} property (typically, the one used by your Eclipse plug-in) -->
    <gae.home>/home/alex/.m2/repository/com/google/appengine/appengine-java-sdk/1.5.5/appengine-java-sdk-1.5.5</gae.home>
    <gaeApplicationName>alexsotoblog</gaeApplicationName>
    <gaePluginVersion>0.9.0</gaePluginVersion>
    <gae.version>1.5.5</gae.version>
    <!-- Upload to http://test.latest.<applicationName>.appspot.com by default -->
    <gae.application.version>test</gae.application.version>
  </properties>

  <dependencies>
    <!-- Rest -->
    <dependency>
      <groupId>com.sun.xml.bind</groupId>
      <artifactId>jaxb-impl</artifactId>
      <version>2.2.4-1</version>
    </dependency>
    <dependency>
      <groupId>org.codehaus.jackson</groupId>
      <artifactId>jackson-core-lgpl</artifactId>
      <version>1.8.5</version>
    </dependency>
    <dependency>
      <groupId>org.codehaus.jackson</groupId>
      <artifactId>jackson-mapper-lgpl</artifactId>
      <version>1.8.5</version>
    </dependency>

    <!-- GAE libraries for local testing as described here:
         http://code.google.com/appengine/docs/java/howto/unittesting.html -->
    <dependency>
      <groupId>com.google.appengine</groupId>
      <artifactId>appengine-api-labs</artifactId>
      <version>${gae.version}</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>com.google.appengine</groupId>
      <artifactId>appengine-api-stubs</artifactId>
      <version>${gae.version}</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>com.google.appengine</groupId>
      <artifactId>appengine-testing</artifactId>
      <version>${gae.version}</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>com.google.appengine</groupId>
      <artifactId>appengine-api-1.0-sdk</artifactId>
      <version>${gae.version}</version>
    </dependency>
    <dependency>
      <groupId>com.google.appengine</groupId>
      <artifactId>appengine-tools-api</artifactId>
      <version>1.3.7</version>
    </dependency>

    <!-- Spring -->
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-context</artifactId>
      <version>${org.springframework-version}</version>
      <exclusions>
        <!-- Exclude Commons Logging in favor of SLF4j -->
        <exclusion>
          <groupId>commons-logging</groupId>
          <artifactId>commons-logging</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-webmvc</artifactId>
      <version>${org.springframework-version}</version>
    </dependency>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-oxm</artifactId>
      <version>${org.springframework-version}</version>
    </dependency>

    <!-- AspectJ -->
    <dependency>
      <groupId>org.aspectj</groupId>
      <artifactId>aspectjrt</artifactId>
      <version>${org.aspectj-version}</version>
    </dependency>

    <!-- Logging -->
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-api</artifactId>
      <version>${org.slf4j-version}</version>
    </dependency>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>jcl-over-slf4j</artifactId>
      <version>${org.slf4j-version}</version>
      <scope>runtime</scope>
    </dependency>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
      <version>${org.slf4j-version}</version>
      <scope>runtime</scope>
    </dependency>
    <dependency>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
      <version>1.2.15</version>
      <exclusions>
        <exclusion>
          <groupId>javax.mail</groupId>
          <artifactId>mail</artifactId>
        </exclusion>
        <exclusion>
          <groupId>javax.jms</groupId>
          <artifactId>jms</artifactId>
        </exclusion>
        <exclusion>
          <groupId>com.sun.jdmk</groupId>
          <artifactId>jmxtools</artifactId>
        </exclusion>
        <exclusion>
          <groupId>com.sun.jmx</groupId>
          <artifactId>jmxri</artifactId>
        </exclusion>
      </exclusions>
      <scope>runtime</scope>
    </dependency>

    <!-- @Inject -->
    <dependency>
      <groupId>javax.inject</groupId>
      <artifactId>javax.inject</artifactId>
      <version>1</version>
    </dependency>

    <!-- Servlet -->
    <dependency>
      <groupId>javax.servlet</groupId>
      <artifactId>servlet-api</artifactId>
      <version>2.5</version>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>javax.servlet.jsp</groupId>
      <artifactId>jsp-api</artifactId>
      <version>2.1</version>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>javax.servlet</groupId>
      <artifactId>jstl</artifactId>
      <version>1.2</version>
    </dependency>

    <!-- Test -->
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.7</version>
      <scope>test</scope>
    </dependency>
  </dependencies>

  <repositories>
    <!-- For testing against latest Spring snapshots -->
    <repository>
      <id>org.springframework.maven.snapshot</id>
      <name>Spring Maven Snapshot Repository</name>
      <url>http://maven.springframework.org/snapshot</url>
      <releases>
        <enabled>false</enabled>
      </releases>
      <snapshots>
        <enabled>true</enabled>
      </snapshots>
    </repository>
    <!-- For developing against latest Spring milestones -->
    <repository>
      <id>org.springframework.maven.milestone</id>
      <name>Spring Maven Milestone Repository</name>
      <url>http://maven.springframework.org/milestone</url>
      <snapshots>
        <enabled>false</enabled>
      </snapshots>
    </repository>
    <!-- GAE repositories -->
    <repository>
      <id>maven-gae-plugin-repo</id>
      <url>http://maven-gae-plugin.googlecode.com/svn/repository</url>
      <name>maven-gae-plugin repository</name>
    </repository>
  </repositories>

  <pluginRepositories>
    <pluginRepository>
      <id>maven-gae-plugin-repo</id>
      <name>Maven Google App Engine Repository</name>
      <url>http://maven-gae-plugin.googlecode.com/svn/repository/</url>
    </pluginRepository>
  </pluginRepositories>

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <configuration>
          <source>${java-version}</source>
          <target>${java-version}</target>
        </configuration>
      </plugin>
      <!-- Adding appengine-web into war -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-war-plugin</artifactId>
        <configuration>
          <webResources>
            <resource>
              <directory>src/main/webapp</directory>
              <filtering>true</filtering>
              <includes>
                <include>**/appengine-web.xml</include>
              </includes>
            </resource>
          </webResources>
          <warName>abc</warName>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-dependency-plugin</artifactId>
        <executions>
          <execution>
            <id>install</id>
            <phase>install</phase>
            <goals>
              <goal>sources</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>aspectj-maven-plugin</artifactId>
        <!-- Have to use version 1.2 since version 1.3 does not appear to work with ITDs -->
        <version>1.2</version>
        <dependencies>
          <!-- You must use Maven 2.0.9 or above or these are ignored (see MNG-2972) -->
          <dependency>
            <groupId>org.aspectj</groupId>
            <artifactId>aspectjrt</artifactId>
            <version>${org.aspectj-version}</version>
          </dependency>
          <dependency>
            <groupId>org.aspectj</groupId>
            <artifactId>aspectjtools</artifactId>
            <version>${org.aspectj-version}</version>
          </dependency>
        </dependencies>
        <executions>
          <execution>
            <goals>
              <goal>compile</goal>
              <goal>test-compile</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <outxml>true</outxml>
          <source>${java-version}</source>
          <target>${java-version}</target>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <configuration>
          <junitArtifactName>junit:junit</junitArtifactName>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>tomcat-maven-plugin</artifactId>
        <version>1.0-beta-1</version>
      </plugin>
      <!-- The actual maven-gae-plugin. Type "mvn gae:run" to run the project,
           "mvn gae:deploy" to upload to GAE. -->
      <plugin>
        <groupId>net.kindleit</groupId>
        <artifactId>maven-gae-plugin</artifactId>
        <version>${gaePluginVersion}</version>
        <configuration>
          <serverId>appengine.google.com</serverId>
        </configuration>
        <dependencies>
          <dependency>
            <groupId>net.kindleit</groupId>
            <artifactId>gae-runtime</artifactId>
            <version>${gae.version}</version>
            <type>pom</type>
          </dependency>
        </dependencies>
      </plugin>
    </plugins>
  </build>
</project>

Download Code.
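The parent-pom approach suggested earlier could be sketched as follows. This is a hypothetical layout (the com.example coordinates are invented for illustration): the parent pom gathers the GAE properties and plugin configuration once, and each deployable project inherits them via its <parent> section.

```xml
<!-- Hypothetical parent pom (coordinates are illustrative only) -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>gae-parent</artifactId>
  <version>1.0.0</version>
  <packaging>pom</packaging>
  <properties>
    <gae.version>1.5.5</gae.version>
    <gaePluginVersion>0.9.0</gaePluginVersion>
  </properties>
  <build>
    <!-- pluginManagement lets children pick up this configuration
         without forcing the plugin on non-GAE modules -->
    <pluginManagement>
      <plugins>
        <plugin>
          <groupId>net.kindleit</groupId>
          <artifactId>maven-gae-plugin</artifactId>
          <version>${gaePluginVersion}</version>
          <configuration>
            <serverId>appengine.google.com</serverId>
          </configuration>
        </plugin>
      </plugins>
    </pluginManagement>
  </build>
</project>
```

Each project to be uploaded to Google App Engine then declares `<parent><groupId>com.example</groupId><artifactId>gae-parent</artifactId><version>1.0.0</version></parent>` and only adds its own dependencies, keeping the GAE plumbing in one place.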
Music: http://www.youtube.com/watch?v=Nba3Tr_GLZU

Reference: Spring MVC and REST at Google App Engine from our JCG partner Alex Soto at the One Jar To Rule Them All blog.