What's New Here?

Pass Streams Instead of Lists

Opening disclaimer: this isn’t always a good idea. I’ll present the idea, along with some of the reasons why it’s a good idea, but then I’ll talk about some instances where it’s not so great.

Being Lazy
As you may know, I’ve been dabbling in Python nearly as much as I’ve been working with Java. One thing that I’ve liked about Python ever since I found out about it is generators. They allow for lazy operations on collections, so you can pass iterators/generators around until you actually need the final result of the operations – without affecting the original collection (under most circumstances; you’re not likely to affect it accidentally). I really enjoy the power of this idea. The laziness means you do practically no work until the results are needed, and it also means no memory is wasted on storing intermediate collections.

Being Lazy in Java
Java has iterators too, but not generators. It does, however, have something that works fairly similarly when it comes to lazy operations on collections: Streams. While not quite as versatile as generators in Python, Streams can largely be used the same way.

Passing Streams Around
There are a lot of cases where you should return Streams instead of the resulting Lists (or other collections). This does something for you even beyond the lazy benefits mentioned above. If the receiver of the returned object wants to collect() it into something other than the List you had planned on returning, or wants to reduce() it in a way you never expected, you can give them a Stream and have nothing to worry about. They can then get what they need with a Stream method call or two.

What Sucks About This
There is a problem that can be difficult to deal with when Streams are passed around as if they were collections: they are one-time-use-only. This means that if a function such as the one below wants to take a Stream instead of a List, it can’t do so easily, since it needs to do two separate things with the List.

public static List<Integer> normalize(List<Integer> input) {
    int total = input.stream()
                     .mapToInt(i -> i)
                     .sum();
    return input.stream()
                .map(i -> i * 100 / total)
                .collect(Collectors.toList());
}

In order to take in a Stream instead, you need to collect() it, then run the two operations on the resulting list.

public static Stream<Integer> normalize(Stream<Integer> input) {
    List<Integer> inputList = input.collect(Collectors.toList());
    int total = inputList.stream()
                         .mapToInt(i -> i)
                         .sum();
    return inputList.stream()
                    .map(i -> i * 100 / total);
}

This slightly defeats the purpose of passing Streams around. It’s not horrible, since we’re trying to use a “final” result of the Stream. Except that it’s not a final result: it’s an intermediate result that is used to calculate the next Stream output, and it creates an intermediate collection which wastes memory. There are ways around this (a sketch of one appears at the end of this article), akin to how this “article” solves it, but they’re either complicated to implement or prone to user errors. I guess it’s kind of okay to just use the second method I showed you, since it’s still likely a pretty good performance boost over how the first one did it, but it just bugs me.

Interesting (But Probably A Little Silly) Alternative
If you’re familiar with my posts, you may feel like this article argues against an article I wrote a while back about transforming collections using decorators. Technically, this post does treat that as a rather naive idea, especially since the idea was inspired by Streams.
But there is one major benefit to the decorator idea over the Streams idea presented in this article: you can iterate over the decorated collections over and over again. It’s probably not as efficient as Streams – especially since I’m not sure how to parallelize it – but it certainly has reusability going for it. There’s a chance I’ll look into the idea again and see if I can figure out a better way to do it, but I doubt it.

Outro
So, that’s my idea. You can take it or leave it. I’m not sure how often this will be useful in typical projects, but I’m going to give it a try in my current and future projects. Thanks for reading. If you’ve got an opinion about this, comment below and let me know.

Reference: Pass Streams Instead of Lists from our JCG partner Jacob Zimmerman at the Programming Ideas With Jake blog.
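To illustrate the workaround alluded to above, here is a minimal sketch, not the author’s implementation: instead of a Stream, the function accepts a Supplier of the Stream, so the pipeline can be re-created for each pass without materializing an intermediate list. The class name and example values are made up for illustration.

import java.util.Arrays;
import java.util.List;
import java.util.function.Supplier;
import java.util.stream.Stream;

public class StreamNormalizer {

    // Accept a Supplier so a fresh Stream can be created for each pass,
    // avoiding both the one-time-use problem and the intermediate list.
    public static Stream<Integer> normalize(Supplier<Stream<Integer>> input) {
        int total = input.get()                 // first pass: compute the total
                         .mapToInt(Integer::intValue)
                         .sum();
        return input.get()                      // second pass: scale each element lazily
                    .map(i -> i * 100 / total);
    }

    public static void main(String[] args) {
        List<Integer> values = Arrays.asList(1, 2, 3, 4);
        // The caller hands over values::stream, not a Stream instance.
        normalize(values::stream).forEach(System.out::println); // prints 10 20 30 40
    }
}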
Simplifying JAX-RS caching with CDI

This post explains (via a simple example) how you can use CDI Producers to make it a little easier to leverage cache control semantics in your RESTful services.

The Cache-Control header was added in HTTP 1.1 as a much needed improvement over the Expires header available in HTTP 1.0. RESTful web services can make use of this header in order to scale their applications and make them more efficient, e.g. if you can cache a response of a previous request, then you obviously need not make the same request to the server again if you are certain that your cached data is not stale!

How does JAX-RS help?
JAX-RS has had support for the Cache-Control header since its initial (1.0) version. The CacheControl class represents the real-world Cache-Control HTTP header and provides the ability to configure the header via simple setter methods. More on the CacheControl class in the JAX-RS 2.0 javadocs.

So how do I use the CacheControl class? Just return a Response object and wrap an instance of the CacheControl class around it:

@Path("/testcache")
public class RESTfulResource {
    @GET
    @Produces("text/plain")
    public Response find() {
        CacheControl cc = new CacheControl();
        cc.setMaxAge(20);
        return Response.ok(UUID.randomUUID().toString()).cacheControl(cc).build();
    }
}

Although this is relatively convenient for a single method, repeatedly creating and returning CacheControl objects can get irritating across multiple methods.

CDI Producers to the rescue!
CDI Producers can help inject instances of classes which are not technically beans (as per the strict definition), or of classes over which you do not have control as far as decorating them with scopes and qualifiers is concerned. The idea is to:

Have a custom annotation (@CachControlConfig) to define default values for the Cache-Control header and allow for flexibility in case you want to override it:

@Retention(RUNTIME)
@Target({FIELD, PARAMETER})
public @interface CachControlConfig {
    public boolean isPrivate() default true;
    public boolean noCache() default false;
    public boolean noStore() default false;
    public boolean noTransform() default true;
    public boolean mustRevalidate() default true;
    public boolean proxyRevalidate() default false;
    public int maxAge() default 0;
    public int sMaxAge() default 0;
}

Use a CDI Producer to create an instance of the CacheControl class by using the InjectionPoint object (injected with pleasure by CDI!) depending upon the annotation parameters:

public class CacheControlFactory {

    @Produces
    public CacheControl get(InjectionPoint ip) {
        CachControlConfig ccConfig = ip.getAnnotated().getAnnotation(CachControlConfig.class);
        CacheControl cc = null;
        if (ccConfig != null) {
            cc = new CacheControl();
            cc.setMaxAge(ccConfig.maxAge());
            cc.setMustRevalidate(ccConfig.mustRevalidate());
            cc.setNoCache(ccConfig.noCache());
            cc.setNoStore(ccConfig.noStore());
            cc.setNoTransform(ccConfig.noTransform());
            cc.setPrivate(ccConfig.isPrivate());
            cc.setProxyRevalidate(ccConfig.proxyRevalidate());
            cc.setSMaxAge(ccConfig.sMaxAge());
        }
        return cc;
    }
}

Then just inject the CacheControl instance in your REST resource class and use it in your methods:

@Path("/testcache")
public class RESTfulResource {
    @Inject
    @CachControlConfig(maxAge = 20)
    CacheControl cc;

    @GET
    @Produces("text/plain")
    public Response find() {
        return Response.ok(UUID.randomUUID().toString()).cacheControl(cc).build();
    }
}

Additional thoughts
In this case, the scope of the produced CacheControl instance is @Dependent, i.e. it will live and die with the class which has injected it. Since the JAX-RS resource itself is RequestScoped (by default), and the JAX-RS container creates a new instance for each client request, a new instance of the injected CacheControl will be created along with each HTTP request. You can also introduce CDI qualifiers to further narrow the scopes and account for corner cases.

You might think that the same can be achieved using a JAX-RS filter. That is correct, but you would need to set the Cache-Control header manually (within a mutable MultivaluedMap) and the logic would not be flexible enough to account for different Cache-Control configurations for different scenarios (see the sketch below).

Results of the experiment
Use NetBeans IDE to play with this example (recommended). Deploy the WAR and browse to http://localhost:8080/JAX-RS-Caching-CDI/testcache. A random string will be returned and cached for 20 seconds (as per the configuration via the annotation). A GET request to the same URL within that period will not result in an invocation of your server side REST service; the browser will return the cached value.

Although the code is simple, if you are feeling lazy, you can grab the (maven) project from here and play around. Have fun!

Reference: Simplifying JAX-RS caching with CDI from our JCG partner Abhishek Gupta at the Object Oriented blog.
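To make the filter comparison above concrete, here is a minimal, hypothetical sketch of what the filter-based approach would look like (the class name and hard-coded header value are made up for illustration, not taken from the original post). Note how the header is assembled by hand and applies uniformly to whatever the filter matches, which is exactly the per-resource flexibility the CDI producer approach preserves.

import java.io.IOException;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerResponseContext;
import javax.ws.rs.container.ContainerResponseFilter;
import javax.ws.rs.ext.Provider;

@Provider
public class CacheControlFilter implements ContainerResponseFilter {

    @Override
    public void filter(ContainerRequestContext request, ContainerResponseContext response)
            throws IOException {
        // One hand-built Cache-Control value for every response this filter matches
        response.getHeaders().putSingle("Cache-Control", "private, must-revalidate, max-age=20");
    }
}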
Starting out with jHiccup

After writing my post on ‘How to detect and diagnose slow code in production’ I was encouraged by a reader to try out jHiccup from Azul Systems. Last year I went to a talk by jHiccup’s creator Gil Tene on the correct way to measure latency, where, amongst other things, he introduced us to jHiccup. It had been on my todo list of products to investigate and this gave me the impetus to finally get on with my investigation.

jHiccup measures the system latency over and above your actual program code – for example, GC times and other OS and hardware events that add latency spikes to the smooth running of your program. It is critical to understand these because your program can never have better latencies than the underlying environment on which it is running.

To cut a long story short, I love the tool and think that it’s going to be really useful to me now that I’ve started using it. This post is not about teaching you everything about jHiccup; I refer you to the documentation for that. This post is a ‘starting out with jHiccup’ guide, to show you how I got it running and hopefully whet your appetite to try it out in your own code.

Step 1: Download the product
Download the code; it’s open source and you can get it from here. Unzip the file and you will see jHiccup.jar, which we will use in the next step.

Step 2: Run your program with jHiccup
There are a number of ways of running jHiccup but this is how I did it. All you need to do is add this VM option to your command line (a full example launch command is shown at the end of this post):

-javaagent:jHiccup.jar="-d 0 -i 1000 -l hiccuplog -c"

There are lots of configuration options; the ones chosen here mean:

-d The delay before which to start recording latencies – this allows you to ignore any code warm-up period. (default: after 30s)
-i Interval data, how often the data is recorded. (default: every 5s)
-l The name of the log file to which data is recorded.
-c Launches a control JVM and records data to logFile.c. Super useful to compare against the actual program to see if there was a global event on the machine.

Step 3: Process the log file
Run this command on your log file (you can process both the program log file and the .c control log file):

jHiccupLogProcessor -i hiccuplog -o myhlog

This produces two files; we are interested in the one called myhlog (not myhlog.hgram), which we will use in the final step.

Step 4: Produce graphs in Excel
Now for the really nice bit. Open the spreadsheet jHiccupPlotter.xls and make sure you enable the macros. Just select your file from step 3 and choose a chart title (this is a really useful feature when you come to comparing your graphs) and in a matter of seconds you will have your latency distribution graphs.

Example
I had a program (not particularly latency sensitive) and wanted to understand the effects that the different garbage collectors had on latency. All I had to do was run my program with different garbage collector settings and compare the graphs. Of course this was a slightly manufactured example I happened to have to hand, but you get the point: it’s easy to change JVM settings or code and get comparable results. The control program is also critical to understanding what else is happening on your machine that might be affecting the latency of your program. These are my results – it’s interesting to see how the different GCs affect the latency, and this is demonstrated beautifully using jHiccup.
(Graphs shown in the original post for: the serial collector, the G1 collector, the G1 collector with max pause set to 1ms, the CMS collector, and the parallel GC.)

Reference: Starting out with jHiccup from our JCG partner Daniel Shaya at the Rational Java blog.
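As promised above, here is what a complete launch command might look like once the agent option from Step 2 is added to an ordinary java invocation; the application jar name and heap size are hypothetical placeholders, not taken from the original post:

java -javaagent:jHiccup.jar="-d 0 -i 1000 -l hiccuplog -c" -Xmx1g -jar my-application.jar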
Using Go to build a REST service on top of mongoDB

I’ve been following go (go-lang) for a while now and finally had some time to experiment with it a bit more. In this post we’ll create a simple HTTP server that uses mongoDB as a backend and provides a very basic REST API. In the rest of this article I assume you’ve got a go environment set up and working. If not, look at the go-lang website for instructions (https://golang.org/doc/install).

Before we get started we need to get the mongo drivers for go. In a console just type the following:

go get gopkg.in/mgo.v2

This will install the necessary libraries so we can access mongoDB from our go code. We also need some data to experiment with. We’ll use the same setup as we did in my previous article (http://www.smartjava.org/content/building-rest-service-scala-akka-http-a…).

Loading data into MongoDB
We use some stock related information which you can download from here (http://jsonstudio.com/wp-content/uploads/2014/02/stocks.zip). You can easily do this by executing the following steps. First get the data:

wget http://jsonstudio.com/wp-content/uploads/2014/02/stocks.zip

Start mongodb in a different terminal:

mongod --dbpath ./data/

And finally use mongoimport to import the data:

unzip -c stocks.zip | mongoimport --db akka --collection stocks --jsonArray

And as a quick check run a query to see if everything works:

jos@Joss-MacBook-Pro.local:~$ mongo akka
MongoDB shell version: 2.4.8
connecting to: akka
> db.stocks.findOne({},{Company: 1, Country: 1, Ticker:1 } )
{
    "_id" : ObjectId("52853800bb1177ca391c17ff"),
    "Ticker" : "A",
    "Country" : "USA",
    "Company" : "Agilent Technologies Inc."
}
>

At this point we have our test data and can start creating our go based HTTP server. You can find the complete code in a Gist here: https://gist.github.com/josdirksen/071f26a736eca26d7ea4. In the following sections we’ll look at various parts of this Gist to explain how to set up a go based HTTP server.

The main function
When you run a go application, go will look for the main function. For our server this main function looks like this:

func main() {
    server := http.Server{
        Addr:    ":8000",
        Handler: NewHandler(),
    }

    // start listening
    fmt.Println("Started server 2")
    server.ListenAndServe()
}

This will configure a server to run on port 8000, and any request that comes in will be handled by the NewHandler() instance we create in line 64. We start the server by calling the server.ListenAndServe() function. Now let’s look at our handler that will respond to requests.
The myHandler struct
Let’s first look at what this handler looks like:

// Constructor for the server handlers
func NewHandler() *myHandler {
    h := new(myHandler)
    h.defineMappings()
    return h
}

// Definition of this struct
type myHandler struct {
    // holds the mapping
    mux map[string]func(http.ResponseWriter, *http.Request)
}

// functions defined on struct
func (my *myHandler) defineMappings() {
    session, err := mgo.Dial("localhost")
    if err != nil {
        panic(err)
    }

    // make the mux
    my.mux = make(map[string]func(http.ResponseWriter, *http.Request))

    // matching of request path
    my.mux["/hello"] = requestHandler1
    my.mux["/get"] = my.wrap(requestHandler2, session)
}

// returns a function so that we can use the normal mux functionality and pass in a shared mongo session
func (my *myHandler) wrap(target func(http.ResponseWriter, *http.Request, *mgo.Session), mongoSession *mgo.Session) func(http.ResponseWriter, *http.Request) {
    return func(resp http.ResponseWriter, req *http.Request) {
        target(resp, req, mongoSession)
    }
}

// implements serveHTTP so this struct can act as a http server
func (my *myHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    if h, ok := my.mux[r.URL.String()]; ok {
        // handle paths that are found
        h(w, r)
        return
    } else {
        // handle unhandled paths
        io.WriteString(w, "My server: "+r.URL.String())
    }
}

Let’s split this up and look at the various parts. The first thing we do is define a constructor:

func NewHandler() *myHandler {
    h := new(myHandler)
    h.defineMappings()
    return h
}

When we call this constructor it will instantiate a myHandler type and call the defineMappings() function. After that it will return the instance we created. What does the type we instantiate look like?

type myHandler struct {
    // holds the mapping
    mux map[string]func(http.ResponseWriter, *http.Request)
}

As you can see, we define a struct with a mux variable as a map. This map will contain our mapping between a request path and a function that can handle the request. In the constructor we also called the defineMappings function. This function, which is defined on our myHandler struct, looks like this:

func (my *myHandler) defineMappings() {
    session, err := mgo.Dial("localhost")
    if err != nil {
        panic(err)
    }

    // make the mux
    my.mux = make(map[string]func(http.ResponseWriter, *http.Request))

    // matching of request path
    my.mux["/hello"] = requestHandler1
    my.mux["/get"] = my.wrap(requestHandler2, session)
}

In this (badly named) function we define the mapping between incoming requests and a specific function that handles the request. In this function we also create a session to mongoDB using the mgo.Dial function. As you can see, we define the request handlers in two different ways. The handler for "/hello" directly points to a function, and the handler for the "/get" path points to a wrap function which is also defined on the myHandler struct:

func (my *myHandler) wrap(target func(http.ResponseWriter, *http.Request, *mgo.Session), mongoSession *mgo.Session) func(http.ResponseWriter, *http.Request) {
    return func(resp http.ResponseWriter, req *http.Request) {
        target(resp, req, mongoSession)
    }
}

This is a function which returns a function. The reason we do this is that we also want to pass our mongo session into the request handler. So we create a custom wrapper function, which has the correct signature, and just pass each call on to a function where we also provide the mongo session. (Note that we also could have changed the ServeHTTP implementation we explain below.) Finally we define the ServeHTTP function on our struct.
This function is called whenever we receive a request:

func (my *myHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    if h, ok := my.mux[r.URL.String()]; ok {
        // handle paths that are found
        h(w, r)
        return
    } else {
        // handle unhandled paths
        io.WriteString(w, "My server: "+r.URL.String())
    }
}

In this function we check whether we have a match in our mux variable. If we do, we call the configured handler function. If not, we just respond with a simple string.

The request handler functions
Let’s start with the handler function which handles the "/hello" path:

func requestHandler1(w http.ResponseWriter, r *http.Request) {
    io.WriteString(w, "Hello world!")
}

Couldn’t be easier. We simply write a specific string as the HTTP response. The "/get" path is more interesting:

func requestHandler2(w http.ResponseWriter, r *http.Request, mongoSession *mgo.Session) {
    c1 := make(chan string)
    c2 := make(chan string)
    c3 := make(chan string)

    go query("AAPL", mongoSession, c1)
    go query("GOOG", mongoSession, c2)
    go query("MSFT", mongoSession, c3)

    select {
    case data := <-c1:
        io.WriteString(w, data)
    case data := <-c2:
        io.WriteString(w, data)
    case data := <-c3:
        io.WriteString(w, data)
    }
}

// runs a query against mongodb
func query(ticker string, mongoSession *mgo.Session, c chan string) {
    sessionCopy := mongoSession.Copy()
    defer sessionCopy.Close()
    collection := sessionCopy.DB("akka").C("stocks")
    var result bson.M
    collection.Find(bson.M{"Ticker": ticker}).One(&result)

    asString, _ := json.MarshalIndent(result, "", " ")

    amt := time.Duration(rand.Intn(120))
    time.Sleep(time.Millisecond * amt)
    c <- string(asString)
}

What we do here is make use of the channel functionality of go to run three queries at the same time. We get the ticker information for AAPL, GOOG and MSFT and send each result to its own channel. When we receive a response on one of the channels we return that response. So each time we call this service we either get the results for AAPL, for GOOG or for MSFT.

With that we conclude this first step into go-lang :)

Reference: Using Go to build a REST service on top of mongoDB from our JCG partner Jos Dirksen at the Smart Java blog.
Quick Start: Spring Boot and WildFly 8.2 on OpenShift

A really “Quick Start” with Spring Boot, WildFly and OpenShift, as opposed to my last, more descriptive article.

Prerequisite
Before we can start building the application, we need to have an OpenShift free account and the client tools installed.

Step 1: Create WildFly application
To create an application using the client tools, type the following command:

rhc create-app <app-name> jboss-wildfly-8 --scaling

This command creates an application using the WildFly 8.2 cartridge with the scalability option and clones the repository to the <app-name> directory.

Step 2: Delete Template Application Source code
OpenShift creates a template project that can be freely removed:

git rm -rf .openshift deployments src README.md pom.xml

Commit the changes:

git commit -am "Removed template application source code"

Step 3: Pull Source code from GitHub

git remote add upstream https://github.com/kolorobot/openshift-wildfly-spring-boot.git
git pull -s recursive -X theirs upstream master

Step 4: Push changes
The basic template is ready to be pushed to OpenShift:

git push

The initial deployment (build and application startup) will take some time (up to several minutes). Subsequent deployments are a bit faster. You can now browse to http://<app-name>-<domain>.rhcloud.com and you should see the home page with the form.

Reference: Quick Start: Spring Boot and WildFly 8.2 on OpenShift from our JCG partner Rafal Borowiec at the Codeleak.pl blog.
Plug in Policies Into JBoss Apiman

The JBoss apiman project just released 1.0.3.Final this week. It’s mostly a bug fix release, with just a couple of relatively minor improvements. One particular feature that made its way into the framework since I last blogged about it is support for plugins. Those plugins can easily be added to the system in order to provide additional functionality.

Add Policies As Plugins
Currently the only functionality that can be contributed through the plugin framework is new policies. Fortunately policies are also the most important aspect of apiman, as they are responsible for doing all of the important work at runtime.

Creating a Plugin
An apiman plugin is basically a Java web archive (WAR) with a little bit of extra sauce. This approach makes it very easy to build using Maven, and should be quite familiar to most Java developers. Because a plugin consists of some resource files, compiled Java classes, front-end resources such as HTML and JavaScript, and dependencies in the form of JARs, the WAR format is a natural choice. If you want to give it a try yourself, make sure to dig through the extensive documentation in the developer guide. The following video walks you through it quickly.

How To Run Apiman
There is a very handy quickstart available, which allows you to build, deploy and start apiman on WildFly with a single command:

$ mvn clean install -Pinstall-all-wildfly8
$ cd tools/server-all/target/wildfly-8.1.0.Final/
$ ./bin/standalone.sh

Make sure to also read my previous blog posts about API Management with apiman:

API Management in WildFly 8.1 with Overlord
Kickstart on API Management with JBoss Apiman 1.0

You can follow @apiman_io and chat with the team on IRC.

Reference: Plug in Policies Into JBoss Apiman from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog.
Tips for Effective Session Submissions at Technology Conferences

Several of us go through the process of submitting talks to technology conferences. This requires thinking of a topic that you deem worthy of a presentation. Deciding on a topic could be a blog post by itself, but once the topic is selected it involves creating a title and an abstract that will hopefully get selected. The dreaded task of preparing the slides and demos after that is a secondary story, but this blog will talk about the dos and don’ts of an effective session submission that can improve your chances of acceptance.

What qualifies me to write this blog? I’ve been speaking for 15+ years, in ~40 countries around the world, at a variety of technology conferences. In the early years, this involved submitting a session at a conference, getting rejected or accepted, and then speaking at some of the conferences. The sessions were reviewed by a Program Committee, which is typically a group of people, expert in their domain, who help shape the conference agenda. For the past several years, I’ve participated in Program Committees of several conferences, either as an individual member or leading the track with multiple individual members.

Now, I’ve had my share of rejects, and still get rejected at conferences. There are multiple reasons for that, such as too many sessions by a speaker, a more compelling abstract by another speaker, the Program Committee looking for a real-life practitioner, and others. But the key part is that these rejects never let me down. I do miss the opportunity to talk to the attendees at that conference though. For example, I’ve had rejects from a conference three years in a row but got accepted in the fourth year. And hopefully I will get invited again this year!

Let’s see what I practice when writing a session title/abstract, and what I expect from other sessions when I’m part of the Program Committee.

Tips for Effective Session Submissions

No product pitches – In a technology-focused conference, any product, marketing or seemingly market-ish talk is put at the bottom of the list, or basically rejected right away. Most vendors have their product specific conference and such talks are better suited there.

Catchy title – The title is typically 50-80 characters that explain what your talk is all about. Make sure your title is catchy and conveys the intention. The Program Committee will read through the entire submission but is more likely to look at yours first if the title is catchy. Attendees are more likely to read the abstract, and possibly attend the talk, if they like the title. Some more points on the title:

Politically correct language – Don’t lean towards making the title arcane, and definitely don’t use any foul language. You must remember that the Program Committee has both male and female members and people from different cultures. Certain words may be appropriate in a particular culture but not so on a global level. So make sure you check the global political correctness of the title before picking the words.

Use numbers, if possible – Instead of saying “Tips for Java EE 7”, use “50 Tips in 50 Minutes for Java EE 7”. This talk got me a JavaOne 2013 Rockstar Award. Now this was not entirely due to the title, but I’ve seen a few other talks with similar titles at JavaOne 2014. I guess the formula works! And there is something about numbers and how the human brain operates: if something is more quantified then you are more likely to pay attention to it!

Coherent Abstract – The abstract is typically 500-1500 characters, sometimes more than that, that describes what you are going to do in your session.
Session abstracts can differ based upon what is being presented, but typically as a submitter I divide the abstract into three parts – set up/define the problem space, show what will be presented (preferably with an outline), and then the lessons learned by the attendees. I also include any demos, case studies, or customer/partner participation that will be part of the talk. As a Program Committee member, I’m looking at similar points and at how the title/abstract will fit in the overall rhythm of the conference. Some additional points about the abstract, since that is where most of the important information is available:

WIIFM (What’s In It For Me) – Prepare an abstract that will allow the attendees to connect with you. Is this something that they may care about? Something that they face in their daily life? Think: if you were an attendee, would you be interested in attending this session after reading the abstract? Think WIIFM from the attendee’s perspective.

Use all the characters – Conferences have different character limits for pitching your abstract. The reviewers may not know you or your product at all and you get N characters to pitch your idea. Make sure to use all of them, to the last Nth character.

Review your abstract – Make sure to read your abstract multiple times to ensure that you are giving all the relevant information. Think through your presentation and see if you are leaving out any important aspects. Also check whether the abstract has any redundant information that is not required by the reviewers. You can also consider getting your abstract peer reviewed. I’m always happy to provide that service to my blog readers!

Coordinate within the team – Make sure to coordinate within your team before the submission – multiple sessions from the same team or company do not ensure that the best speaker is picked. In such cases we rely upon your “google presence” and/or the review committee’s prior knowledge of the speaker. It’s unfortunate if the selected speaker is not the most appropriate one. Make sure you don’t write an essay here, or at least provide a TLDR; version. Just pick the three most important aspects of your session and highlight them.

Hands-on Labs – Hands-on labs are where attendees sit through a session of two to four hours and learn a tool, build/debug/test an application, practice some methodology, or do something else in a hands-on manner. Make sure you clearly highlight the flow of the lab, down to every 30 minutes if possible, and the end goal, such as “Attendees will build an end-to-end Java EE 7 application using X, Y, Z” or “Attendees will learn the tools and techniques for adopting DevOps in their team”. A broad outline of the content is still very important so that the Program Committee can understand the attendees’ experience.

Appropriate track – Typically conferences have multiple tracks, and as a submitter you typically pick one as a primary track, and possibly another as a secondary. Give yourself time to read through the track descriptions and choose the appropriate track for your talk. In some cases, the selected track may be inappropriate, either by accident or for some other reason. In that case, the Program Committee will try their best to recategorize the talk to an appropriate track if it needs to. But please ensure that you are filing it in the right track to have all the right eyeballs looking at it. It would be really unfortunate, for the speaker and the conference, if an excellent talk gets dropped because it was in the inappropriate track.
Use tags – Some conferences have the ability to apply tags to a submission. Feel free to use the existing tags, or create something that is more likely to be searched for by the Program Committee. This provides a different dissection of all the submissions, and possibly some more eyes on your submission.

First time speaker – If you are a newbie, or a first time presenter, then consider paying close attention to the CFP sections which give you an opportunity to toot your own horn. Make sure to include a URL of a video presentation that you have done elsewhere. If you have never presented at a public conference, or are speaking at this conference for the first time, then you can consider recording a technical presentation and uploading the video to YouTube or Vimeo. This will allow the Program Committee to know you slightly better. Links to a slideshare profile are recommended as well in this case. Very often the Program Committee members will google the speaker, so make sure your social profiles, at least Twitter and LinkedIn, are up to date. Please don’t say “call me at xxx-xxx-xxxx to find out the details”!

Run a spell checker – Make sure to run a spell checker on everything you submit as part of the session. Spelling mistakes turn off some of the Program Committee members, including myself! This will generally never be the sole criterion for rejection, but it shows a lack of attention and only makes us wonder about the quality of the session.

Never Give Up!
If your session does not get accepted, don’t give up and don’t take it personally. Each conference has a limited number of session slots and typically the number of submissions is more, sometimes way more, than that. The Program Committee tries, to the best of their ability, to pick the right sessions that fit in the rhythm of the conference. You’ve done the hard work of preparing a compelling title/abstract; submit it at other conferences. At the least, try giving the talk at a local Java User Group and get feedback from the attendees there. You can always try out Virtual JUG as well for a more global audience.

Even though these tips are based upon my experience presenting and selecting sessions at technology conferences, most of them would be valid at other conferences as well. If your talk does get approved and you go through the process of creating compelling slides and sizzling demos, the attendees will always be a mixed bunch.

Enjoy, good luck, and happy conferencing! Any more tips to share?

Reference: Tips for Effective Session Submissions at Technology Conferences from our JCG partner Arun Gupta at the Miles to go 2.0 … blog.
A Vision of the Future of the Software Developer’s Platform

How will the developer’s platform change over the next three years? Will you still be using desktop-based development tools? Cloud-based software development options are getting more powerful, but will they completely replace the desktop?

For a certain set of developers, cloud-based software development tools will be a natural fit, and so we should expect migration to tools like Che. But the desktop will remain viable and vibrant well into the future: for many classes of problem, the desktop is the right solution. Of course, there will be grey areas. Some problems can be addressed equally well by desktop- and cloud-based solutions. For these sorts of problems, the choice of development tools may be, at least in part, a matter of developer preference. There will be other drivers, of course (the exact nature of which is difficult to speculate on). For this grey area, the ability for a software developer to pick and choose the tools that are most appropriate for the job is important. Further, the ability to mix and match development tool choices across a team will be a key factor.

I’ve spent a good part of the last few months working with a group of Eclipse developers to hammer out a vision for the future of the developer’s platform. Here’s what we came up with:

Our vision is to build leading desktop and cloud-based development solutions, but more importantly to offer a seamless development experience across them. Our goal is to ensure that developers will have the ability to build, deploy, and manage their assets using the device, location and platform best suited for the job at hand. Eclipse projects, the community, and ecosystem will all continue to invest in and grow desktop Eclipse. Full-function cloud-based developer tools delivered in the browser will emerge and revolutionize software development. Continued focus on quality and performance, out-of-the-box experience, Java 9, and first class Maven, Gradle, and JVM Languages support also figure prominently in our vision of a powerful developer’s platform.

To paraphrase:

Desktop Eclipse will remain dominant for the foreseeable future;
Cloud-based developer environments like Che and Orion will revolutionize software development;
Developers will be able to choose the most appropriate tools and environment;
Projects can move from desktop to cloud and back;
Desktop Eclipse developer tools will gain momentum;
The community will continue to invest in desktop Eclipse-based IDEs;
Java™ 9 will be supported;
Developer environments will have great support for Maven and Gradle;
Support for JVM languages will continue to improve; and
User experience will become a primary focus.

You’ve likely noticed that this is focused pretty extensively on Java development. This is not intended to exclude support for other programming languages, tools, and projects. As the expression goes, “a rising tide lifts all boats”: as we make improvements and shift focus to make Java development better, those improvements will have a cascading effect on everybody else.

My plan for the near future (Mars time-frame) is to get the Che project boot-strapped and latch onto that last bullet with regard to the desktop IDE: user experience. While user experience is an important consideration for most Eclipse projects, it needs to be a top focus. This vision of the future isn’t going to just happen. To succeed, we need organizations and individuals to step up and contribute.
I’m aware that project teams are stretched pretty thin right now and many of the things on the list will require some big effort to make happen. Our strategy, then, is to start small. I’m buoyed (in keeping with my sea metaphors) by the overwhelmingly positive response that we got when we turned line numbers on by default in some of the packages. I’ll admit that I don’t quite understand the excitement (it’s such an easy thing to toggle), but for many of our users this was a very big and important change. The curious thing is that, while the change was preceded by a lengthy and time-consuming discussion, making the actual change was relatively simple. My takeaway is that we can have some pretty big wins by doing some relatively small things.

With this in mind, I’ve been poking at an informal programme that I’ve been calling “Every Detail Matters” (I borrowed this name from the Gnome community). Every Detail Matters will initially tackle things like names and labels, default settings for preferences, documentation, and the website/download experience (I’ve set up an Every Detail Matters for Mars umbrella bug to capture the issues that I believe make up the success criteria). We’re also trying to tackle some relatively big things. The “installer problem” is one that I’m hopeful we’ll be able to address via the Oomph project. I’m also pretty excited by the prospect of having Eclipse release bits available from the Fedora software repository on GA day.

In parallel, we’ve launched a more formal Great Fixes for Mars competition with prizes for winning contributors of fixes that improve the Java development experience. Enter the Great Fixes for Mars skills competition; there’ll be prizes!

I’ll set up a BoF session at EclipseCon to discuss the vision and our strategy for making it real. It’d be great to see you there! I wrote about the Platform Vision in the November Eclipse newsletter.

Reference: A Vision of the Future of the Software Developer’s Platform from our JCG partner Wayne Beaton at the Eclipse Hints, Tips, and Random Musings blog.
What I’ve Learned After 15 Years as a Java Group Leader

After founding the Philadelphia Area Java Users’ Group in 2000 and leading it for 15 years, I’ve decided to resign my post and pass on leadership to someone else. It’s time. At our first meeting in a small and long-forgotten dot com, 35 Java developers came to eat pizza and listen to a presentation on XML and JAXP. Since then we’ve had about 100 events (a few with 200 attendees) and a mailing list that peaked around 1,500 members. My experience running this group has revealed some patterns that may be useful for other user group leaders (or those looking to start one), ideas for speakers, and observations on the career paths of many members I’ve known for an extended period of time. Some thoughts.

Members

Topic suggesters and early adopters – A group of roughly ten members regularly suggested meeting topics then unfamiliar to me, but became widely popular a few years later. I relied on this group heavily for ideas at different times, and many of the suggestions were a bit beyond the scope for a JUG. Early on I typically rejected non-Java/JVM topic suggestions, so many of these meetings never developed. Consecutive meetings on Scala and Clojure in 2009 come to mind as an example of being timed ahead of popular adoption. These ten members included both experienced developers and even a couple who were then only computer science undergrad students. Without exception, the career paths of these specific individuals have been noticeably positive relative to the average career path, and more than half have been promoted from developer to CTO or equivalent. I believe all of this small group have now gone on to using other languages as their primary tool, and all still code regularly.

Language migration – Of the overall membership, the vast majority (perhaps 80%+) are still using Java as their primary language by day. About 5% now focus on mobile development. Another 5% combined currently focus on Python or Ruby, and maybe 3% work in functional languages (Scala and Clojure).

Age – Although I don’t have data, it’s fairly clear that the average age of a member or meeting attendee has increased over the years even as the member roster changed. The group has received far fewer membership requests over the past five years than in the past, and new members are less likely to be fresh graduates than they were in the group’s early days.

Groups sense overhyped technologies – Some of our meeting topics were technologies or products that were heavily marketed through multiple channels (conferences, speaker tours, newsletters) at the time, yet failed to gain traction. Many that RSVP’d to these meetings commented on their suspicions, and some admitted to a desire to attend in order to poke some holes in the hype.

Regulars – At any given meeting, about 50% of the attendees were regulars that attended almost every event regardless of their specific interest in that evening’s topic. Many of these people also regularly attend events held by other groups.

Presenters

Speaker name recognition – This should surprise no one, but our three largest events by far were all with speakers who had a fair amount of celebrity and industry credibility. These were open source advocate/author Eric ‘ESR’ Raymond (YouTube link), Spring framework creator Rod Johnson, and a joint meeting with Hibernate author Gavin King and JBoss founder Marc Fleury. We had Johnson, King and Fleury around the height of their products’ popularity, and ESR (who is not a figure specific to Java) in 2012.
Each event was SRO, with many more in attendance than had RSVP’d. The next level of attendance was for speakers who had either founded a company or created a product/tool, but perhaps did not have top-tier name recognition. We had eleven meetings of this nature (including the three mentioned), most drawing large crowds (150). For speakers without a product that were relatively unknown, the strength of a bio definitely impacted attendance. Current job title, employer name recognition, overall industry experience, past speaking experience, and even academic credentials clearly influenced our members.

Local speakers were our lifeblood – About 80% of our speakers lived within an hour’s drive of our meeting space. We had four presenters who combined for fifteen meetings, and another eleven who all spoke twice. Fifteen local speakers delivered almost 40% of our presentations.

Speakers benefit from presenting – Several of our local speakers have shared anecdotes of being discovered by an employer or new client through a JUG presentation. Even though we did not allow recruiting or sales/marketing people at events, most speakers are easy to contact. Speaking allowed some members to start building a ‘brand’ and increased visibility in the tech community.

The best way to sell is by not selling – Our official policy was to forbid pure product demos, but we knew that when a company flies out their ‘evangelist’ and buys pizza for 150 people, we’re getting at least a minimal level of demo. Speakers who dove into a sales pitch early on would almost always see members start to leave, a few times in droves. The evangelists who were most effective in keeping an audience often used a similar presentation style where the first hour defined a problem and how it can be solved, and concluded with something like “My presentation is over. You can leave now, or if you want to stay another 15 minutes I’ll show you how our product solves the problem.” This usually led to discussions with the speaker that lasted beyond the meeting schedule, and sales. Making the demo optional and at the very end is a successful tactic.

Accomplished technologists aren’t all great speakers – A strong biography and list of accomplishments does not always result in a strong presentation. We were lucky that most of our speakers were quite good, but we did have at least a few disappointments from those active on the conference speaker circuit.

Sponsors

Ask everyone for something – Companies willing to spend money on sponsorship and travel costs clearly understand the value of goodwill and community visibility. There are also those that want to get that visibility and goodwill for free, and they ask leaders to announce a conference or product offering as a favor or “for the good of the community”. These requests are an opportunity to add additional value for the members. Instead of complying with these requests, I would always request something in return. For conference announcements, I would ask for a discount code for members or a free pass to raffle off. Sponsors with a product might be asked for a license to give away, or at worst some swag. Most were willing to barter in exchange for their request being met.

Maintain control – Some sponsors clearly just want your membership roster and email addresses, which they may try to acquire by using the “fishbowl business card drawing” approach to raffles, sign-in sheets, speaker review forms, surveys, or perhaps through a bold request.
Don’t sell your members’ private information to sponsors under any circumstances, and companies will still be willing to sponsor if you deny their attempts. We allowed a business card drawing once for an iPad, and all members were aware that if they decided to enter that drawing they would likely be getting a call from the vendor.

Reference: What I’ve Learned After 15 Years as a Java Group Leader from our JCG partner Dave Fecak at the Job Tips For Geeks blog.
Openshift: Build Spring Boot application on Wildfly 8.2.0 with Java 8

The OpenShift DIY cartridge is a great way to test unsupported languages on OpenShift. But it is not scalable (you can vote for a scalable DIY cartridge here), which makes it hard to use with production grade Spring Boot applications. But what if we deployed a Spring Boot application to WildFly Application Server? Spring Boot can run with an embedded servlet container like Tomcat or the much faster Undertow, but it can also be deployed to a standalone application server. This means that it can also be deployed to the WildFly application server that is supported by OpenShift. Let’s see how easy it is to get started creating a Spring Boot application from scratch and deploying it to WildFly 8.2 on OpenShift.

Note: While browsing the OpenShift documentation one might think that only WildFly 8.1 and Java 7 are supported (as of the time of writing this blog post). Fortunately this is not true anymore: WildFly 8.2 and Java 8 work fine and are in fact the default! This was the first time I was happy about documentation being outdated.

Update: If you are looking for a quick start, without the step by step walkthrough, have a look here: Quick Start: Spring Boot and WildFly 8.2 on OpenShift

Prerequisite
Before you can start building the application, you need to have an OpenShift free account and client tools (rhc) installed.

Create WildFly application
To create a WildFly application using the client tools, type the following command:

rhc create-app boot jboss-wildfly-8 --scaling

The jboss-wildfly-8 cartridge is described as WildFly Application Server 8.2.0.Final. The scaling option is used as it will be impossible to set it later (vote here). When the application is created you should see the username and password for an administration user created for you. Please store these credentials to be able to log in to the WildFly administration console.

Template Application Source code
OpenShift creates a template project. The project is a standard Maven project. You can browse through pom.xml and see that Java 8 is used by default for this project. In addition, there are two non-standard folders created: deployments, which is used to put the resulting archive into, and .openshift with OpenShift specific files. Please note .openshift/config. This is the place where the WildFly configuration is stored.

Spring Boot dependencies
Spring IO Platform will be used for dependency management. The main advantage of using Spring IO Platform is that it simplifies dependency management by providing versions of Spring projects along with their dependencies that are tested and known to work together. Modify the pom.xml by adding:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>io.spring.platform</groupId>
            <artifactId>platform-bom</artifactId>
            <version>1.1.1.RELEASE</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

Now, Spring Boot dependencies can be added.
Please note that since the application will be deployed to WildFly, we need to explicitly remove the dependency on Tomcat:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>

Configure the application

Initialize the Spring Boot Application
Having all dependencies, we can add the application code. Create Application.java in the demo package. The Application class’s job is to initialize the Spring Boot application, so it must extend SpringBootServletInitializer and be annotated with @SpringBootApplication:

package demo;

import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.context.web.SpringBootServletInitializer;

@SpringBootApplication
public class Application extends SpringBootServletInitializer {

}

@Entity, @Repository, @Controller
Spring Data JPA, part of the larger Spring Data family, makes it easy to implement JPA based repositories. For those who are not familiar with the project please visit: http://projects.spring.io/spring-data-jpa/. The domain model for this sample project is just a Person with some basic fields:

@Entity
@Table(name = "people")
public class Person {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    protected Integer id;

    @Column(name = "first_name")
    @NotEmpty
    protected String firstName;

    @Column(name = "last_name")
    @NotEmpty
    protected String lastName;

    @Column(name = "address")
    @NotEmpty
    private String address;

    @Column(name = "city")
    @NotEmpty
    private String city;

    @Column(name = "telephone")
    @NotEmpty
    @Digits(fraction = 0, integer = 10)
    private String telephone;

}

The Person needs a @Repository, so we can create a basic one using Spring Data’s repository support. Spring Data repositories reduce much of the boilerplate code thanks to a simple interface definition:

@Repository
public interface PeopleRepository extends PagingAndSortingRepository<Person, Integer> {
    List<Person> findByLastName(@Param("lastName") String lastName);
}

With the domain model in place, some test data can be handy. The easiest way is to provide a data.sql file with the SQL script to be executed on application start-up. Create src/main/resources/data.sql containing initial data for the people table (see below). Spring Boot will pick up this file and run it against the configured Data Source. Since the Data Source used connects to an H2 database, the proper SQL syntax must be used:

INSERT INTO people VALUES (1, 'George', 'Franklin', '110 W. Liberty St.', 'Madison', '6085551023');

Having the Spring Data JPA repository in place, we can create a simple controller that exposes data over REST:

@RestController
@RequestMapping("people")
public class PeopleController {

    private final PeopleRepository peopleRepository;

    @Inject
    public PeopleController(PeopleRepository peopleRepository) {
        this.peopleRepository = peopleRepository;
    }

    @RequestMapping
    public Iterable<Person> findAll(@RequestParam Optional<String> lastName) {
        if (lastName.isPresent()) {
            return peopleRepository.findByLastName(lastName.get());
        }
        return peopleRepository.findAll();
    }
}

The findAll method accepts an optional lastName parameter that is bound to Java 8’s java.util.Optional.
Start page
The project generated by OpenShift during setup contains a webapp folder with some static files. These files can be removed and index.html can be modified:

<!DOCTYPE html>
<html>
<head lang="en">
    <meta charset="UTF-8">
    <title>OpenShift</title>
</head>
<body>
<form role="form" action="people">
    <fieldset>
        <legend>People search</legend>
        <label for="lastName">Last name:</label>
        <input id="lastName" type="text" name="lastName" value="McFarland"/>
        <input type="submit" value="Search"/>
    </fieldset>
</form>
<p>
    ... or: <a href="people">Find all ...</a>
</p>
</body>
</html>

It is just a static page, but I noticed that the application will not start if there is no default mapping (/) or if it returns a code different from 200. Normally, there will always be a default mapping.

Configuration
Create src/main/resources/application.properties and put the following values in it (the complete file is shown at the end of this post):

management.context-path=/manage: the Actuator’s default management context path is /. This is changed to /manage, because OpenShift exposes its own /health endpoint that covers Actuator’s /health endpoint.

spring.datasource.jndi-name=java:jboss/datasources/ExampleDS: since the application uses Spring Data JPA, we want to bind to the server’s Data Source via JNDI. Please look at .openshift/config/standalone.xml for other datasources. This is important if you wish to configure MySQL or PostgreSQL to be used with your application. Read more about connecting to a JNDI Data Source in Spring Boot here: http://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#boot-features-connecting-to-a-jndi-datasource

spring.jpa.hibernate.ddl-auto=create-drop: create the structure of the database based on the provided entities.

Deploying to OpenShift
The application is ready to be pushed to the repository. Commit your local changes and then push them to the remote:

git push

The initial deployment (build and application startup) will take some time (up to several minutes). Subsequent deployments are a bit faster. You can now browse to http://appname-yournamespace.rhcloud.com/ and you should see the form. Clicking search with the default value will get the record with id = 3:

[
    {
        "id": 3,
        "firstName": "2693 Commerce St.",
        "lastName": "McFarland",
        "address": "Eduardo",
        "city": "Rodriquez",
        "telephone": "6085558763"
    }
]

Navigating to http://appname-yournamespace.rhcloud.com/people will return all records from the database.

Going Java 7
If you want to use Java 7 in your project instead of the default Java 8, rename .openshift/markers/java8 to .openshift/markers/java7 and change pom.xml accordingly:

<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <maven.compiler.source>1.7</maven.compiler.source>
    <maven.compiler.target>1.7</maven.compiler.target>
    <maven.compiler.fork>true</maven.compiler.fork>
</properties>

Please note that maven.compiler.executable was removed. Don’t forget to change the @Controller’s code and make it Java 7 compatible.

Summary
In this blog post you learned how to configure a basic Spring Boot application and run it on OpenShift with WildFly 8.2 and Java 8. OpenShift scales the application with the web proxy HAProxy and takes care of automatically adding or removing copies of the application to serve requests as needed.

Resources
https://github.com/kolorobot/openshift-wildfly-spring-boot – source code for this blog post.

Reference: Openshift: Build Spring Boot application on Wildfly 8.2.0 with Java 8 from our JCG partner Rafal Borowiec at the Codeleak.pl blog.
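For reference, here is the complete application.properties assembled from the three entries described in the Configuration section above; nothing beyond those three values is assumed:

management.context-path=/manage
spring.datasource.jndi-name=java:jboss/datasources/ExampleDS
spring.jpa.hibernate.ddl-auto=create-drop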