
What's New Here?


Create a jar library with gradle using AAR info

Some posts ago, I talked about how to use Gradle to push an AAR to Maven Central. If you remember, we had to modify some files and so on, but the work we did helps other developers simplify their development when they want to use our code/library. When our code is pushed to Maven Central as an AAR, we can reuse it as a library simply by setting a Gradle dependency. For example, if we want to use Weatherlib we have to write:

dependencies {
    compile fileTree(dir: 'libs', include: ['*.jar'])
    compile 'com.android.support:appcompat-v7:19.+'
    compile 'com.survivingwithandroid:weatherlib:1.3.1'
}

Very simple! Anyway, this is true if we use Android Studio, but what if we use Eclipse or something else? In some cases it is easier to have a classic jar that can be imported into our Eclipse project, with a dependency in our project on this jar. If we use Android Studio this jar isn't created easily (AFAIK), and I had to slightly modify the build.gradle to create the jar. I looked around on the net and I found a solution that I re-adapted so that we can reuse the information stored in the properties files. If you remember from the post about AAR and Gradle (if not, look here), there are two properties files, shown here for simplicity:

POM_NAME=Android Weather Library
POM_ARTIFACT_ID=weatherlib
POM_PACKAGING=aar

and

VERSION_NAME=1.3.1
VERSION_CODE=6
GROUP=com.survivingwithandroid
POM_DESCRIPTION=Android Weather Lib
POM_URL=https://github.com/survivingwithandroid/WeatherLib
POM_SCM_URL=https://github.com/survivingwithandroid/WeatherLib
POM_SCM_CONNECTION=scm:git@github.com:survivingwithandroid/weatherlib.git
POM_SCM_DEV_CONNECTION=scm:git@github.com:survivingwithandroid/weatherlib.git
POM_LICENCE_NAME=The Apache Software License, Version 2.0
POM_LICENCE_URL=http://www.apache.org/licenses/LICENSE-2.0.txt
POM_LICENCE_DIST=repo
POM_DEVELOPER_ID=survivingwithandroid
POM_DEVELOPER_NAME=Francesco Azzola

So I would like to use this information to create a jar whose name combines the POM_ARTIFACT_ID with the VERSION_NAME, and this jar should be created under a specific directory. So we have to add, under the android section in build.gradle:

android {
    ...
    sourceSets {
        main {
            java { srcDir 'src/main/java' }
            resources { srcDir 'src/../lib' }
        }
    }
    ..
}

and after the dependencies section:

task clearJar(type: Delete) {
    delete 'build/libs/' + POM_ARTIFACT_ID + '_' + VERSION_NAME + '.jar'
}

task makeJar(type: Copy) {
    from('build/bundles/release/')
    into('release/')
    include('classes.jar')
    rename('classes.jar', POM_ARTIFACT_ID + '_' + VERSION_NAME + '.jar')
}

makeJar.dependsOn(clearJar, build)

Now if you run the task makeJar, Android Studio will create a jar under the directory called release. If you want the build.gradle file you can get it here.

Reference: Create a jar library with gradle using AAR info from our JCG partner Francesco Azzola at the Surviving w/ Android blog....

Playing with Java 8 – Lambdas, Paths and Files

I needed to read a whole bunch of files recently and, instead of just grabbing my old FileUtils.java that I (and probably most developers) have and copying it from project to project, I decided to have a quick look at how else to do it… Yes, I know there is Commons IO and Google IO, so why would I even bother? They probably do it better, but I wanted to check out the NIO JDK classes and play with lambdas as well… and to be honest, I think this actually ended up being a very neat bit of code.

So I had a specific use case: I wanted to read all the source files from a whole directory tree, line by line.

What this code does: it uses Files.walk to recursively get all the paths from the starting point and creates a stream, which I then filter to only files that end with the required extension. For each of those files, I use Files.lines to create a stream of Strings, one per line. I trim that, filter out the empty ones and add them to the return collection. All very concise thanks to the new constructs.

package net.briandupreez.blog.java8.io;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.file.FileVisitOption;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

/**
 * RecursiveFileLineReader
 * Created by Brian on 2014-05-26.
 */
public class RecursiveFileLineReader {

    private transient static final Log LOG = LogFactory.getLog(RecursiveFileLineReader.class);

    /**
     * Get all the non empty lines from all the files with the specific extension, recursively.
     *
     * @param path the path to start recursion
     * @param extension the file extension
     * @return list of lines
     */
    public static List<String> readAllLineFromAllFilesRecursively(final String path, final String extension) {
        final List<String> lines = new ArrayList<>();
        try (final Stream<Path> pathStream = Files.walk(Paths.get(path), FileVisitOption.FOLLOW_LINKS)) {
            pathStream
                    .filter((p) -> !p.toFile().isDirectory() && p.toFile().getAbsolutePath().endsWith(extension))
                    .forEach(p -> fileLinesToList(p, lines));
        } catch (final IOException e) {
            LOG.error(e.getMessage(), e);
        }
        return lines;
    }

    private static void fileLinesToList(final Path file, final List<String> lines) {
        try (Stream<String> stream = Files.lines(file, Charset.defaultCharset())) {
            stream
                    .map(String::trim)
                    .filter(s -> !s.isEmpty())
                    .forEach(lines::add);
        } catch (final IOException e) {
            LOG.error(e.getMessage(), e);
        }
    }
}

Reference: Playing with Java 8 – Lambdas, Paths and Files from our JCG partner Brian Du Preez at the Zen in the art of IT blog....
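As a quick follow-up sketch of my own (not from the article), here is how the utility above might be called; the directory path and extension are made up for illustration:

import java.util.List;

public class RecursiveFileLineReaderExample {

    public static void main(final String[] args) {
        // Hypothetical starting directory and extension - adjust to your own project layout.
        final List<String> lines =
                RecursiveFileLineReader.readAllLineFromAllFilesRecursively("src/main/java", ".java");

        // Print how many non-empty lines were collected and show the first few of them.
        System.out.println("Collected " + lines.size() + " non-empty lines");
        lines.stream().limit(5).forEach(System.out::println);
    }
}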

HTML5: Offline Upload of Images

I am currently working on an application which needs to work offline. This has the beneficial side effect that we use the different HTML5 storage capabilities. One of them is the File API, which we are using to store images locally – before queuing them for upload to a backend server. In this article, I will share some code showing how we did this. The example works in Google Chrome – for DOM manipulation I will use jQuery.

Starting simple, we have one file input for images and an area to show the uploaded images.

<body>
  <input type="file" accept="image/*" class="js-image-upload"/>
  <div class="js-image-container"></div>
</body>

When the user selects a file, we want to store the image.

$(document).on('change', '.js-image-upload', function (event) {
  var file = event.target.files[0];
  var fileName = createTempName(file);

  writeImage(fileName, file);
});

The image storage is handled by this method.

function writeImage(fileName, file) {
  getFileSystem(function (fileSystem) {
    fileSystem.root.getFile(fileName, {create: true}, function (fileEntry) {
      fileEntry.createWriter(function (fileWriter) {
        fileWriter.onwriteend = writeSuccessFull;
        fileWriter.onerror = errorFct;
        fileWriter.write(file);
      }, errorFct);
    });
  });
}

What is happening here?

• Retrieve the file system
• Create a file with the specified name in its root
• Create a writer for this file
• Configure a success and an error callback for when the asynchronous file write has happened
• Write the blob of the file using the writer

The retrieval of the file system is a two-step procedure. We need to request quota from the browser and then get the file system.

var SIZE = 100 * 1024 * 1024; // 100 MB
var getFileSystem = function (successFct) {
  navigator.webkitPersistentStorage.requestQuota(SIZE, function () {
    window.webkitRequestFileSystem(window.PERSISTENT, SIZE, successFct, errorFct);
  }, errorFct);
};

The user will be asked to grant the website access to persistent storage. There are some errors you can get, e.g. when the user does not accept our request. But let's assume the user trusts us. Then we want to react to the successful write and show the image. We can use a local file storage URL and add the file to a queue to upload the file to the server.

var showImage = function (fileName) {
  var src = 'filesystem:' + window.location.origin + '/persistent/' + fileName;
  var img = $('<img>').attr('src', src);
  $('.js-image-container').append(img);
};

var writeSuccessFull = function () {
  addToSyncQueue(fileName);
  showImage(fileName);
};

I'm omitting the queue logic here. You can keep a queue of images for upload in the web storage or IndexedDB of your application. To read the image from storage you can use something like this:

var readImage = function (fileName, successFct) {
  getFileSystem(function (fileSystem) {
      fileSystem.root.getFile(fileName, {}, function (fileEntry) {
        fileEntry.file(successFct, errorFct);
      }, errorFct);
    }
  );
};

So this is a brief overview of what we did here. The working example code can be found here: https://gist.github.com/jthoenes/3668856a188d600e02d6 Hope it has been useful to a few people dealing with similar issues. Feel free to ask questions if something pops up in your mind.

Reference: HTML5: Offline Upload of Images from our JCG partner Johannes Thones at the Johannes Thönes blog blog....

Rocking with mongodb on spring boot

I’m a fan of Spring Boot and here’s my mongodb example project on Spring Boot. Most of the mongodb example projects are so basic that you won’t go far with them. You can search for plain Spring Data examples, but they can get much more complex than you’d like. So here’s mine.

Here’s the pom I’ll use.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>caught.co.nr</groupId>
    <artifactId>boottoymongodb</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>war</packaging>

    <!-- Inherit defaults from Spring Boot -->
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.0.0.BUILD-SNAPSHOT</version>
    </parent>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-mongodb</artifactId>
        </dependency>
    </dependencies>

    <!-- Needed for fat jar -->
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>

    <repositories>
        <repository>
            <id>spring-snapshots</id>
            <name>Spring Snapshots</name>
            <url>http://repo.spring.io/snapshot</url>
            <snapshots>
                <enabled>true</enabled>
            </snapshots>
        </repository>
    </repositories>

    <pluginRepositories>
        <pluginRepository>
            <id>spring-snapshots</id>
            <url>http://repo.spring.io/snapshot</url>
        </pluginRepository>
    </pluginRepositories>
</project>

The only dependency I need is “spring-boot-starter-data-mongodb”, which contains all the necessary dependencies for a Spring Boot mongodb project.

Next is the model for my collection. The Document annotation points to my collection named “products”. It is needed only if your model name does not match your collection name. You can see a Field annotation which maps the field name in the collection to the model’s field name.

@Document(collection = "products")
public class Product {

    @Id
    private String id;
    private String sku;

    @Field(value = "material_name")
    private String materialName;

    private Double price;
    private Integer availability;

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public String getSku() { return sku; }
    public void setSku(String sku) { this.sku = sku; }
    public String getMaterialName() { return materialName; }
    public void setMaterialName(String materialName) { this.materialName = materialName; }
    public Double getPrice() { return price; }
    public void setPrice(Double price) { this.price = price; }
    public Integer getAvailability() { return availability; }
    public void setAvailability(Integer availability) { this.availability = availability; }

    @Override
    public String toString() {
        return "Product{" + "id='" + id + '\'' + ", sku='" + sku + '\'' + ", materialName='" + materialName + '\'' +
                ", price=" + price + ", availability=" + availability + '}';
    }
}

Now we will need a DAO layer to manipulate the data. MongoRepository is the interface I should extend if I want to use autogenerated find methods in my DAO layer, and I want that. Every field of my model can be queried with these autogenerated methods. For a complete list of method name syntax check here. My query below will take a sku name, search my collection for this name and return the matching products.
public interface ProductRepository extends MongoRepository<Product, String> {
    public List<Product> findBySku(String sku);
}

Now I’ll introduce a Service which will call my DAO interface. But wait a minute, I didn’t implement this interface or write the necessary code for fetching the models, right? Yep, these methods are autogenerated and I don’t need an implementation for this interface.

@Service
public class ProductService {

    @Autowired
    private ProductRepository repository;

    public List<Product> getSku(String sku) {
        return repository.findBySku(sku);
    }
}

Next, let’s launch our Boot example. Here’s our main class:

@Configuration
@EnableAutoConfiguration
@ComponentScan
public class BootMongoDB implements CommandLineRunner {

    @Autowired
    private ProductService productService;

    private static final Logger logger = LoggerFactory.getLogger(BootMongoDB.class);

    public void run(String... args) throws Exception {
        List<Product> sku = productService.getSku("NEX.6");
        logger.info("result of getSku is {}", sku);
    }

    public static void main(String[] args) throws Exception {
        SpringApplication.run(BootMongoDB.class, args);
    }
}

If you have a connection to a mongodb instance and a sku matching the name you searched for, then you should see one or more Products as a result.

What we did was quite basic. What if I want more complex queries? For instance, what if I want a specific sku with an availability equal to "1"? I can’t do that without using some @Query magic. So I’m updating my DAO interface.

public interface ProductRepository extends MongoRepository<Product, String> {

    public List<Product> findBySku(String sku);

    @Query(value = "{sku: ?0, availability : 1}")
    public List<Product> findBySkuOnlyAvailables(String sku);
}

I provided a direct query for mongodb, where the sku in the signature of my method will be inserted at "?0" in the query and sent to mongodb. You can update your Service and then your main method to see if it works.

You may not like writing queries like this, which are not very readable if you’re not familiar with mongodb’s syntax. Then this is the time to add custom DAO classes. It’s not possible to add and use methods other than the autogenerated ones in ProductRepository. So we will add a few classes and end up with nicer methods.

Our repository class was named “ProductRepository”. We will add a new interface named “ProductRepositoryCustom” and a new method which will find the available skus for the given name (the twin of the findBySkuOnlyAvailables method).

public interface ProductRepositoryCustom {
    public List<Product> findBySkuOnlyAvailablesCustom(String sku);
}

Then provide an implementation for it. Below you see that we inject a MongoTemplate into the implementation and do the work with it. We create two criteria: the first one is for the sku name and the second one is for the availability.

public class ProductRepositoryImpl implements ProductRepositoryCustom {

    @Autowired
    private MongoTemplate mongoTemplate;

    public List<Product> findBySkuOnlyAvailablesCustom(String sku) {
        Criteria criteria = Criteria.where("sku").is(sku).
                andOperator(Criteria.where("availability").is(1));
        return mongoTemplate.find(Query.query(criteria), Product.class);
    }
}

The last step for the custom implementation is the update of the ProductRepository interface. As you can see below, the only update I need is the addition of my ProductRepositoryCustom, so we can link both of them together. All this naming can sound a little stupid.
But notice that although the name of your custom interface is not important, a change in the name of the implementation will result in an exception being thrown: Invocation of init method failed; nested exception is org.springframework.data.mapping.PropertyReferenceException: No property only found for type String! Traversed path: Product.sku. To fix this, make sure that the name of your implementation class is “ProductRepositoryImpl”, which is the concatenation of the name of the interface that extends MongoRepository and “Impl”.

public interface ProductRepository extends MongoRepository<Product, String>, ProductRepositoryCustom

If we add our new method to our Service layer:

@Service
public class ProductService {

    @Autowired
    private ProductRepository repository;

    public List<Product> getSku(String sku) {
        return repository.findBySku(sku);
    }

    public List<Product> getAvailableSkuCustom(String sku) {
        return repository.findBySkuOnlyAvailablesCustom(sku);
    }
}

Then update our main class’ run method:

public void run(String... args) throws Exception {
    List<Product> sku = productService.getSku("NEX.6");
    logger.info("result of getSku is {}", sku);

    List<Product> availableSkuCustom = productService.getAvailableSkuCustom("NEX.6");
    logger.info("result of availableSkuCustom is {}", availableSkuCustom);
}

Again you should see something in the log! You can check the whole project on github.

Reference: Rocking with mongodb on spring boot from our JCG partner Sezin Karli at the caught Somewhere In Time = true; blog....
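A small follow-up sketch of my own, not part of the original project: Spring Data's method-name derivation can usually express the same "available items for a sku" filter without a handwritten JSON query, assuming the availability value is passed in as a parameter:

import java.util.List;

import org.springframework.data.mongodb.repository.MongoRepository;

public interface ProductRepository extends MongoRepository<Product, String>, ProductRepositoryCustom {

    List<Product> findBySku(String sku);

    // Hypothetical derived query: Spring Data builds {sku: ?0, availability: ?1} from the method name,
    // so findBySkuAndAvailability("NEX.6", 1) should match what the @Query version above returns.
    List<Product> findBySkuAndAvailability(String sku, Integer availability);
}

Whether you prefer this or the explicit @Query is mostly a readability call; the derived name keeps the filter in Java, while @Query keeps it in mongodb’s own syntax.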

Implementing correlation ids in Spring Boot (for distributed tracing in SOA/microservices)

After attending Sam Newman’s microservice talks at Geecon last week I started to think more about what is most likely an essential feature of service-oriented / microservice platforms for monitoring, reporting and diagnostics: correlation ids. Correlation ids allow distributed tracing within complex service-oriented platforms, where a single request into the application can often be dealt with by multiple downstream services. Without the ability to correlate downstream service requests it can be very difficult to understand how requests are being handled within your platform.

I’ve seen the benefit of correlation ids in several recent SOA projects I have worked on, but as Sam mentioned in his talks, it’s often very easy to think this type of tracing won’t be needed when building the initial version of the application, but then very difficult to retrofit into the application when you do realise the benefits (and the need for it!). I’ve not yet found the perfect way to implement correlation ids within a Java/Spring-based application, but after chatting to Sam via email he made several suggestions which I have now turned into a simple project using Spring Boot to demonstrate how this could be implemented.

Why?

During both of Sam’s Geecon talks he mentioned that in his experience correlation ids were very useful for diagnostic purposes. Correlation ids are essentially an id that is generated and associated with a single (typically user-driven) request into the application and that is passed down through the stack and on to dependent services. In SOA or microservice platforms this type of id is very useful, as requests into the application typically are ‘fanned out’ or handled by multiple downstream services, and a correlation id allows all of the downstream requests (from the initial point of request) to be correlated or grouped based on the id. So-called ‘distributed tracing’ can then be performed using the correlation ids by combining all the downstream service logs and matching the required id to see the trace of the request throughout your entire application stack (which is very easy if you are using a centralised logging framework such as logstash).

The big players in the service-oriented field have been talking about the need for distributed tracing and correlating requests for quite some time, and as such Twitter have created their open source Zipkin framework (which often plugs into their RPC framework Finagle), and Netflix has open-sourced their Karyon web/microservice framework, both of which provide distributed tracing. There are of course commercial offerings in this area, one such product being AppDynamics, which is very cool, but has a rather hefty price tag.

Creating a proof-of-concept in Spring Boot

As great as Zipkin and Karyon are, they are both relatively invasive, in that you have to build your services on top of the (often opinionated) frameworks. This might be fine for some use cases, but not so much for others, especially when you are building microservices. I’ve been enjoying experimenting with Spring Boot of late, and this framework builds on the much known and loved (at least by me!) Spring framework by providing lots of preconfigured sensible defaults. This allows you to build microservices (especially ones that communicate via RESTful interfaces) very rapidly. The remainder of this blog post explains how I implemented a (hopefully) non-invasive way of implementing correlation ids.
Goals

• Allow a correlation id to be generated for an initial request into the application
• Enable the correlation id to be passed to downstream services, using a method that is as non-invasive to the code as possible

Implementation

I have created two projects on GitHub, one containing an implementation where all requests are being handled in a synchronous style (i.e. the traditional Spring approach of handling all request processing on a single thread), and also one for when an asynchronous (non-blocking) style of communication is being used (i.e. using the Servlet 3 asynchronous support combined with Spring’s DeferredResult and Java’s Futures/Callables). The majority of this article describes the asynchronous implementation, as this is more interesting:

• Spring Boot asynchronous (DeferredResult + Futures) communication correlation id Github repo

The main work in both code bases is undertaken by the CorrelationHeaderFilter, which is a standard Java EE Filter that inspects the HttpServletRequest header for the presence of a correlationId. If one is found then we set a ThreadLocal variable in the RequestCorrelation class (discussed later). If a correlation id is not found then one is generated and added to the RequestCorrelation class:

public class CorrelationHeaderFilter implements Filter {

    //...

    @Override
    public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse, FilterChain filterChain)
            throws IOException, ServletException {

        final HttpServletRequest httpServletRequest = (HttpServletRequest) servletRequest;
        String currentCorrId = httpServletRequest.getHeader(RequestCorrelation.CORRELATION_ID_HEADER);

        if (!currentRequestIsAsyncDispatcher(httpServletRequest)) {
            if (currentCorrId == null) {
                currentCorrId = UUID.randomUUID().toString();
                LOGGER.info("No correlationId found in Header. Generated : " + currentCorrId);
            } else {
                LOGGER.info("Found correlationId in Header : " + currentCorrId);
            }

            RequestCorrelation.setId(currentCorrId);
        }

        filterChain.doFilter(httpServletRequest, servletResponse);
    }

    //...

    private boolean currentRequestIsAsyncDispatcher(HttpServletRequest httpServletRequest) {
        return httpServletRequest.getDispatcherType().equals(DispatcherType.ASYNC);
    }

The only thing in this code that may not instantly be obvious is the conditional check currentRequestIsAsyncDispatcher(httpServletRequest), but this is here to guard against the correlation id code being executed when the Async Dispatcher thread is running to return the results (this is interesting to note, as I initially didn’t expect the Async Dispatcher to trigger the execution of the filter again!).

Here is the RequestCorrelation class, which contains a simple ThreadLocal<String> static variable to hold the correlation id for the current thread of execution (set via the CorrelationHeaderFilter above):

public class RequestCorrelation {

    public static final String CORRELATION_ID = "correlationId";

    private static final ThreadLocal<String> id = new ThreadLocal<String>();

    public static String getId() {
        return id.get();
    }

    public static void setId(String correlationId) {
        id.set(correlationId);
    }
}

Once the correlation id is stored in the RequestCorrelation class it can be retrieved and added to downstream service requests (or data store access etc.) as required by calling the static getId() method within RequestCorrelation.
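A small sketch of my own (not part of the original filter code): since centralised logging such as logstash is mentioned above, one non-invasive way to get the id into every log line is to also push it into SLF4J's MDC at the same point where the ThreadLocal is set, assuming SLF4J with a logback or log4j backend is on the classpath:

import org.slf4j.MDC;

public final class CorrelationMdc {

    private static final String MDC_KEY = "correlationId"; // hypothetical MDC key name

    private CorrelationMdc() {
    }

    // Call right after RequestCorrelation.setId(...) in the filter.
    public static void bind(final String correlationId) {
        MDC.put(MDC_KEY, correlationId);
    }

    // Call in a finally block once the request has completed, to avoid leaking ids across pooled threads.
    public static void clear() {
        MDC.remove(MDC_KEY);
    }
}

A logging pattern containing %X{correlationId} would then print the id automatically for every statement logged on that thread.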
It is probably a good idea to encapsulate this behaviour away from your application services, and you can see an example of how to do this in a RestClient class I have created, which composes Spring’s RestTemplate and handles the setting of the correlation id within the header transparently from the calling class.

@Component
public class CorrelatingRestClient implements RestClient {

    private RestTemplate restTemplate = new RestTemplate();

    @Override
    public String getForString(String uri) {
        String correlationId = RequestCorrelation.getId();
        HttpHeaders httpHeaders = new HttpHeaders();
        httpHeaders.set(RequestCorrelation.CORRELATION_ID, correlationId);

        LOGGER.info("start REST request to {} with correlationId {}", uri, correlationId);

        //TODO: error-handling and fault-tolerance in production
        ResponseEntity<String> response = restTemplate.exchange(uri, HttpMethod.GET,
                new HttpEntity<String>(httpHeaders), String.class);

        LOGGER.info("completed REST request to {} with correlationId {}", uri, correlationId);

        return response.getBody();
    }
}

//... calling class

public String exampleMethod() {
    RestClient restClient = new CorrelatingRestClient();
    return restClient.getForString(URI_LOCATION); //correlation id handling completely abstracted to RestClient impl
}

Making this work for asynchronous requests…

The code included above works fine when you are handling all of your requests synchronously, but it is often a good idea in a SOA/microservice platform to handle requests in a non-blocking asynchronous manner. In Spring this can be achieved by using the DeferredResult class in combination with the Servlet 3 asynchronous support. The problem with using ThreadLocal variables within the asynchronous approach is that the thread that initially handles the request (and creates the DeferredResult/Future) will not be the thread doing the actual processing. Accordingly, a bit of glue code is needed to ensure that the correlation id is propagated across the threads. This can be achieved by extending Callable with the required functionality (don’t worry if the example calling class code doesn’t look intuitive – this adaptation between DeferredResults and Futures is a necessary evil within Spring, and the full code including the boilerplate ListenableFutureAdapter is in my GitHub repo):

public class CorrelationCallable<V> implements Callable<V> {

    private String correlationId;
    private Callable<V> callable;

    public CorrelationCallable(Callable<V> targetCallable) {
        correlationId = RequestCorrelation.getId();
        callable = targetCallable;
    }

    @Override
    public V call() throws Exception {
        RequestCorrelation.setId(correlationId);
        return callable.call();
    }
}

//... calling class

@RequestMapping("externalNews")
public DeferredResult<String> externalNews() {
    return new ListenableFutureAdapter<>(service.submit(new CorrelationCallable<>(externalNewsService::getNews)));
}

And there we have it – the propagation of the correlation id regardless of the synchronous/asynchronous nature of processing! You can clone the GitHub repo containing my asynchronous example, and execute the application by running mvn spring-boot:run at the command line.
If you access http://localhost:8080/externalNews in your browser (or via curl) you will see something similar to the following in your Spring Boot console, which clearly demonstrates a correlation id being generated on the initial request and then being propagated through to a simulated external call (have a look at the ExternalNewsServiceRest class to see how this has been implemented):

[nio-8080-exec-1] u.c.t.e.c.w.f.CorrelationHeaderFilter : No correlationId found in Header. Generated : d205991b-c613-4acd-97b8-97112b2b2ad0
[pool-1-thread-1] u.c.t.e.c.w.c.CorrelatingRestClient : start REST request to http://localhost:8080/news with correlationId d205991b-c613-4acd-97b8-97112b2b2ad0
[nio-8080-exec-2] u.c.t.e.c.w.f.CorrelationHeaderFilter : Found correlationId in Header : d205991b-c613-4acd-97b8-97112b2b2ad0
[pool-1-thread-1] u.c.t.e.c.w.c.CorrelatingRestClient : completed REST request to http://localhost:8080/news with correlationId d205991b-c613-4acd-97b8-97112b2b2ad0

Conclusion

I’m quite happy with this simple prototype, and it does meet the two goals I listed above. Future work will include writing some tests for this code (shame on me for not TDDing!), and also extending this functionality to a more realistic example.

I would like to say a massive thanks to Sam, not only for sharing his knowledge at the great talks at Geecon, but also for taking time to respond to my emails. If you’re interested in microservices and related work I can highly recommend Sam’s Microservice book which is available in Early Access at O’Reilly. I’ve enjoyed reading the currently available chapters, and having implemented quite a few SOA projects recently I can relate to a lot of the good advice contained within. I’ll be following the development of this book with keen interest!

Resources

I used Tomasz Nurkiewicz’s excellent blog several times to learn how best to wire up all of the DeferredResult/Future code in Spring: http://www.nurkiewicz.com/2013/03/deferredresult-asynchronous-processing.html

Reference: Implementing correlation ids in Spring Boot (for distributed tracing in SOA/microservices) from our JCG partner Daniel Bryant at the The Tai-Dev Blog blog....
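To make the thread hand-off concrete, here is a tiny stand-alone sketch of my own (it is not in the article's repo) that wraps a task in the CorrelationCallable shown above and runs it on a separate thread via a plain ExecutorService; the id set on the submitting thread should be visible inside the task:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CorrelationPropagationDemo {

    public static void main(final String[] args) throws Exception {
        // Simulate the filter having assigned an id on the request-handling thread.
        RequestCorrelation.setId("demo-correlation-id");

        final ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            // CorrelationCallable captures the id now and re-sets it on the worker thread.
            final Future<String> result = executor.submit(
                    new CorrelationCallable<String>(() -> "worker thread sees: " + RequestCorrelation.getId()));

            System.out.println(result.get()); // expected: worker thread sees: demo-correlation-id
        } finally {
            executor.shutdown();
        }
    }
}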

Widgets and dashboard with Atlasboard, Node.js and d3.js

So it’s been a while since I added something to this blog. I’ve been very busy with work and at the same time finishing up my second book on Three.js. For our company we’re looking for a new and flexible way to create dashboards. We want to use these kinds of dashboards to monitor the various development teams, provide a complete realtime overview for the IT manager and even try to monitor our complete devops process. A colleague of mine mentioned Atlasboard from Atlassian: a really simple and straightforward dashboard with a very modular architecture. In this article I’ll give a quick overview of how easy it is to create your own widgets and jobs.

Installing Atlasboard

Atlasboard uses node.js as a container. So to install Atlasboard, first make sure you’ve got node.js and npm installed. Once installed you can use npm to install Atlasboard:

npm install -g atlasboard

Note that when you do this on Windows you might run into some issues starting Atlasboard. See the following thread for the solution: https://bitbucket.org/atlassian/atlasboard/issue/61/cant-create-dashboar…

And that’s it. Now you can create a dashboard that runs on Atlasboard. Go to the directory where you want to create your dashboard and do the following:

atlasboard new mydashboard

This will create a directory called mydashboard. Move to this directory and start the atlasboard server:

cd mydashboard
atlasboard start 3333

Now open your browser, point it to http://localhost:3333 and you’ll see your first dashboard.

Understanding Atlasboard

To understand Atlasboard you need to understand its main components. The wiki from Atlasboard has some basic information on this, and it’s very easy to understand. The following figure from the Atlasboard wiki explains pretty much everything. Basically, to work with Atlasboard you need to understand the following features:

• Jobs: A job is a simple JavaScript file that is executed by node.js at a scheduled interval. With this job you can pull in information from various sources which can then be displayed by a widget. This can be any type of information, for instance github statistics, jenkins statistics, build results, sonar results, twitter feeds etc. Atlasboard doesn’t use a database by default to keep track of the information retrieved by its jobs, but if you’d like you could easily add MongoDB or something else.
• Widgets: A widget consists of a JavaScript file, an HTML template and a CSS file. When a job fires, it sends the information it retrieved to a widget, which can then display it. Atlasboard comes with a couple of standard widgets you can use, but, as you’ll see in this article, creating custom widgets is very easy.
• Dashboards: A dashboard defines the size of the individual widgets and their position on screen. In this file you also define which widget and job are tied together and the interval for each of the jobs, and it provides a convenient way to configure widgets and jobs.

In the next section we’ll show you how to create your own custom widgets and jobs. The very simple goal is to create the following dashboard, which shows a gauge and a graph that both show the same information: the free memory on my system.

Setup the dashboard

First, let’s set up the dashboard file. In the directories created by Atlasboard, you’ll find a file called myfirst_dashboard.json. If you open this file you’ll see the configuration for all the example widgets and jobs for the demo dashboard.
Open this file, and change it to this:

{
  "layout": {
    "title": false,
    "customJS" : ["jquery.peity.js"],
    "widgets" : [
      {"row" : 1, "col" : 1, "width" : 4, "height" : 2, "widget" : "freemem", "job" : "freemem", "config": "freemem-config"},
      {"row" : 2, "col" : 1, "width" : 4, "height" : 2, "widget" : "gauge", "job" : "freemem2", "config": "gauge-config"}
    ]
  },

  "config" : {

    "gauge-config" : {
      "interval" : 1000
    },

    "freemem-config" : {
      "interval" : 1000
    }
  }
}

In the layout part of this file we define where the widgets are positioned on screen, which widget to use, which job to use and which config to use. When we look at the first line, you can see that we expect the freemem widget to use the freemem job and the freemem-config. This last part can be seen directly in the config part of this file (note there is config inheritance in Atlasboard, but I’ll skip that for now).

Create the jobs

So let’s create the appropriate jobs and widgets. Atlasboard provides a command line for this. Do the following in your dashboard directory:

atlasboard generate widget freemem
atlasboard generate widget gauge
atlasboard generate job freemem
atlasboard generate job freemem2

This will create the necessary files for the widgets and jobs in the packages/default directory. Let’s start by looking at the job (freemem and freemem2 are the same). The following listing provides the complete content of the freemem.js job:

var os = require('os');

/**
 * Job: freemem
 *
 * Expected configuration:
 *
 * { }
 */

var data = [];
var MAX_LENGTH = 100;

module.exports = function(config, dependencies, job_callback) {

  // add the correct information to the data array
  var length = data.push(os.freemem());
  if (length > MAX_LENGTH) {
    data.shift();
  }

  job_callback(null, { data: data, total: os.totalmem() });
};

This code uses the node.js os module to read the free and total memory of my system. By using the job_callback function we send two data elements to the widget at the front. The freemem2.js file is exactly the same. It seems that Atlasboard can’t use the same job for the same widgets twice. Since I wanted to share the information I just created two jobs that look the same. Not the best solution, but the best I found so far!

Create the widgets

Now all that is left to do is create the widgets. Let’s first look at the graph. For the graph I used Rickshaw, which is included in Atlasboard. Atlasboard also provides an easier to use interface to create graphs (https://bitbucket.org/atlassian/atlasboard-charts-package), but I like the direct Rickshaw approach better.
The code for a widget is very simple.

freemem.css:

.content {
  font-size: 35px;
  color: #454545;
  font-weight: bold;
  text-align: center;
}

freemem.html:

<h2>freemem</h2>
<div class="content">
  <div id="graph"></div>
  <div id="text"></div>
</div>

freemem.js:

widget = {
  //runs when we receive data from the job
  onData: function (el, data) {

    //The parameters our job passed through are in the data object
    //el is our widget element, so our actions should all be relative to that
    if (data.title) {
      $('h2', el).text(data.title);
    }

    var graphElement = document.querySelector("#graph");
    var textElement = document.querySelector("#text");

    while (graphElement.firstChild) {
      graphElement.removeChild(graphElement.firstChild);
    }

    var dataArray = [];
    var count = 0;
    data.data.forEach(function(e){
      dataArray.push({x: count++, y:e});
    });

    var graph = new Rickshaw.Graph({
      element: document.querySelector("#graph"),
      height: 350,
      renderer: 'area',
      stroke: true,
      series: [
        {
          data: dataArray,
          color: 'rgba(192,132,255,0.3)',
          stroke: 'rgba(0,0,0,0.15)'
        }
      ]
    });

    graph.renderer.unstack = true;
    graph.render();

    $(textElement).html("" + new Date());
  }
};

If you look through the code of freemem.js you can see that we don’t really do anything complex. We just parse the data we receive and use Rickshaw to draw the graph. Easy right? If you look at the source code for the gauge it isn’t that much more complex. I’ve used the d3.js based gauge from here: http://tomerdoron.blogspot.co.il/2011/12/google-style-gauges-using-d3js… and changed the code so it reacts to the updates from the job:

widget = {

  gauges: [],

  //runs when we receive data from the job
  onData: function(el, data) {

    //The parameters our job passed through are in the data object
    //el is our widget element, so our actions should all be relative to that
    if (data.title) {
      $('h2', el).text(data.title);
    }

    var gaugeContainer = document.querySelector("#memoryGaugeContainer");

    // if no gauge is there yet, create one;
    if (gaugeContainer.childNodes.length != 1) {
      this.gauges['memory'] = createGauge("memory", "Memory");
    }

    var freePercentage = 100-(data.data.pop()/(data.total/100));

    var gauge = this.gauges['memory'];
    gauge.redraw(freePercentage);
  }
};

function createGauge(name, label, min, max) {
  ...
}

function Gauge(placeholderName, configuration) {
  ...
}

The createGauge and Gauge functions were taken from the previous link; I only implemented the widget code. Easy right?

Conclusions

That’s it for this first article on Atlasboard. We’re seriously considering implementing this at work, so I’ll try to give an update in a couple of weeks. Overall I really like the approach Atlasboard takes and how easy it is to create new widgets and jobs.

Reference: Widgets and dashboard with Atlasboard, Node.js and d3.js from our JCG partner Jos Dirksen at the Smart Java blog....

Java File I/O Basics

Java 7 introduced the java.nio.file package to provide comprehensive support for file I/O. Besides a lot of other functionality this package includes the Files class (if you already use this class you can stop reading here). Files contains a lot of static methods that can be used to accomplish common tasks when working with files. Unfortunately it looks to me that still a lot of newer (Java 7+) code is written using old (pre Java 7) ways of working with files. This does not have to be bad, but it can make things more complex than needed. A possible reason for this might be that a lot of articles and high rated Stackoverflow answers were written before the release of Java 7.

In the rest of this post I will provide some code samples that show how you can accomplish common file related tasks with Java 7 or newer.

Working with files

// Create directories
// This will create the "bar" directory in "/foo"
// If "/foo" does not exist, it will be created first
Files.createDirectories(Paths.get("/foo/bar"));

// Copy a file
// This copies the file "/foo/bar.txt" to "/foo/baz.txt"
Files.copy(Paths.get("/foo/bar.txt"), Paths.get("/foo/baz.txt"));

// Move a file
// This moves the file "/foo/bar.txt" to "/foo/baz.txt"
Files.move(Paths.get("/foo/bar.txt"), Paths.get("/foo/baz.txt"));

// Delete a file
Files.delete(Paths.get("/foo/bar.txt"));

// Delete a file but do not fail if the file does not exist
Files.deleteIfExists(Paths.get("/foo/bar.txt"));

// Check if a file exists
boolean exists = Files.exists(Paths.get("/foo/bar.txt"));

Most methods of Files take one or more arguments of type Path. Path instances represent a path to a file or directory and can be obtained using Paths.get(). Note that most methods shown here also have an additional varargs parameter that can be used to pass additional options. For example:

Files.copy(Paths.get("/foo.txt"), Paths.get("/bar.txt"), StandardCopyOption.REPLACE_EXISTING);

Iterating through all files within a directory

Files.walkFileTree(Paths.get("/foo"), new SimpleFileVisitor<Path>() {
  @Override
  public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
    System.out.println("file: " + file);
    return FileVisitResult.CONTINUE;
  }
});

Here the visitFile() method will be called for every file within the /foo directory. You can override additional methods of SimpleFileVisitor if you want to track directories too.

Writing and reading files

// Write lines to file
List<String> lines = Arrays.asList("first", "second", "third");
Files.write(Paths.get("/foo/bar.txt"), lines, Charset.forName("UTF-8"));

// Read lines
List<String> lines = Files.readAllLines(Paths.get("/foo/bar.txt"), Charset.forName("UTF-8"));

The shown methods work with characters. Similar methods are available if you need to work with bytes.

Conclusion

If you didn’t know about java.nio.file.Files you should definitely have a look at the Javadoc method summary. There is a lot of useful stuff inside.

Reference: Java File I/O Basics from our JCG partner Michael Scharhag at the mscharhag, Programming and Stuff blog....
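As a small addition in the same snippet style (the file path is made up for illustration), here are the byte-oriented counterparts of the write/read methods mentioned above:

// Write raw bytes to a file (creates the file, or truncates it if it already exists)
byte[] payload = {0x48, 0x65, 0x6C, 0x6C, 0x6F};
Files.write(Paths.get("/foo/data.bin"), payload);

// Read the whole file back into a byte array
byte[] read = Files.readAllBytes(Paths.get("/foo/data.bin"));
System.out.println(read.length + " bytes read");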

JPA 2.1 Entity Graph – Part 2: Define lazy/eager loading at runtime

This is my second post on JPA 2.1 entity graphs. The first post described the usage of named entity graphs. These can be used to define a graph of entities and/or attributes at compile time that shall be fetched with a find or query method. Dynamic entity graphs do the same, but in a dynamic way. This means you can use the EntityGraph API to define your entity graph at runtime. If you have missed the first post and want to read how to define a named entity graph or how lazy loading issues were solved with JPA 2.0, check this post: JPA 2.1 Entity Graph – Part 1: Named entity graphs.

The example entities

We will use the same example as in the previous post, so you can skip this paragraph if you have read the other one. We will use 3 entities: Order, OrderItem and Product. An Order might include multiple OrderItems and each OrderItem belongs to one Product. The FetchType of all these relations is FetchType.LAZY, so the entity manager will not fetch them from the database by default and will initialize them with a proxy instead.

The Order entity:

@Entity
@Table(name = "purchaseOrder")
@NamedEntityGraph(name = "graph.Order.items",
        attributeNodes = @NamedAttributeNode(value = "items", subgraph = "items"),
        subgraphs = @NamedSubgraph(name = "items", attributeNodes = @NamedAttributeNode("product")))
public class Order implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "id", updatable = false, nullable = false)
    private Long id = null;

    @Version
    @Column(name = "version")
    private int version = 0;

    @Column
    private String orderNumber;

    @OneToMany(mappedBy = "order", fetch = FetchType.LAZY)
    private Set<OrderItem> items = new HashSet<OrderItem>();

    ...

The OrderItem entity:

@Entity
public class OrderItem implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "id", updatable = false, nullable = false)
    private Long id = null;

    @Version
    @Column(name = "version")
    private int version = 0;

    @Column
    private int quantity;

    @ManyToOne
    private Order order;

    @ManyToOne(fetch = FetchType.LAZY)
    private Product product;

The Product entity:

@Entity
public class Product implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "id", updatable = false, nullable = false)
    private Long id = null;

    @Version
    @Column(name = "version")
    private int version = 0;

    @Column
    private String name;

Dynamic entity graph

So let’s define a dynamic entity graph. We will do the same as in the first post and define a simple entity graph that tells the entity manager to fetch an Order with all associated OrderItems. Therefore we use the createEntityGraph(Class<T> rootType) method of the entity manager to create an entity graph for the Order entity. In the next step, we create a list of all attributes of the Order entity that shall be fetched with this entity graph. We only need to add the attribute items, because we will use this entity graph as a loadgraph and all other attributes are eager by default. If we used this entity graph as a fetchgraph, we would need to add all attributes to the list that should be fetched from the database.

EntityGraph<Order> graph = this.em.createEntityGraph(Order.class);
graph.addAttributeNodes("items");

Map<String, Object> hints = new HashMap<String, Object>();
hints.put("javax.persistence.loadgraph", graph);

this.em.find(Order.class, orderId, hints);

OK, dynamically defining which attributes of an entity shall be fetched from the database is nice. But what if we need a graph of entities?
Like fetching an Order with all its OrderItems and their Product? This can be done with a sub graph. A sub graph is basically an entity graph that is embedded into another entity graph or entity sub graph. The definition of a sub graph is similar to the definition of an entity graph. To create and embed the sub graph into an entity graph, we need to call the addSubgraph(String attributeName) method on an EntityGraph object. This will create a sub graph for the attribute with the given name. In the next step, we need to define the list of attributes that shall be fetched with this sub graph.

The following snippet shows the definition of an entity graph with an entity sub graph which tells the entity manager to fetch an Order with its OrderItems and their Product.

EntityGraph<Order> graph = this.em.createEntityGraph(Order.class);
Subgraph<OrderItem> itemGraph = graph.addSubgraph("items");
itemGraph.addAttributeNodes("product");

Map<String, Object> hints = new HashMap<String, Object>();
hints.put("javax.persistence.loadgraph", graph);

return this.em.find(Order.class, orderId, hints);

What’s happening inside?

As in the previous post, we want to have a look at the hibernate log and find out what hibernate is doing. As we can see, the result of a dynamic entity graph is the same as that of a named entity graph. It creates a load plan and one select statement with all 3 entities.

2014-04-07 20:08:15,260 DEBUG [org.hibernate.loader.plan.build.spi.LoadPlanTreePrinter] (default task-2) LoadPlan(entity=blog.thoughts.on.java.jpa21.entity.graph.model.Order)
- Returns
- EntityReturnImpl(entity=blog.thoughts.on.java.jpa21.entity.graph.model.Order, querySpaceUid=, path=blog.thoughts.on.java.jpa21.entity.graph.model.Order)
- CollectionAttributeFetchImpl(collection=blog.thoughts.on.java.jpa21.entity.graph.model.Order.items, querySpaceUid=, path=blog.thoughts.on.java.jpa21.entity.graph.model.Order.items)
- (collection element) CollectionFetchableElementEntityGraph(entity=blog.thoughts.on.java.jpa21.entity.graph.model.OrderItem, querySpaceUid=, path=blog.thoughts.on.java.jpa21.entity.graph.model.Order.items.)
- QuerySpaces
- EntityQuerySpaceImpl(uid=, entity=blog.thoughts.on.java.jpa21.entity.graph.model.Order)
- SQL table alias mapping - order0_
- alias suffix - 0_
- suffixed key columns - {id1_2_0_}
- JOIN (JoinDefinedByMetadata(items)) : ->
- CollectionQuerySpaceImpl(uid=, collection=blog.thoughts.on.java.jpa21.entity.graph.model.Order.items)
- SQL table alias mapping - items1_
- alias suffix - 1_
- suffixed key columns - {order_id4_2_1_}
- entity-element alias suffix - 2_
- 2_entity-element suffixed key columns - id1_0_2_
- JOIN (JoinDefinedByMetadata(elements)) : ->
- EntityQuerySpaceImpl(uid=, entity=blog.thoughts.on.java.jpa21.entity.graph.model.OrderItem)
- SQL table alias mapping - items1_
- alias suffix - 2_
- suffixed key columns - {id1_0_2_}

2014-04-07 20:08:15,260 DEBUG [org.hibernate.loader.entity.plan.EntityLoader] (default task-2) Static select for entity blog.thoughts.on.java.jpa21.entity.graph.model.Order [NONE:-1]:
select order0_.id as id1_2_0_, order0_.orderNumber as orderNum2_2_0_, order0_.version as version3_2_0_,
items1_.order_id as order_id4_2_1_, items1_.id as id1_0_1_, items1_.id as id1_0_2_,
items1_.order_id as order_id4_0_2_, items1_.product_id as product_5_0_2_,
items1_.quantity as quantity2_0_2_, items1_.version as version3_0_2_
from purchaseOrder order0_
left outer join OrderItem items1_ on order0_.id=items1_.order_id
where order0_.id=?
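The same hint is not limited to find(). As a quick sketch of my own (the orderNumber parameter is just for illustration), the dynamic graph can also be attached to a JPQL query via setHint:

EntityGraph<Order> graph = this.em.createEntityGraph(Order.class);
graph.addSubgraph("items").addAttributeNodes("product");

List<Order> orders = this.em
        .createQuery("SELECT o FROM Order o WHERE o.orderNumber = :orderNumber", Order.class)
        .setParameter("orderNumber", orderNumber)
        .setHint("javax.persistence.loadgraph", graph) // same hint name as used with find()
        .getResultList();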
Conclusion

After defining a named entity graph in the first post, we have now used the EntityGraph API to define a dynamic entity graph. Using this entity graph, we can fetch a graph of multiple entities with only one query from the database. This can be used to solve LazyInitializationExceptions and to improve the performance of applications.

What do you think about (dynamic) entity graphs? From my point of view this is a very useful extension compared to JPA 2.0. Especially the dynamic entity graphs are useful for defining your fetch strategy based on runtime information like method parameters.

Reference: JPA 2.1 Entity Graph – Part 2: Define lazy/eager loading at runtime from our JCG partner Thorben Janssen at the Some thoughts on Java (EE) blog....

Double Checked Locking on Singleton Class in Java

The Singleton class is quite common among Java developers, but it poses many challenges to junior developers. One of the key challenges they face is how to keep a Singleton class a Singleton, i.e. how to prevent multiple instances of a Singleton for whatever reason. Double checked locking of a Singleton is a way to ensure only one instance of the Singleton class is created through the application life cycle. As the name suggests, in double checked locking, code checks for an existing instance of the Singleton class twice, with and without locking, to doubly ensure that no more than one instance of the Singleton gets created. By the way, it was broken before Java fixed its memory model issues in JDK 1.5. In this article, we will see how to write code for double checked locking of a Singleton in Java, why double checked locking was broken before Java 5, and how that was fixed. By the way, this is also important from an interview point of view; I have heard it being asked to code double checked locking of a Singleton by hand at companies in both the financial and service sectors, and believe me, it's tricky until you have a clear understanding of what you are doing. You can also see my full list of Singleton design pattern questions to prepare well.

Why do you need double checked locking of a Singleton class?

One of the common scenarios where a Singleton class breaks its contract is multi-threading. If you ask a beginner to write code for the Singleton design pattern, there is a good chance that he will come up with something like below:

private static Singleton _instance;

public static Singleton getInstance() {
    if (_instance == null) {
        _instance = new Singleton();
    }
    return _instance;
}

and when you point out that this code will create multiple instances of the Singleton class if called by more than one thread in parallel, he would probably make this whole getInstance() method synchronized, as shown in our 2nd code example, the getInstanceTS() method. Though it's thread-safe and solves the issue of multiple instances, it's not very efficient. You need to bear the cost of synchronization every time you call this method, while synchronization is only needed on the first call, when the Singleton instance is created. This brings us to the double checked locking pattern, where only the critical section of code is locked. Programmers call it double checked locking because there are two checks for _instance == null, one without locking and the other with locking (inside the synchronized block). Here is how double checked locking looks in Java:

public static Singleton getInstanceDC() {
    if (_instance == null) {                // Single Checked
        synchronized (Singleton.class) {
            if (_instance == null) {        // Double checked
                _instance = new Singleton();
            }
        }
    }
    return _instance;
}

On the surface this method looks perfect, as you only need to pay the price of the synchronized block once, but it is still broken unless you make the _instance variable volatile. Without the volatile modifier it's possible for another thread in Java to see a half-initialized state of the _instance variable, but with the volatile variable guaranteeing a happens-before relationship, all writes to the volatile _instance happen before any read of the _instance variable. This was not the case prior to Java 5, and that's why double checked locking was broken before. Now, with the happens-before guarantee, you can safely assume that this will work. By the way, this is not the best way to create a thread-safe Singleton; you can use an Enum as a Singleton, which provides inbuilt thread-safety during instance creation. Another way is to use the static holder pattern.
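For reference, here is a minimal sketch of that static holder (initialization-on-demand) idiom, written purely for illustration and not taken from the article; the JVM guarantees that the nested holder class is initialized lazily and in a thread-safe way on the first call. The complete double checked locking journey from the article then follows below.

class HolderSingleton {

    private HolderSingleton() {
        // preventing instantiation from outside
    }

    // Loaded (and therefore initialized) only when getInstance() is first called;
    // class initialization is guaranteed to be thread-safe by the JVM.
    private static class Holder {
        private static final HolderSingleton INSTANCE = new HolderSingleton();
    }

    public static HolderSingleton getInstance() {
        return Holder.INSTANCE;
    }
}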
/*
 * A journey to write double checked locking of Singleton class in Java.
 */
class Singleton {

    private volatile static Singleton _instance;

    private Singleton() {
        // preventing Singleton object instantiation from outside
    }

    /*
     * 1st version: creates multiple instances if two threads access
     * this method simultaneously
     */
    public static Singleton getInstance() {
        if (_instance == null) {
            _instance = new Singleton();
        }
        return _instance;
    }

    /*
     * 2nd version : this is definitely thread-safe and only
     * creates one instance of Singleton in a concurrent environment,
     * but unnecessarily expensive due to the cost of synchronization
     * at every call.
     */
    public static synchronized Singleton getInstanceTS() {
        if (_instance == null) {
            _instance = new Singleton();
        }
        return _instance;
    }

    /*
     * 3rd version : An implementation of double checked locking of Singleton.
     * Intention is to minimize cost of synchronization and improve performance,
     * by only locking the critical section of code, the code which creates the instance of the Singleton class.
     * By the way this is still broken if we don't make _instance volatile, as another thread can
     * see a half initialized instance of Singleton.
     */
    public static Singleton getInstanceDC() {
        if (_instance == null) {
            synchronized (Singleton.class) {
                if (_instance == null) {
                    _instance = new Singleton();
                }
            }
        }
        return _instance;
    }
}

That's all about double checked locking of a Singleton class in Java. This is one of the more controversial ways to create a thread-safe Singleton in Java, with simpler alternatives available, such as using an Enum as the Singleton class. I don't suggest you implement your Singleton like that, as there are many better ways to implement the Singleton pattern in Java. Though this question has historical significance, it also teaches how concurrency can introduce subtle bugs. As I said before, this is very important from an interview point of view. Practice writing double checked locking of a Singleton class by hand before going for any Java interview; this will develop your insight into coding mistakes made by Java programmers. On a related note, in the modern day of test driven development, the Singleton is regarded as an anti-pattern because of the difficulty it presents in mocking its behaviour, so if you are a TDD practitioner it is better to avoid the Singleton pattern.

Reference: Double Checked Locking on Singleton Class in Java from our JCG partner Javin Paul at the Javarevisited blog....

Parsing a file with Stream API in Java 8

Streams are everywhere in Java 8. Just look around and for sure you will find them. It also applies to java.io.BufferedReader. Parsing a file in Java 8 with the Stream API is extremely easy.

I have a CSV file that I want to read. An example below:

username;visited
jdoe;10
kolorobot;4

The contract for my reader is to provide the header as a list of strings and all records as a list of lists of strings. My reader accepts a java.io.Reader as a source to read from.

I will start with reading the header. The algorithm for reading the header is as follows:

• Open a source for reading,
• Get the first line and parse it,
• Split the line by a separator,
• Convert the line to a list of strings and return it.

And the implementation:

class CsvReader {

    private static final String SEPARATOR = ";";

    private final Reader source;

    CsvReader(Reader source) {
        this.source = source;
    }

    List<String> readHeader() {
        try (BufferedReader reader = new BufferedReader(source)) {
            return reader.lines()
                    .findFirst()
                    .map(line -> Arrays.asList(line.split(SEPARATOR)))
                    .get();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}

Fairly simple. Self-explanatory. Similarly, I created a method to read all records. The algorithm for reading the records is as follows:

• Open a source for reading,
• Skip the first line,
• Split each line by a separator,
• Apply a mapper on each line that maps a line to a list of strings.

And the implementation:

class CsvReader {

    List<List<String>> readRecords() {
        try (BufferedReader reader = new BufferedReader(source)) {
            return reader.lines()
                    .skip(1)
                    .map(line -> Arrays.asList(line.split(SEPARATOR)))
                    .collect(Collectors.toList());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}

Nothing fancy here. What you may notice is that the mapper in both methods is exactly the same. In fact, it can easily be extracted to a variable:

Function<String, List<String>> mapper = line -> Arrays.asList(line.split(SEPARATOR));

To finish up, I created a simple test.

public class CsvReaderTest {

    @Test
    public void readsHeader() {
        CsvReader csvReader = createCsvReader();
        List<String> header = csvReader.readHeader();
        assertThat(header)
                .contains("username")
                .contains("visited")
                .hasSize(2);
    }

    @Test
    public void readsRecords() {
        CsvReader csvReader = createCsvReader();
        List<List<String>> records = csvReader.readRecords();
        assertThat(records)
                .contains(Arrays.asList("jdoe", "10"))
                .contains(Arrays.asList("kolorobot", "4"))
                .hasSize(2);
    }

    private CsvReader createCsvReader() {
        try {
            Path path = Paths.get("src/test/resources", "sample.csv");
            Reader reader = Files.newBufferedReader(path, Charset.forName("UTF-8"));
            return new CsvReader(reader);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}

Reference: Parsing a file with Stream API in Java 8 from our JCG partner Rafal Borowiec at the Codeleak.pl blog....
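As a quick follow-up sketch of my own (not part of the original post), the reader can also be fed from an in-memory string instead of a file; note that a fresh Reader is created per call, because each read method consumes and closes the underlying source:

import java.io.StringReader;
import java.util.List;

// Assumes this class lives in the same package as CsvReader.
public class CsvReaderDemo {

    public static void main(final String[] args) {
        // In-memory CSV with the same layout as the sample file
        final String csv = "username;visited\njdoe;10\nkolorobot;4";

        final CsvReader headerReader = new CsvReader(new StringReader(csv));
        System.out.println("Header: " + headerReader.readHeader());

        final CsvReader recordsReader = new CsvReader(new StringReader(csv));
        final List<List<String>> records = recordsReader.readRecords();
        System.out.println("Records: " + records);
    }
}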