
What's New Here?


SpringBoot: Introducing SpringBoot

SpringBoot… there is a lot of buzz about SpringBoot nowadays. So what is SpringBoot? SpringBoot is a new Spring portfolio project which takes an opinionated view of building production-ready Spring applications by drastically reducing the amount of configuration required. SpringBoot takes the convention-over-configuration style to the next level by registering default configurations automatically, based on the libraries available on the runtime classpath.

Well, you might have already read this kind of introduction to SpringBoot on many blogs. So let me elaborate on what SpringBoot is and how it helps develop Spring applications more quickly.

The Spring framework was created by Rod Johnson when many Java developers were struggling with EJB 1.x/2.x for building enterprise applications. The Spring framework made developing business components easy by using Dependency Injection and Aspect Oriented Programming concepts. Spring became very popular, and many more Spring modules like Spring Security, Spring Batch, Spring Data etc. became part of the Spring portfolio. As more and more features were added to Spring, configuring all the Spring modules and their dependencies became a tedious task. Adding to that, Spring provides at least three ways of doing anything! Some people see that as flexibility, and some others see it as confusing. Slowly, configuring all the Spring modules to work together became a big challenge. The Spring team came up with many approaches to reduce the amount of configuration needed by introducing Spring XML DSLs, annotations and JavaConfig.

In the very beginning, I remember configuring a big pile of jar version declarations in the <dependencies> section and a lot of <bean> declarations. Then I learned to create Maven archetypes with a basic structure and the minimum required configuration. This reduced a lot of the repetitive work, but didn't eliminate it completely. Whether you write the configuration by hand or generate it by some automated means, if there is code that you can see, then you have to maintain it. So whether you use XML, annotations or JavaConfig, you still need to configure (copy-paste) the same infrastructure setup one more time.

On the other hand, J2EE (which died a long time ago) emerged as JavaEE, and since JavaEE 6 it became easy (compared to J2EE and JavaEE 5) to develop enterprise applications on the JavaEE platform. Then JavaEE 7 was released with all the cool CDI, WebSockets, Batch and JSON support, and things became even simpler and more powerful as well. With JavaEE you don't need so much XML configuration, and your war file size will be in KBs (really??? for non-helloworld/non-stageshow apps as well!). Naturally, this "convention over configuration" and "you don't need to glue APIs together, the app server already did it" argument became the main selling point for JavaEE over Spring. Then the Spring team addressed this problem with SpringBoot! Now it's time for JavaEE to show what SpringBoot's counterpart in JavaEE land is – JBoss Forge?? I love this Spring vs JavaEE thing, which leads to the birth of powerful tools that ultimately simplify developers' lives!

Many times we need a similar kind of infrastructure setup using the same libraries. For example, take a web application where you map the DispatcherServlet url-pattern to "/" and implement RESTful web services using the Jackson JSON library with a Spring Data JPA backend. Similarly, there could be batch or Spring Integration applications which need similar infrastructure configuration. SpringBoot to the rescue.
SpringBoot looks at the jar files available on the runtime classpath and registers beans for you with sensible defaults, which can be overridden with explicit settings. Also, SpringBoot configures those beans only when the jar files are available and you haven't defined any such bean yourself. Altogether, SpringBoot provides common infrastructure without requiring any explicit configuration, but lets the developer override it if needed.

To make things even simpler, the SpringBoot team provides many starter projects which are pre-configured with commonly used dependencies. For example, the Spring Data JPA starter project comes with JPA 2.x with the Hibernate implementation, along with the Spring Data JPA infrastructure setup. The Spring Web starter comes with Spring WebMVC, embedded Tomcat, Jackson JSON and Logback set up.

Aaah… enough theory, let's jump into coding. I am using the latest STS-3.5.1 IDE, which provides many more starter project options, like Facebook, Twitter, Solr etc., than its earlier version. Create a SpringBoot starter project by going to File -> New -> Spring Starter Project -> select Web and Actuator, provide the other required details and click Finish. This will create a Spring Starter Web project with the following pom.xml and Application.java:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.sivalabs</groupId>
    <artifactId>hello-springboot</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>jar</packaging>

    <name>hello-springboot</name>
    <description>Spring Boot Hello World</description>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.1.3.RELEASE</version>
        <relativePath/>
    </parent>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <start-class>com.sivalabs.springboot.Application</start-class>
        <java.version>1.7</java.version>
    </properties>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>

package com.sivalabs.springboot;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;

@Configuration
@ComponentScan
@EnableAutoConfiguration
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

Go ahead and run this class as a standalone Java class. It will start the embedded Tomcat server on port 8080.
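Those registrations really are just defaults. For example, the embedded server port can be changed without touching any code, by adding a src/main/resources/application.properties file to the project (a minimal sketch; server.port is a standard Spring Boot property key, but this file is not generated for you):

server.port=9090

Restart the application and the embedded Tomcat will listen on port 9090 instead of 8080.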
But we haven't added any endpoints to access yet, so let's go ahead and add a simple REST endpoint:

@Configuration
@ComponentScan
@EnableAutoConfiguration
@Controller
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    @RequestMapping(value="/")
    @ResponseBody
    public String bootup() {
        return "SpringBoot is up and running";
    }
}

Now point your browser to http://localhost:8080/ and you should see the response "SpringBoot is up and running".

Remember that while creating the project we also added the Actuator starter module. With Actuator you can obtain many interesting facts about your application. Try accessing the following URLs and you can see a lot of the runtime environment configuration that is provided by SpringBoot:

http://localhost:8080/beans
http://localhost:8080/metrics
http://localhost:8080/trace
http://localhost:8080/env
http://localhost:8080/mappings
http://localhost:8080/autoconfig
http://localhost:8080/dump

The SpringBoot Actuator deserves a dedicated blog post to cover its vast number of features; I will cover it in my upcoming posts. I hope this article provides a basic introduction to SpringBoot and how it simplifies Spring application development. More on SpringBoot in upcoming articles.

Reference: SpringBoot: Introducing SpringBoot from our JCG partner Siva Reddy at the My Experiments on Technology blog.

Evaluating persistent, replicated message queues

Message queues are useful in a number of situations; for example, when we want to execute a task asynchronously, we enqueue it and some executor eventually completes it. Depending on the use case, the queues can give various guarantees of message persistence and delivery. For some use-cases, it is enough to have an in-memory message queue. For others, we want to be sure that once the message send completes, it is persistently enqueued and will be eventually delivered, despite node or system crashes. To be really sure that messages are not lost, we will be looking at queues which:

- persist messages to disk
- replicate messages across the network

Ideally, we want to have 3 identical, replicated nodes containing the message queue. There is a number of open-source messaging projects available, but only a handful supports both persistence and replication. We'll evaluate the performance and characteristics of 4 message queues:

- Amazon SQS
- MongoDB
- RabbitMQ
- HornetQ

While SQS isn't an open-source messaging system, it matches the requirements and I've recently benchmarked it, so it will be interesting to compare self-hosted solutions with an as-a-service one. MongoDB isn't a queue of course, but a document-based NoSQL database; however, using some of its mechanisms it is very easy to implement a message queue on top of it. By no means does this aim to be a comprehensive overview, just an evaluation of some of the projects. If you know of any other messaging systems which provide durable, replicated queues, let me know!

Update: As many readers pointed out (thx @tlockney, @Evanfchan, @conikeec and others), Kafka is missing! While it works a bit differently (consumers are stateful – clustered – they keep their offset), Kafka supports point-to-point messaging where each message is consumed by one consumer, so stay tuned for an updated version of the blog, this time with Kafka!

Testing methodology

All sources for the tests are available on GitHub. The tests run on a variable number of nodes (1-8); each node either sends or receives messages, using a variable number of threads (1-25), depending on the concrete test setup. Each sender thread tries to send the given number of messages as fast as possible, in batches of random size between 1 and 10 messages. For some queues, we'll also evaluate larger batches, up to 100 or 1000 messages. After sending all messages, the sender reports the number of messages sent per second. The receiver tries to receive messages (also in batches), and after receiving them, acknowledges their delivery (which should cause the message to be removed from the queue). When no messages are received for a minute, the receiver thread reports the number of messages received per second.

The queues have to implement the Mq interface. The methods should have the following characteristics:

- send should be synchronous, that is, when it completes, we want to be sure (what "sure" means exactly may vary) that the messages are sent
- receive should receive messages from the queue and block them; if the node crashes, the messages should be returned to the queue and re-delivered
- ack should acknowledge delivery and processing of the messages. Acknowledgments can be asynchronous, that is, we don't have to be sure that the messages really got deleted.

The model above describes an at-least-once message delivery model. Some queues offer other delivery models as well, but we'll focus on this one to compare possibly similar things.
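For orientation, here is a minimal Java sketch of what such an Mq contract might look like. The method names follow the description above, but the shape is my assumption – the actual interface in the benchmark sources may differ in signatures and types:

import java.util.List;

// A hypothetical sketch of the queue contract described above.
public interface Mq {

    // Synchronous: when this returns, the messages are "sent"
    // (how strong that guarantee is depends on the implementation).
    void send(List<String> messages);

    // Receives a batch of messages and blocks them; un-acked
    // messages are re-delivered if the receiving node crashes.
    List<ReceivedMessage> receive(int maxMessages);

    // May be asynchronous: acknowledges delivery and processing.
    void ack(List<ReceivedMessage> messages);

    // A received message carries an id used for acknowledgment.
    class ReceivedMessage {
        public final String id;
        public final String content;

        public ReceivedMessage(String id, String content) {
            this.id = id;
            this.content = content;
        }
    }
}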
We'll be looking at how fast (in terms of throughput) we can send and receive messages using a single 2- or 3-node message queue cluster.

Mongo

Mongo has two main features which make it possible to easily implement a durable, replicated message queue on top of it: a very simple replication setup (we'll be using a 3-node replica set), and various document-level atomic operations, like find-and-modify. The implementation is just a handful of lines of code; take a look at MongoMq. We are also able to control the guarantees which send gives us by using an appropriate write concern when writing new messages:

- WriteConcern.ACKNOWLEDGED (previously SAFE) ensures that once a send completes, the messages have been written to disk (though it's not a 100% durability guarantee, as the disk may have its own write caches)
- WriteConcern.REPLICA_ACKNOWLEDGED ensures that a message is written to the majority of the nodes in the cluster

The main downsides of the Mongo-based queue are that:

- messages can't be received in bulk – the find-and-modify operation only works on a single document at a time
- when there are a lot of connections trying to receive messages, the collection will encounter a lot of contention, and all operations are serialised.

And this shows in the results: sends are faster than receives. But overall the performance is quite good! A single-thread, single-node setup achieves 7 900 msgs/s sent and 1 900 msgs/s received. The maximum send throughput with multiple threads/nodes that I was able to achieve is about 10 500 msgs/s, while the maximum receive rate is 3 200 msgs/s, when using the "safe" write concern.

Threads | Nodes | Send msgs/s | Receive msgs/s
1  | 1 |  7 968,60 | 1 914,05
5  | 1 |  9 903,47 | 3 149,00
25 | 1 | 10 903,00 | 3 266,83
1  | 2 |  9 569,99 | 2 779,87
5  | 2 | 10 078,65 | 3 112,55
25 | 2 |  7 930,50 | 3 014,00

If we wait for the replica to acknowledge the writes (instead of just one node), the send throughput falls to 6 500 msgs/s, and the receive rate to about 2 900 msgs/s.

Threads | Nodes | Send msgs/s | Receive msgs/s
1  | 1 | 1 489,21 | 1 483,69
1  | 2 | 2 431,27 | 2 421,01
5  | 2 | 6 333,10 | 2 913,90
25 | 2 | 6 550,00 | 2 841,00

In my opinion, not bad for a very straightforward queue implementation on top of Mongo.

SQS

SQS is pretty well covered in my previous blog, so here's just a short recap. SQS guarantees that if a send completes, the message is replicated to multiple nodes. It also provides at-least-once delivery guarantees. We don't really know how SQS is implemented, but it most probably spreads the load across many servers, so including it here is a bit of an unfair competition: the other systems use a single replicated cluster, while SQS can employ multiple replicated clusters and route/balance the messages between them. But since we have the results, let's see how it compares. A single thread on a single node achieves 430 msgs/s sent and the same number of msgs received. These results are not impressive, but SQS scales nicely both when increasing the number of threads and the number of nodes. On a single node, with 50 threads, we can send up to 14 500 msgs/s, and receive up to 4 200 msgs/s. On an 8-node cluster, these numbers go up to 63 500 msgs/s sent, and 34 800 msgs/s received.

RabbitMQ

RabbitMQ is one of the leading open-source messaging systems. It is written in Erlang, implements AMQP and is a very popular choice when messaging is involved. It supports both message persistence and replication, with well documented behaviour in case of e.g. partitions. We'll be testing a 3-node Rabbit cluster.
To be sure that sends complete successfully, we'll be using publisher confirms, a Rabbit extension to AMQP. The confirmations are cluster-wide, so this gives us pretty strong guarantees: that messages will be both written to disk and replicated to the cluster (see the docs). Such strong guarantees probably explain the poor performance. A single-thread, single-node setup gives us 310 msgs/s sent&received. This scales nicely as we add nodes, up to 1 600 msgs/s.

The RabbitMq implementation of the Mq interface is again pretty straightforward. We are using the mentioned publisher confirms, and setting the quality-of-service when receiving so that at most 10 messages are delivered unconfirmed. Interestingly, increasing the number of threads on a node doesn't impact the results. It may be because I'm incorrectly using the Rabbit API, or maybe it's just the way Rabbit works. With 5 sending threads on a single node, the throughput increases to just 410 msgs/s. Things improve if we send messages in batches of up to 100 or 1000, instead of 10. In both cases, we can get to 3 300 msgs/s sent&received, which seems to be the maximum that Rabbit can achieve. Results for batches of up to 100:

Threads | Nodes | Send msgs/s | Receive msgs/s
1 | 1 | 1 829,63 | 1 811,14
1 | 2 | 2 626,16 | 2 625,85
1 | 4 | 3 158,46 | 3 124,92
1 | 8 | 3 261,36 | 3 226,40

And for batches of up to 1000:

Threads | Nodes | Send msgs/s | Receive msgs/s
1 | 1 | 3 181,08 | 2 549,45
1 | 2 | 3 307,10 | 3 278,29
1 | 4 | 3 566,72 | 3 533,92
1 | 8 | 3 406,72 | 3 377,68

HornetQ

HornetQ, written by JBoss and part of JBoss AS (it implements JMS), is a strong contender. For some time now it has supported over-the-network replication using live-backup pairs. I tried setting up a 3-node cluster, but it seems that data is replicated only to one node, hence here we will be using a two-node cluster. This raises a question about how partitions are handled; by default the backup server won't automatically fail over – the operator must do that (turn the backup server into a live one). That's certainly a valid way of handling partitions, but usually not the preferred one. It is possible to add configuration to automatically detect that the primary died, but then we can easily end up with two live servers, and that raises the question of what happens with the data on both primaries when the connection is re-established. Overall, the replication support and documentation is worse than for Mongo and Rabbit. Also, as far as I understand the documentation (but I don't think it is stated clearly anywhere), replication is asynchronous, meaning that even though we send messages in a transaction, once the transaction commits, we can only be sure that messages are written to the primary node's journal. That is a weaker guarantee than in Rabbit, and corresponds to Mongo's safe write concern. The HornetMq implementation uses the core Hornet API. For sends, we are using transactions; for receives we rely on the internal receive buffers and turn off blocking confirmations (making them asynchronous). Interestingly, we can only receive one message at a time before acknowledging it, otherwise we get exceptions on the server. But this doesn't seem to impact performance. Speaking of performance, it is very good! A single-node, single-thread setup achieves 1 100 msgs/s. With 25 threads, we are up to 12 800 msgs/s!
And finally, with 25 threads and 4 nodes, we can achieve 17 000 msgs/s.

Threads | Nodes | Send msgs/s | Receive msgs/s
1  | 1 |  1 108,38 |  1 106,68
5  | 1 |  4 333,13 |  4 318,25
25 | 1 | 12 791,83 | 12 802,42
1  | 2 |  2 095,15 |  2 029,99
5  | 2 |  7 855,75 |  7 759,40
25 | 2 | 14 200,25 | 13 761,75
1  | 4 |  3 768,28 |  3 627,02
5  | 4 | 11 572,10 | 10 708,70
25 | 4 | 17 402,50 | 16 160,50

One final note: when trying to send messages using 25 threads in bulks of up to 1000, I once got into a situation where the backup considered the primary dead even though it was working, and another time the sending failed because the "address was blocked" (in other words, the queue was full and couldn't fit in memory), even though the receivers were working all the time. Maybe that's due to GC? Or just the very high load?

In summary

As always, which message queue you choose depends on specific project requirements. All of the above solutions have some good sides:

- SQS is a service, so especially if you are using the AWS cloud, it's an easy choice: good performance and no setup required
- if you are using Mongo, it is easy to build a replicated message queue on top of it, without the need to create and maintain a separate messaging cluster
- if you want to have high persistence guarantees, RabbitMQ ensures replication across the cluster and on disk on message send
- finally, HornetQ has the best performance

When looking only at the throughput, HornetQ is a clear winner (unless we include SQS with multiple nodes, but as mentioned, that would be unfair). There are of course many other aspects besides performance which should be taken into account when choosing a message queue, such as administration overhead, partition tolerance, feature set regarding routing, etc. Do you have any experiences with persistent, replicated queues? Or maybe you are using some other messaging solutions?

Reference: Evaluating persistent, replicated message queues from our JCG partner Adam Warski at the Blog of Adam Warski blog.

If BigDecimal is the answer, it must have been a strange question

Overview

Many developers have determined that BigDecimal is the only way to deal with money. Often they cite that by replacing double with BigDecimal, they fixed a bug or ten. What I find unconvincing about this is that perhaps they could have fixed the bug in the handling of double and avoided the extra overhead of using BigDecimal. By comparison, when asked to improve the performance of a financial application, I know that at some point we will be removing BigDecimal if it is there. (It is usually not the biggest source of delays, but as we fix the system it moves up to become the worst offender.)

BigDecimal is not an improvement

BigDecimal has many problems, so take your pick, but an ugly syntax is perhaps the worst sin.

- BigDecimal's syntax is unnatural
- BigDecimal uses more memory
- BigDecimal creates garbage
- BigDecimal is much slower for most operations (there are exceptions)

The following JMH benchmark demonstrates two problems with BigDecimal: clarity and performance. The core code takes an average of two values. The double implementation looks like this (note the need to use rounding):

mp[i] = round6((ap[i] + bp[i]) / 2);

The same operation using BigDecimal is not only longer, but there is lots of boilerplate code to navigate:

mp2[i] = ap2[i].add(bp2[i])
    .divide(BigDecimal.valueOf(2), 6, BigDecimal.ROUND_HALF_UP);

Does this give you different results? double has 15 digits of accuracy and these numbers are far less than 15 digits. If these prices had 17 digits, this wouldn't work – but neither would the poor human who has to comprehend the prices (i.e. they will never get incredibly long).

Performance

If you have to incur coding overhead, usually this is done for performance reasons, but that doesn't make sense here.

Benchmark | Mode | Samples | Score | Score error | Units
o.s.MyBenchmark.bigDecimalMidPrice | thrpt | 20 | 23638.568 | 590.094 | ops/s
o.s.MyBenchmark.doubleMidPrice | thrpt | 20 | 123208.083 | 2109.738 | ops/s

Conclusion

If you don't know how to use rounding with double, or your project mandates BigDecimal, then use BigDecimal. But if you have the choice, don't just assume that BigDecimal is the right way to go.
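To make the "fix the handling of double" argument concrete, here is a tiny standalone illustration (a sketch added for this point, not part of the original benchmark) of the representation artifact that controlled rounding removes:

public class Round6Demo {

    // Same rounding helper as in the benchmark below.
    static double round6(double x) {
        final double factor = 1e6;
        return (long) (x * factor + 0.5) / factor;
    }

    public static void main(String[] args) {
        double sum = 0.1 + 0.2;
        System.out.println(sum);          // prints 0.30000000000000004 - the classic "bug"
        System.out.println(round6(sum));  // prints 0.3 - fixed by rounding to 6 decimal places
    }
}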
The code:

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

import java.math.BigDecimal;
import java.util.Random;

@State(Scope.Thread)
public class MyBenchmark {
    static final int SIZE = 1024;
    final double[] ap = new double[SIZE];
    final double[] bp = new double[SIZE];
    final double[] mp = new double[SIZE];

    final BigDecimal[] ap2 = new BigDecimal[SIZE];
    final BigDecimal[] bp2 = new BigDecimal[SIZE];
    final BigDecimal[] mp2 = new BigDecimal[SIZE];

    public MyBenchmark() {
        Random rand = new Random(1);
        for (int i = 0; i < SIZE; i++) {
            int x = rand.nextInt(200000), y = rand.nextInt(10000);
            ap2[i] = BigDecimal.valueOf(ap[i] = x / 1e5);
            bp2[i] = BigDecimal.valueOf(bp[i] = (x + y) / 1e5);
        }
        doubleMidPrice();
        bigDecimalMidPrice();
        for (int i = 0; i < SIZE; i++) {
            if (mp[i] != mp2[i].doubleValue())
                throw new AssertionError(mp[i] + " " + mp2[i]);
        }
    }

    @Benchmark
    public void doubleMidPrice() {
        for (int i = 0; i < SIZE; i++)
            mp[i] = round6((ap[i] + bp[i]) / 2);
    }

    static double round6(double x) {
        final double factor = 1e6;
        return (long) (x * factor + 0.5) / factor;
    }

    @Benchmark
    public void bigDecimalMidPrice() {
        for (int i = 0; i < SIZE; i++)
            mp2[i] = ap2[i].add(bp2[i])
                .divide(BigDecimal.valueOf(2), 6, BigDecimal.ROUND_HALF_UP);
    }

    public static void main(String[] args) throws RunnerException {
        Options opt = new OptionsBuilder()
                .include(".*" + MyBenchmark.class.getSimpleName() + ".*")
                .forks(1)
                .build();

        new Runner(opt).run();
    }
}

Reference: If BigDecimal is the answer, it must have been a strange question from our JCG partner Peter Lawrey at the Vanilla Java blog.

Android RecyclerView

RecyclerView is one of the two UI widgets introduced by the support library in Android L. In this post I will describe how we can use it and what the difference is between this widget and the "classic" ListView. This new widget is more flexible than the ListView, but it introduces some complexities. As we are used to with these widgets, RecyclerView introduces a new adapter that must be used to represent the underlying data; it is called RecyclerView.Adapter. To use this widget you have to add the latest v7 support library.

Introduction

We know already that in the ListView, to increase performance, we have to use the ViewHolder pattern. This is simply a Java class that holds references to the widgets in the row layout of the ListView (for example TextView, ImageView and so on). Using this pattern we avoid calling the findViewById method several times to get UI widget references, making the ListView scrolling smoother. Even though this pattern was suggested as a best practice, we could implement our adapter without using it. RecyclerView enforces this pattern, making it the core of this UI widget, and we have to use it in our adapter.

The Adapter: RecyclerView.Adapter

If we want to show information in a ListView or in the new RecyclerView, we have to use an adapter. This component stands behind the UI widget and determines how the rows have to be rendered and what information to show. Also in the RecyclerView we have to use an adapter:

public class MyRecyclerAdapter extends RecyclerView.Adapter<MyRecyclerAdapter.MyHolder> {
    ....
}

where MyHolder is our implementation of the ViewHolder pattern. Let's suppose we have a simple row layout in our RecyclerView:

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:id="@+id/txt1"/>

    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:id="@+id/txt2"/>

</LinearLayout>

so our ViewHolder pattern implementation is:

public static class MyHolder extends RecyclerView.ViewHolder {
    protected TextView txt1;
    protected TextView txt2;

    private MyHolder(View v) {
        super(v);
        this.txt1 = (TextView) v.findViewById(R.id.txt1);
        this.txt2 = (TextView) v.findViewById(R.id.txt2);
    }
}

As you can notice, the lookup process (findViewById) is done in the view holder instead of in a getView method. Now we have to implement some important methods in our adapter to make it work properly; a skeleton of the full adapter is sketched below.
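Here is a minimal sketch of the complete adapter for orientation. This is my assembly of the pieces shown in this post, plus getItemCount() – a third mandatory override that the post itself doesn't spell out; Item is assumed to be a simple data class with name and descr fields:

import java.util.List;

import android.support.v7.widget.RecyclerView;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.TextView;

public class MyRecyclerAdapter extends RecyclerView.Adapter<MyRecyclerAdapter.MyHolder> {

    private final List<Item> itemList;

    public MyRecyclerAdapter(List<Item> itemList) {
        this.itemList = itemList;
    }

    // Mandatory: tells the RecyclerView how many rows the adapter holds.
    @Override
    public int getItemCount() {
        return itemList.size();
    }

    // These two callbacks are discussed in detail below.
    @Override
    public MyHolder onCreateViewHolder(ViewGroup viewGroup, int i) {
        View v = LayoutInflater.from(viewGroup.getContext())
                .inflate(R.layout.row_layout, null);
        return new MyHolder(v);
    }

    @Override
    public void onBindViewHolder(MyHolder myHolder, int i) {
        Item item = itemList.get(i);
        myHolder.txt1.setText(item.name);
        myHolder.txt2.setText(item.descr);
    }

    // The view holder shown above.
    public static class MyHolder extends RecyclerView.ViewHolder {
        protected TextView txt1;
        protected TextView txt2;

        private MyHolder(View v) {
            super(v);
            this.txt1 = (TextView) v.findViewById(R.id.txt1);
            this.txt2 = (TextView) v.findViewById(R.id.txt2);
        }
    }
}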
There are two important methods we have to override:

- onCreateViewHolder(ViewGroup viewGroup, int i)
- onBindViewHolder(MyHolder myHolder, int i)

onCreateViewHolder is called whenever a new instance of our view holder class must be instantiated, so this method becomes:

@Override
public MyHolder onCreateViewHolder(ViewGroup viewGroup, int i) {
    numCreated++;
    Log.d("RV", "OncreateViewHolder ["+numCreated+"]");
    View v = LayoutInflater.from(viewGroup.getContext()).inflate(R.layout.row_layout, null);
    MyHolder mh = new MyHolder(v);
    return mh;
}

As you can notice, the method returns an instance of our view holder implementation. The second method is called when the OS binds the view with the data, so we set the UI widget content to the data values:

@Override
public void onBindViewHolder(MyHolder myHolder, int i) {
    Log.d("RV", "OnBindViewHolder");
    Item item = itemList.get(i);
    myHolder.txt1.setText(item.name);
    myHolder.txt2.setText(item.descr);
}

Notice that we don't make a lookup, but simply use the UI widget references stored in our view holder.

RecyclerView

Now that we have our adapter, we can create our RecyclerView:

RecyclerView rv = (RecyclerView) findViewById(R.id.my_recycler_view);
rv.setLayoutManager(new LinearLayoutManager(this));
MyRecyclerAdapter adapter = new MyRecyclerAdapter(createList());
rv.setAdapter(adapter);

LinearLayoutManager is the "main" layout manager used to lay out items inside the RecyclerView. We can also extend it or implement our own layout manager.

Final considerations

Running the example, the most interesting aspect is how many times onCreateViewHolder is called compared to the number of items shown. If you look at the log, you will find that the number of objects created is about 1/3 of the total number displayed.

Reference: Android RecyclerView from our JCG partner Francesco Azzola at the Surviving w/ Android blog.

Gearing up for JavaOne 2014 !

Hold that thought! Yeah… I wish I was presenting at JavaOne 2014 – but I am only worthy of doing that in my dreams right now! But nothing is stopping me from following JavaOne and tracking sessions/talks about my favorite topics. I am hoping Oracle will make the 2014 talks available online for mortals like us, just like it did for the 2013 edition. I have already made my list of talks (see below) which I am going to pounce upon (once they are available)… Have you?

- Lessons Learned from Real-World Deployments of Java EE 7 – Arun Gupta dissects live Java EE 7 applications and shares his thoughts and learnings.
- 50 EJB 3 Best Practices in 50 Minutes – need I say more…?
- eBay, Connecting Buyers and Sellers Globally via JavaServer Faces
- Applied Domain-Driven Design Blueprints for Java EE – Have you been following and learning from the Cargo Tracker Java EE application already? You are going to love this talk by Reza and Vijay.
- JPA Gotchas and Best Practices: Lessons from Overstock.com – nothing like listening to a 'real' world example.
- RESTful Microservices – Never heard of this one! It's about leveraging Jersey 2.0 to achieve what you haven't until now!
- Java API for JSON Binding: Introduction and Update – Potential Java EE 8 candidate and JAXB's better half!
- Is Your Code Parallel-Ready? – Love hacking on the Java 8 Stream API? Yes… this one is for you!
- Java EE Game Changers – Tomitribe's David Blevins will talk about the journey from J2EE to Java EE 7, provide insights into Java EE 8 candidates and much more!
- Programming with Streams in Java 8 – I don't think I need to comment.
- Unorthodox Enterprise Practices – Thoughts on Java EE design choices with live coding… by one of my favorites… Adam Bien.
- Inside the CERT Oracle Secure Coding Standard for Java
- RESTing on Your Laurels Will Get You Pwned – While the rest of the world is going ga-ga about REST, this talk will attempt to look at the dark side and empower us with skills to mitigate REST-related vulnerabilities.
- Java EE 7 Batch Processing in the Real World – another live coding session using Java EE 7 standards!
- Scaling a Mobile Startup Through the Cloud: A True Story – The folks at CodenameOne reveal their secrets.
- The Anatomy of a Secure Web Application Using Java – developing and deploying a secure Java EE web application in the cloud… now who does not want to do that?!
- Dirty Data: Why XML and JSON Don't Cut It and What You Can Do About It – I haven't thought beyond XML and JSON, but apparently some guys have, and you must listen to them!
- API Design Checklist – no comments needed.
- Java EE 7 and Spring 4: A Shootout – I have been waiting for this!
- The Path to CDI 2.0 – Antoine, the CDI guy at Red Hat, talks about… of course… CDI! This includes discussion of 1.1/1.2 along with what's coming up in CDI 2.0.
- Five Keys for Securing Java Web Apps – looks like I am repeating myself here… but too much security can never be bad… right?
- Java EE 7 Recipes – Yeah! Cooked up by Josh Juneau.

There is still time… Pick and gear up for your favorite tracks. More details on the official JavaOne website. Until then… Happy Learning!

Reference: Gearing up for JavaOne 2014 ! from our JCG partner Abhishek Gupta at the Object Oriented.. blog.

You Probably Don’t Need a Message Queue

I'm a minimalist, and I don't like to complicate software too early and unnecessarily. And adding components to a software system is one of the things that adds a significant amount of complexity. So let's talk about message queues. Message queues are systems that let you have fault-tolerant, distributed, decoupled, etc., etc. architecture. That sounds good on paper. Message queues may fit several use-cases in your application. You can check this nice article about the benefits of MQs for what some use-cases might be. But don't be hasty in picking an MQ because "decoupling is good", for example.

Let's use an example – you want your email sending to be decoupled from your order processing. So you post a message to a message queue, then the email processing system picks it up and sends the emails. How would you do that in a monolithic, single-classpath application? Just make your order processing service depend on an email service, and call sendEmail(..) rather than sendToMQ(emailMessage). If you use an MQ, you define a message format to be recognized by the two systems; if you don't use an MQ, you define a method signature. What is the practical difference? Not much, if any.

But then you probably want to be able to add another consumer that does an additional thing with a given message? That might indeed happen, it's just not the case for the regular project out there. And even if it is, it's not worth it compared to adding just another method call. Coupled – yes. But not inconveniently coupled.

What if you want to handle spikes? Message queues give you the ability to put requests in a persistent queue and process all of them. That is a very useful feature, but again it's limited based on several factors – are your requests processed in the UI background, or do they require an immediate response? The servlet container thread pool can be used as a sort-of queue – the response will be served eventually, but the user will have to wait (if the thread acquisition timeout is too small, requests will be dropped, though). Or you can use an in-memory queue for the heavier requests (that are handled in the UI background). And note that by default your MQ might not be highly available. E.g. if an MQ node dies, you lose messages. So that's not a benefit over an in-memory queue in your application node.

Which leads us to asynchronous processing – this is indeed a useful feature. You don't want to do some heavy computation while the user is waiting. But you can use an in-memory queue, or simply start a new thread (a la Spring's @Async annotation). Here comes another aspect – does it matter if a message is lost? If your application node processing the request dies, can you recover? You'll be surprised how often it doesn't actually matter, and you can function properly without guaranteeing all messages are processed. So just asynchronously handling heavier invocations might work well.

Even if you can't afford to lose messages, for the use-case where a message is put into a queue in order for another component to process it, there's still a simple solution – the database. You put a row with a processed=false flag in the database. A scheduled job runs, picks all unprocessed rows and processes them asynchronously. Then, when processing is finished, it sets the flag to true. I've used this approach a number of times, including in large production systems, and it works pretty well. And you can still scale your application nodes endlessly, as long as you don't have any persistent state in them, regardless of whether you are using an MQ or not (temporary in-memory processing queues are not persistent state).
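A minimal sketch of that database-as-queue pattern, using Spring's JdbcTemplate and @Scheduled – the table and column names are illustrative, and scheduling is assumed to be enabled with @EnableScheduling:

import java.util.List;
import java.util.Map;

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class PendingTaskProcessor {

    private final JdbcTemplate jdbc;

    public PendingTaskProcessor(JdbcTemplate jdbc) {
        this.jdbc = jdbc;
    }

    // Poll for unprocessed rows every 5 seconds.
    @Scheduled(fixedDelay = 5000)
    public void processPending() {
        List<Map<String, Object>> rows =
                jdbc.queryForList("SELECT id, payload FROM task WHERE processed = false");
        for (Map<String, Object> row : rows) {
            handle((String) row.get("payload"));
            // Mark the row as done only after successful processing.
            jdbc.update("UPDATE task SET processed = true WHERE id = ?", row.get("id"));
        }
    }

    private void handle(String payload) {
        // the actual asynchronous work goes here
    }
}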
Why am I trying to give alternatives to common usages of message queues? Because if chosen for the wrong reason, an MQ can be a burden. They are not as easy to use as it sounds. First, there's a learning curve. Generally, the more separate integrated components you have, the more problems may arise. Then there's setup and configuration. E.g. when the MQ has to run in a cluster, in multiple data centers (for HA), it becomes complex. High availability itself is not trivial – it's not normally turned on by default. And how does your application node connect to the MQ? Via a refreshing connection pool, using a short-lived DNS record, via a load balancer? Then your queues have tons of configuration options – what's their size, what's their behaviour (should consumers explicitly acknowledge receipt, should they explicitly acknowledge failure to process messages, should multiple consumers get the same message or not, should messages have a TTL, etc.). Then there's the network and message transfer overhead – especially given that people often choose JSON or XML for transferring messages. If you overuse your MQ, it adds latency to your system. And last, but not least, it's harder to track the program flow when analyzing problems. You can't just see the "call hierarchy" in your IDE, because once you send a message to the MQ, you need to go and find where it is handled. And that's not always as trivial as it sounds. You see, it adds a lot of complexity and things to take care of.

Certainly, MQs are very useful in some contexts. I've been using them in projects where they were really a good fit – e.g. where we couldn't afford to lose messages and we needed fast processing (so polling the database wasn't an option). I've also seen them used in non-trivial scenarios, where we were using them to consume messages on a single application node, regardless of which node posts the message (pub/sub). And you can also check this StackOverflow question. And maybe you really need to have multiple languages communicate (but don't want an ESB), or maybe your flow is getting so complex that adding a new method call instead of a new message consumer is overkill.

So all I'm trying to say here is the trite truism "you should use the right tool for the job". Don't pick a message queue if you haven't identified a real use for it that can't be easily handled in a different, easier to set up and maintain manner. And don't start with an MQ "just in case" – add it whenever you realize the actual need for it. Because probably, in the regular project out there, a message queue is not needed.

Reference: You Probably Don't Need a Message Queue from our JCG partner Bozhidar Bozhanov at the Bozho's tech blog blog.

Step By Step Path to Becoming a Great Software Developer

I get quite a few emails that basically say "how do I become a good / great software developer?" These kinds of emails generally tick me off, because I feel like when you ask this kind of question, you are looking for some magical potion you can take that will suddenly make you into a super developer. I suspect that very few people who email me asking this question really want to know how to become a great software developer, but are instead looking for a quick fix or an easy answer. On the other hand, I think there are some genuinely sincere developers that just don't even know how to formulate the right questions they need to ask to get to their desired future. I think those developers–especially the ones starting out–are looking for a step-by-step guide to becoming a great developer.

I thought I would make an attempt, from my experience and the best of my knowledge, to offer up that step-by-step guide. Now, of course, I realize that there is no magical formula and that there are multiple paths to success, but I think what follows is a reasonable outline of steps someone starting out could take to reach a pretty high level of proficiency and be generally regarded as a good–perhaps even great–developer.

Step 1: Pick one language, learn the basics

Before we can run, we have to learn to walk. You walk by learning how to program in a single programming language. You don't learn to walk by trying to learn 50 million things all at once and spreading yourself way too thin. Too many beginning programmers try and jump into everything all at once and don't have the patience to learn a single programming language before moving forward. They think that they have to know all the hot new technologies in order to get a programming job. While it is true that you need to know more than just the basics of a single programming language, you have to start here, so you might as well focus.

Pick a single programming language that you think you would be likely to base your career around. The programming language itself doesn't matter all that much, since you should be thinking for the long term here. What I mean is you shouldn't try and learn an "easy" programming language to start. Just learn whatever language you are interested in and could see yourself programming in for the next few years. You want to pick something that will have some lasting value.

Once you've picked the programming language you are going to try and learn, try and find some books or tutorials that isolate that programming language. What I mean by this is that you don't want to find learning materials that will teach you too much all at once. You want to find beginner materials that focus on just the language, not a full technology stack. As you read through the material or go through the tutorial you have picked out, make sure you actually write code. Do exercises if you can. Try out what you learned. Try to put things together and use every concept you learn about. Yes, this is a pain. Yes, it is easier to read a book cover-to-cover, but if you really want to learn you need to do.

When you are writing code, try to make sure you understand what every line of code you write does. The same goes for any code you read. If you are exposed to code, slow down and make sure you understand it. Whatever you don't understand, look up. Take the time to do this and you will not feel lost and confused all the time. Finally, expect to go through a book or tutorial three times before it clicks.
You will not get "programming" on the first try–no one ever does. You need repeated exposure before you start to finally get it and can understand what is going on. Until then you will feel pretty lost; that is ok, it is part of the process. Just accept it and forge ahead.

Step 2: Build something small

Now that you have a basic understanding of a single programming language, it's time to put that understanding to work and find out where your gaps are. The best way to do this is to try and build something. Don't get too ambitious at this point–but also don't be too timid. Pick an idea for an application that is simple enough that you can do it with some effort, but nothing that will take months to complete. Try to confine it to just the programming language as much as possible. Don't try to do something full stack (meaning, using all the technologies from user interfaces all the way to databases)–although you'll probably need to utilize some kind of existing framework or APIs. For your first real project you might want to consider copying something simple that already exists. Look for a simple application, like a To-Do list app, and straight out try to copy it. Don't let your design skills stand in the way of learning to code.

I'd recommend creating a mobile application of some kind, since most mobile applications are small and pretty simple. Plus, learning mobile development skills is very useful as more and more companies are starting to need mobile applications. Today, you can build a mobile application in just about any language. There are many solutions that let you build an app for the different mobile OSes using a wide variety of programming languages. You could also build a small web application, but just try to not get too deep into a complex web development stack. I generally recommend starting with a mobile app, because web development has a higher cost of entry. To develop a web application you'll need to at least know some HTML, probably some back-end framework and JavaScript.

Regardless of what you choose to build, you are probably going to have to learn a little bit about some framework–this is good, just don't get too bogged down in the details. For example, you can write a pretty simple Android application without having to really know a lot about all of the Android APIs and how Android works, just by following some simple tutorials. Just don't waste too much time trying to learn everything about a framework. Learn what you need to know to get your project done. You can learn the details later.

Oh, and this is supposed to be difficult. That is how you learn. You struggle to figure out how to do something, then you find the answer. Don't skip this step. You'll never reach a point as a software developer where you don't have to learn things on the spot and figure things out as you go along. This is good training for your future.

Step 3: Learn a framework

Now it's time to actually focus on a framework. By now you should have a decent grasp of at least one programming language and have some experience working with a framework for mobile or web applications. Pick a single framework to learn that will allow you to be productive in some environment. What kind of framework you choose to learn will be based on what kind of developer you want to become. If you want to be a web developer, you'll want to learn a web development framework for whatever programming language you are programming in.
If you want to become a mobile developer, you'll need to learn a mobile OS and the framework that goes with it. Try to go deep with your knowledge of the framework. This will take time, but invest the time to learn whatever framework you are using well. Don't try to learn multiple frameworks right now–it will only split your focus. Think about learning the skills you need for a very specific job that you will get that will use that framework and the programming language you are learning. You can always expand your skills later.

Step 4: Learn a database technology

Most software developers will need to know some database technology, as most serious applications have a back-end database. So, make sure you do not neglect investing in this area. You will probably see the biggest benefit if you learn SQL–even if you plan on working with a NoSQL database like MongoDB or Raven, learning SQL will give you a better base to work from. There are many more jobs out there that require knowledge of SQL than NoSQL. Don't worry so much about the flavor of SQL. The different SQL technologies are similar enough that you shouldn't have a problem switching between them if you know the basics of one SQL technology. Just make sure you learn the basics about tables, queries, and other common database operations. I'd recommend getting a good book on the SQL technology of your choice and creating a few small sample projects, so you can practice what you are learning–always practice what you are learning. You have sufficient knowledge of SQL when you can:

- Create tables
- Perform basic queries
- Join tables together to get data
- Understand the basics of how indexes work
- Insert, update and delete data

In addition, you will want to learn some kind of object relational mapping (ORM) technology. Which one you learn will depend on what technology stack you are working with. Look for ORM technologies that fit the framework you have learned. There might be a few options, so your best bet is to try to pick the most popular one.

Step 5: Get a job supporting an existing system

Ok, now you have enough skills and knowledge to get a basic job as a software developer. If you could show me that you understand the basics of a programming language, can work with a framework, understand databases and have built your own application, I would certainly want to hire you–as would many employers.

The key here is not to aim too high and to be very specific. Don't try and get your dream job right now–you aren't qualified. Instead, try and get a job maintaining an existing code base that is built using the programming language and framework you have been learning. You might not be able to find an exact match, but the more specific you can be, the better. Try to apply for jobs that exactly match the technologies you have been learning. Even without much experience, if you match the skill-set exactly and you are willing to be a maintenance programmer, you should be able to find a job. Yes, this kind of job might be a bit boring. It's not nearly as exciting as creating something new, but the purpose of this job is not to have fun or to make money, it is to learn and gain experience. Working on an existing application, with a team of developers, will help you to expand your skills and see how large software systems are structured. You might be fixing bugs and adding small features, but you'll be learning and putting your skills into action. Pour your heart into this job. Learn everything you can. Do the best work possible.
Don't think about money, raises and playing political games–all that comes later–for now, just focus on getting as much meaningful productive work done as possible and expanding your skills.

Step 6: Learn structural best practices

Now it's time to start becoming better at writing code. Don't worry too much about design at this point. You need to learn how to write good clean code that is easy to understand and maintain. In order to do this, you'll need to read a lot and see many examples of good code. Beef up your library with the following books:

- Code Complete
- Clean Code
- Refactoring
- Working Effectively With Legacy Code
- Programming Pearls (do the exercises)

And language-specific structural books like:

- JavaScript: The Good Parts
- Effective Java
- Effective C#

At this point you really want to focus your learning on the structural process of writing good code and working with existing systems. You should strive to be able to easily implement an algorithm in your programming language of choice and to do it in a way that is easy to read and understand.

Step 7: Learn a second language

At this point you will likely grow the most by learning a second programming language really well. You will no doubt, at this point, have been exposed to more than one programming language, but now you will want to focus on a new language–ideally one that is significantly different from the one you know. This might seem like an odd thing to do, but let me explain why this is so important. When you know only one programming language very well, it is difficult to understand which concepts in software development are unique to your programming language and which ones transcend a particular language or technology. If you spend time in a new language and programming environment, you'll begin to see things in a new way. You'll start to learn practicality rather than convention. As a new programmer, you are very likely to do things in a particular way without knowing why you are doing them that way. Once you have a second language and technology stack under your belt, you'll start to see more of the why. Trust me, you'll grow if you do this. Especially if you pick a language you hate. Make sure you build something with this new language. It doesn't have to be anything large, but something of enough complexity to force you to scratch your head and perhaps bang it against the wall–gently.

Step 8: Build something substantial

Alright, now comes the true test to prove your software development abilities. Can you actually build something substantial on your own? If you are going to move on and have the confidence to get a job building, and perhaps even designing, something for an employer, you need to know you can do it. There is no better way to know it than to do it. Pick a project that will use the full stack of your skills. Make sure you incorporate a database, a framework and everything else you need to build a complete application. This project should be something that will take you more than a week and require some serious thinking and design. Try to make it something you can charge money for, so that you take it seriously and have some incentive to keep working on it. Just make sure you don't drag it out. You still don't want to get too ambitious here. Pick a project that will challenge you, but not one that will never be completed. This is a major turning point in your career. If you have the follow-through to finish this project, you'll go far; if you don't… well, I can't make any promises.
Step 9: Get a job creating a new system

Ok, now it's time to go job hunting again. By this point, you should have pretty much maxed out the benefit you can get from your current job–especially if it still involves only doing maintenance. It's time to look for a job that will challenge you–but not too much. You still have a lot to learn, so you don't want to get in too far over your head. Ideally, you want to find a job where you'll get the opportunity to work on a team building something new. You might not be the architect of the application, but being involved in the creation of an application will help you expand your skills and challenge you in different ways than just maintaining an existing code base. You should already have some confidence with creating a new system, since you'll have just finished creating a substantial system yourself, so you can walk into interviews without being too nervous and with the belief that you can do the job. This confidence will make it much more likely that you'll get whatever job you apply for. Make sure you make your job search focused again. Highlight a specific set of skills that you have acquired. Don't try to impress everyone with a long list of irrelevant skills. Focus on the most important skills and look for jobs that exactly match them–or at least match them as closely as possible.

Step 10: Learn design best practices

Now it's time to go from junior developer to senior developer. Junior developers maintain systems, senior developers build and design them. (This is a generalization, obviously. Some senior developers maintain systems.) You should be ready to build systems by now, but now you need to learn how to design them. You should focus your studies on design best practices and some advanced topics like:

- Design patterns
- Inversion of Control (IoC)
- Test Driven Development (TDD)
- Behavior Driven Development (BDD)
- Software development methodologies like Agile, SCRUM, etc.
- Message buses and integration patterns

This list could go on for quite some time–you'll never be done learning and improving your skills in these areas. Just make sure you start with the most important things first–which will be highly dependent on what interests you and where you would like to take your career. Your goal here is to be able to not just build a system that someone else has designed, but to form your own opinions about how software should be designed and what kinds of architectures make sense for what kinds of problems.

Step 11: Keep going

At this point you've made it–well, you'll never really "make it," but you should be a pretty good software developer–maybe even "great." Just don't let it go to your head; you'll always have something to learn. How long did it take you to get here? I have no idea. It was probably at least a few years, but it might have been 10 or more–it just depends on how dedicated you were and what opportunities presented themselves to you. A good shortcut is to try and always surround yourself with developers better than you are.

What to do along the way

There are a few things that you should be doing along the way as you are following this step-by-step process. It would be difficult to list them in every step, so I'll list them all briefly here:

Teach – The whole time you are learning things, you should be teaching them as well. It doesn't matter if you are a beginner or an expert; you have something valuable to teach, and besides, teaching is one of the best ways to learn. Document your process and journey, help others along the way.
Market yourself – I think this is so important that I built an entire course around the idea. Learn how to market yourself and continually do it throughout your career. Figure out how to create a personal brand for yourself and build a reputation for yourself in the industry, and you'll never be at want for a job. You'll get to decide your own future if you learn to market yourself. It takes some work, but it is well worth it. You are reading this post because of my effort to do it.

Read – Never stop learning. Never stop reading. Always be working your way through a book. Always be improving yourself. Your learning journey is never done. You can't ever know it all. If you constantly learn during your career, you'll constantly surpass your peers.

Do – Every step along the way, don't just learn, but do. Put everything you are learning into action. Set aside time to practice your skills and to write code and build things. You can read all the books on golfing that you want, but you'll never be Tiger Woods if you don't swing a golf club.

Reference: Step By Step Path to Becoming a Great Software Developer from our JCG partner John Sonmez at the Making the Complex Simple blog.

Getting Started with Gradle: Dependency Management

It is challenging, if not impossible, to create real-life applications which don’t have any external dependencies. That is why dependency management is a vital part of every software project. This blog post describes how we can manage the dependencies of our projects with Gradle. We will learn to configure the used repositories and the required dependencies. We will also apply this theory to practice by implementing a simple example application. Let’s get started.

Additional Reading: Getting Started with Gradle: Introduction helps you to install Gradle, describes the basic concepts of a Gradle build, and describes how you can add functionality to your build by using Gradle plugins. Getting Started with Gradle: Our First Java Project describes how you can create a Java project by using Gradle and package your application to an executable jar file.

Introduction to Repository Management

Repositories are essentially dependency containers, and each project can use zero or more repositories. Gradle supports the following repository formats: Ivy repositories, Maven repositories, and flat directory repositories. Let’s find out how we can configure each repository type in our build.

Adding Ivy Repositories to Our Build

We can add an Ivy repository to our build by using its URL address or its location in the local file system. If we want to add an Ivy repository by using its URL address, we have to add the following code snippet to the build.gradle file:

repositories {
    ivy {
        url "http://ivy.petrikainulainen.net/repo"
    }
}

If we want to add an Ivy repository by using its location in the file system, we have to add the following code snippet to the build.gradle file:

repositories {
    ivy {
        url "../ivy-repo"
    }
}

If you want to get more information about configuring Ivy repositories, you should check out the following resources: section 50.6.6 Ivy Repositories of the Gradle User Guide, and the API documentation of the IvyArtifactRepository. Let’s move on and find out how we can add Maven repositories to our build.

Adding Maven Repositories to Our Build

We can add a Maven repository to our build by using its URL address or its location in the local file system. If we want to add a Maven repository by using its URL, we have to add the following code snippet to the build.gradle file:

repositories {
    maven {
        url "http://maven.petrikainulainen.net/repo"
    }
}

If we want to add a Maven repository by using its location in the file system, we have to add the following code snippet to the build.gradle file:

repositories {
    maven {
        url "../maven-repo"
    }
}

Gradle has three “aliases” which we can use when we are adding Maven repositories to our build: the mavenCentral() alias means that dependencies are fetched from the central Maven 2 repository; the jcenter() alias means that dependencies are fetched from Bintray’s JCenter Maven repository; and the mavenLocal() alias means that dependencies are fetched from the local Maven repository.

If we want to add the central Maven 2 repository to our build, we must add the following snippet to our build.gradle file:

repositories {
    mavenCentral()
}

If you want to get more information about configuring Maven repositories, you should check out section 50.6.4 Maven Repositories of the Gradle User Guide. A build can also combine several repositories, as the sketch below shows.
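As a minimal sketch (this snippet is not from the original article, and the in-house repository URL is a hypothetical placeholder), a build script can declare several repositories at once. Gradle searches them in the order in which they are declared, so placing the most frequently used repository first can speed up dependency resolution:

repositories {
    // Searched first: the local Maven repository (typically ~/.m2/repository)
    mavenLocal()
    // Searched second: the central Maven 2 repository
    mavenCentral()
    // Searched last: a hypothetical in-house Maven repository
    maven {
        url "http://repo.example.com/maven"
    }
}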
Let’s move on and find out how we can add flat directory repositories to our build.

Adding Flat Directory Repositories to Our Build

If we want to use flat directory repositories, we have to add the following code snippet to our build.gradle file:

repositories {
    flatDir {
        dirs 'lib'
    }
}

This means that dependencies are searched from the lib directory. Also, if we want to, we can use multiple directories by adding the following snippet to the build.gradle file:

repositories {
    flatDir {
        dirs 'libA', 'libB'
    }
}

If you want to get more information about flat directory repositories, you should check out the following resources: section 50.6.5 Flat directory repository of the Gradle User Guide, and the Flat Dir Repository post on the gradle-user mailing list. Let’s move on and find out how we can manage the dependencies of our project with Gradle.

Introduction to Dependency Management

After we have configured the repositories of our project, we can declare its dependencies. If we want to declare a new dependency, we have to follow these steps: first, specify the configuration of the dependency; second, declare the required dependency. Let’s take a closer look at these steps.

Grouping Dependencies into Configurations

In Gradle, dependencies are grouped into named sets of dependencies. These groups are called configurations, and we use them to declare the external dependencies of our project. The Java plugin specifies several dependency configurations, which are described in the following:

- The dependencies added to the compile configuration are required when the source code of our project is compiled.
- The runtime configuration contains the dependencies which are required at runtime. This configuration contains the dependencies added to the compile configuration.
- The testCompile configuration contains the dependencies which are required to compile the tests of our project. This configuration contains the compiled classes of our project and the dependencies added to the compile configuration.
- The testRuntime configuration contains the dependencies which are required when our tests are run. This configuration contains the dependencies added to the compile, runtime, and testCompile configurations.
- The archives configuration contains the artifacts (e.g. jar files) produced by our project.
- The default configuration contains the dependencies which are required at runtime.

Because each configuration builds on the ones it extends, a dependency declared once in the compile configuration is visible everywhere it is needed; the sketch after this list shows one way to see this for yourself.
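As a hedged sketch (the task name is mine and is not part of the original article), one way to inspect what ends up in a configuration is to iterate over its resolved files from a small custom task; the standard gradle dependencies command prints the same information as a tree for every configuration:

// Prints the jar files that make up the testRuntime configuration.
// Because testRuntime extends compile, runtime, and testCompile,
// dependencies declared in those configurations show up here too.
task printTestRuntimeDeps << {
    configurations.testRuntime.each { File dep ->
        println dep.name
    }
}

Running gradle printTestRuntimeDeps after declaring some dependencies would list the test libraries together with everything inherited from the compile and runtime configurations.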
Let’s move on and find out how we can declare the dependencies of our Gradle project.

Declaring the Dependencies of a Project

The most common dependencies are called external dependencies, which are found in an external repository. An external dependency is identified by using the following attributes: the group attribute identifies the group of the dependency (Maven users know this attribute as groupId); the name attribute identifies the name of the dependency (Maven users know this attribute as artifactId); and the version attribute specifies the version of the external dependency (Maven users know this attribute as version). These attributes are required when you use Maven repositories. If you use other repositories, some attributes might be optional. For example, if you use a flat directory repository, you might have to specify only the name and version.

Let’s assume that we have to declare the following dependency: the group of the dependency is ‘foo’, the name of the dependency is ‘foo’, the version of the dependency is 0.1, and the dependency is required when our project is compiled. We can declare this dependency by adding the following code snippet to the build.gradle file:

dependencies {
    compile group: 'foo', name: 'foo', version: '0.1'
}

We can also declare the dependencies of our project by using a shortcut form which follows this syntax: [group]:[name]:[version]. If we want to use the shortcut form, we have to add the following code snippet to the build.gradle file:

dependencies {
    compile 'foo:foo:0.1'
}

We can also add multiple dependencies to the same configuration. If we want to use the “normal” syntax when we declare our dependencies, we have to add the following code snippet to the build.gradle file:

dependencies {
    compile(
        [group: 'foo', name: 'foo', version: '0.1'],
        [group: 'bar', name: 'bar', version: '0.1']
    )
}

On the other hand, if we want to use the shortcut form, the relevant part of the build.gradle file looks as follows:

dependencies {
    compile 'foo:foo:0.1', 'bar:bar:0.1'
}

It is naturally possible to declare dependencies which belong to different configurations. For example, if we want to declare dependencies which belong to the compile and testCompile configurations, we have to add the following code snippet to the build.gradle file:

dependencies {
    compile group: 'foo', name: 'foo', version: '0.1'
    testCompile group: 'test', name: 'test', version: '0.1'
}

Again, it is possible to use the shortcut form. If we want to declare the same dependencies by using the shortcut form, the relevant part of the build.gradle file looks as follows:

dependencies {
    compile 'foo:foo:0.1'
    testCompile 'test:test:0.1'
}

You can get more information about declaring your dependencies by reading section 50.4 How to declare your dependencies of the Gradle User Guide. We have now learned the basics of dependency management. Let’s move on and implement our example application.

Creating the Example Application

The requirements of our example application are described in the following: the build script of the example application must use the Maven central repository; the example application must write the received message to a log by using Log4j; the example application must contain unit tests, written with JUnit, which ensure that the correct message is returned; and our build script must create an executable jar file. Let’s find out how we can fulfil these requirements.

Configuring the Repositories of Our Build

One of the requirements of our example application was that its build script must use the Maven central repository. After we have configured our build script to use the Maven central repository, its source code looks as follows:

apply plugin: 'java'

repositories {
    mavenCentral()
}

jar {
    manifest {
        attributes 'Main-Class': 'net.petrikainulainen.gradle.HelloWorld'
    }
}

Let’s move on and declare the dependencies of our example application.

Declaring the Dependencies of Our Example Application

We have to declare two dependencies in the build.gradle file: Log4j (version 1.2.17), which is used to write the received message to the log, and JUnit (version 4.11), which is used to write unit tests for our example application. After we have declared these dependencies, the build.gradle file looks as follows:

apply plugin: 'java'

repositories {
    mavenCentral()
}

dependencies {
    compile 'log4j:log4j:1.2.17'
    testCompile 'junit:junit:4.11'
}

jar {
    manifest {
        attributes 'Main-Class': 'net.petrikainulainen.gradle.HelloWorld'
    }
}

Let’s move on and write some code.
Writing the Code

In order to fulfil the requirements of our example application, we have to over-engineer it a bit. We can create the example application by following these steps: create a MessageService class which returns the string ‘Hello World!’ when its getMessage() method is called; create a MessageServiceTest class which ensures that the getMessage() method of the MessageService class returns the string ‘Hello World!’; create the main class of our application, which obtains the message from a MessageService object and writes the message to a log by using Log4j; and configure Log4j. Let’s go through these steps one by one.

First, we have to create a MessageService class in the src/main/java/net/petrikainulainen/gradle directory and implement it. After we have done this, its source code looks as follows:

package net.petrikainulainen.gradle;

public class MessageService {

    public String getMessage() {
        return "Hello World!";
    }
}

Second, we have to create a MessageServiceTest class in the src/test/java/net/petrikainulainen/gradle directory and write a unit test for the getMessage() method of the MessageService class. The source code of the MessageServiceTest class looks as follows:

package net.petrikainulainen.gradle;

import org.junit.Before;
import org.junit.Test;

import static org.junit.Assert.assertEquals;

public class MessageServiceTest {

    private MessageService messageService;

    @Before
    public void setUp() {
        messageService = new MessageService();
    }

    @Test
    public void getMessage_ShouldReturnMessage() {
        assertEquals("Hello World!", messageService.getMessage());
    }
}

Third, we have to create a HelloWorld class in the src/main/java/net/petrikainulainen/gradle directory. This class is the main class of our application. It obtains the message from a MessageService object and writes it to a log by using Log4j. The source code of the HelloWorld class looks as follows:

package net.petrikainulainen.gradle;

import org.apache.log4j.Logger;

public class HelloWorld {

    private static final Logger LOGGER = Logger.getLogger(HelloWorld.class);

    public static void main(String[] args) {
        MessageService messageService = new MessageService();

        String message = messageService.getMessage();
        LOGGER.info("Received message: " + message);
    }
}

Fourth, we have to configure Log4j by using the log4j.properties file, which is found in the src/main/resources directory. The log4j.properties file looks as follows:

log4j.appender.Stdout=org.apache.log4j.ConsoleAppender
log4j.appender.Stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.Stdout.layout.conversionPattern=%-5p - %-26.26c{1} - %m\n

log4j.rootLogger=DEBUG,Stdout

That is it. Let’s find out how we can run the tests of our example application.

Running the Unit Tests

We can run our unit tests by using the following command: gradle test. When our test passes, we see the following output:

> gradle test
:compileJava
:processResources
:classes
:compileTestJava
:processTestResources
:testClasses
:test

BUILD SUCCESSFUL

Total time: 4.678 secs

However, if our unit test failed, we would see the following output:

> gradle test
:compileJava
:processResources
:classes
:compileTestJava
:processTestResources
:testClasses
:test

net.petrikainulainen.gradle.MessageServiceTest > getMessage_ShouldReturnMessage FAILED
    org.junit.ComparisonFailure at MessageServiceTest.java:22

1 test completed, 1 failed
:test FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':test'.
> There were failing tests.
See the report at: file:///Users/loke/Projects/Java/Blog/gradle-examples/dependency-management/build/reports/tests/index.html

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.

BUILD FAILED

Total time: 4.461 secs

As we can see, if our unit test fails, the output describes: which tests failed; how many tests were run and how many tests failed; and the location of the test report, which provides additional information about the failed (and passed) tests.

When we run our unit tests, Gradle creates test reports in the following directories: the build/test-results directory contains the raw data of each test run, and the build/reports/tests directory contains an HTML report which describes the results of our tests. The HTML test report is a very useful tool because it describes the reason why our test failed. For example, if our unit test expected that the getMessage() method of the MessageService class returned the string ‘Hello Worl1d!’, the HTML test report for that test case would show the resulting comparison failure. Let’s move on and find out how we can package and run our example application.

Packaging and Running Our Example Application

We can package our application by using one of these commands: gradle assemble or gradle build. Both of these commands create the dependency-management.jar file in the build/libs directory. When we run our example application by using the command java -jar dependency-management.jar, we see the following output:

> java -jar dependency-management.jar
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/log4j/Logger
        at net.petrikainulainen.gradle.HelloWorld.<clinit>(HelloWorld.java:10)
Caused by: java.lang.ClassNotFoundException: org.apache.log4j.Logger
        at java.net.URLClassLoader$1.run(URLClassLoader.java:372)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        ... 1 more

The reason for this exception is that the Log4j dependency isn’t found on the classpath when we run our application. The easiest way to solve this problem is to create a so-called “fat” jar file. This means that we package the required dependencies into the created jar file. After we have followed the instructions given in the Gradle cookbook, our build script looks as follows:

apply plugin: 'java'

repositories {
    mavenCentral()
}

dependencies {
    compile 'log4j:log4j:1.2.17'
    testCompile 'junit:junit:4.11'
}

jar {
    from {
        configurations.compile.collect { it.isDirectory() ? it : zipTree(it) }
    }
    manifest {
        attributes 'Main-Class': 'net.petrikainulainen.gradle.HelloWorld'
    }
}

We can now run the example application (after we have packaged it) and, as we can see, everything is working properly:

> java -jar dependency-management.jar
INFO - HelloWorld - Received message: Hello World!

That is all for today. Let’s summarize what we learned from this blog post.

Summary

This blog post has taught us four things:

- We learned how we can configure the repositories used by our build.
- We learned how we can declare the required dependencies and group these dependencies into configurations.
- We learned that Gradle creates an HTML test report when our tests are run.
- We learned how we can create a so-called “fat” jar file.

If you want to play around with the example application of this blog post, you can get it from GitHub. Reference: Getting Started with Gradle: Dependency Management from our JCG partner Petri Kainulainen at the Petri Kainulainen blog....

Making operations on volatile fields atomic

Overview

The expected behaviour for volatile fields is that they should behave in a multi-threaded application the same way they do in a single-threaded application. They are not forbidden to behave the same way, but they are not guaranteed to do so. The solution in Java 5.0+ is to use the AtomicXxxx classes; however, these are relatively inefficient in terms of memory (they add a header and padding), performance (they add a reference and give little control over their relative positions), and syntactically they are not as clear to use. IMHO, a simple solution is for volatile fields to act as they might be expected to: the behaviour the JVM must already support for the Atomic classes, which is not forbidden by the current JMM (Java Memory Model) but is not guaranteed either.

Why make fields volatile?

The benefit of volatile fields is that they are visible across threads, and some optimisations which avoid re-reading them are disabled, so you always check the current value again even if you didn’t change it.

e.g. without volatile

Thread 2: int a = 5;
Thread 1: a = 6;
(later)
Thread 2: System.out.println(a); // prints 5 or 6

With volatile

Thread 2: volatile int a = 5;
Thread 1: a = 6;
(later)
Thread 2: System.out.println(a); // prints 6 given enough time.

Why not use volatile all the time?

Volatile read and write access is substantially slower. When you write to a volatile field it stalls the entire CPU pipeline to ensure the data has been written to cache. Without this, there is a risk the next read of the value sees an old value, even in the same thread (see AtomicLong.lazySet(), which avoids stalling the pipeline). The penalty can be in the order of 10x slower, which you don’t want on every access.

What are the limitations of volatile?

A significant limitation is that operations on the field are not atomic, even when you might think they are. Even worse, usually there is no visible difference: it can appear to work for a long time, even years, and suddenly/randomly break due to an incidental change such as the version of Java used, or even where the object is loaded into memory, e.g. which programs you loaded before running the program.

e.g. updating a value

Thread 2: volatile int a = 5;
Thread 1: a += 1;
Thread 2: a += 2;
(later)
Thread 2: System.out.println(a); // prints 6, 7 or 8 even given enough time.

This is an issue because the read of a and the write of a are done separately, and you can get a race condition. 99%+ of the time it will behave as expected, but sometimes it won’t.

What can you do about it?

You need to use the AtomicXxxx classes. These wrap volatile fields with operations which behave as expected.

Thread 2: AtomicInteger a = new AtomicInteger(5);
Thread 1: a.incrementAndGet();
Thread 2: a.addAndGet(2);
(later)
Thread 2: System.out.println(a); // prints 8 given enough time.

The difference is easy to demonstrate, as the sketch below shows.
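As a minimal, runnable sketch (this program is not from the article; the class and method names are mine), the following demonstrates both halves of the argument: plain volatile increments lose updates under contention, while AtomicInteger does not. It also shows how an operation the Atomic classes do not offer directly, such as the atomic multiplication proposed in the table below, can be composed today with a compareAndSet loop:

import java.util.concurrent.atomic.AtomicInteger;

public class VolatileVsAtomicDemo {

    static volatile int volatileCounter = 0;
    static final AtomicInteger atomicCounter = new AtomicInteger(0);

    // Compose an atomic multiply from compareAndSet: retry until no other
    // thread raced between our read and our write.
    static int multiplyAndGet(AtomicInteger x, int factor) {
        for (;;) {
            int current = x.get();
            int next = current * factor;
            if (x.compareAndSet(current, next)) {
                return next;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final int threads = 4;
        final int increments = 100000;
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(new Runnable() {
                public void run() {
                    for (int j = 0; j < increments; j++) {
                        volatileCounter++;               // read-modify-write: NOT atomic
                        atomicCounter.incrementAndGet(); // atomic read-modify-write
                    }
                }
            });
            workers[i].start();
        }
        for (Thread worker : workers) {
            worker.join();
        }
        System.out.println("expected:       " + (threads * increments));
        System.out.println("volatile count: " + volatileCounter);     // usually less than expected
        System.out.println("atomic count:   " + atomicCounter.get()); // always exact
    }
}

On most runs the volatile counter ends up below the expected total because concurrent a++ operations overwrite each other, although any single run of the racy version can happen to produce the correct answer, which is exactly why such bugs hide for years.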
What do I propose?

The JVM has a means to behave as expected; the only surprising thing is that you need to use a special class to do what the JMM won’t guarantee for you. What I propose is that the JMM be changed to support the behaviour currently provided by the concurrency Atomic classes. In each case the single-threaded behaviour is unchanged. A multi-threaded program which does not have a race condition will behave the same. The difference is that a multi-threaded program no longer sees the race condition, because the underlying behaviour changes.

current method           suggested syntax                  notes
x.getAndIncrement()      x++ or x += 1
x.incrementAndGet()      ++x
x.getAndDecrement()      x-- or x -= 1
x.decrementAndGet()      --x
x.addAndGet(y)           (x += y)
x.getAndAdd(y)           ((x += y) - y)
x.compareAndSet(e, y)    (x == e ? x = y, true : false)    Needs the comma syntax used in other languages.

These operations could be supported for all the primitive types such as boolean, byte, short, int, long, float and double. Additional assignment operators could be supported, such as:

operation                suggested syntax    notes
Atomic multiplication    x *= 2;
Atomic subtraction       x -= y;
Atomic division          x /= y;
Atomic modulus           x %= y;
Atomic shift             x <<= y;
Atomic shift             x >>= z;
Atomic shift             x >>>= w;
Atomic and               x &= ~y;            clears bits
Atomic or                x |= z;             sets bits
Atomic xor               x ^= w;             flips bits

What is the risk?

This could break code which relies on these operations occasionally failing due to race conditions. It might not be possible to support more complex expressions in a thread-safe manner. This could lead to surprising bugs, as the code can look like it works, but it doesn’t. Nevertheless, it will be no worse than the current state.

JEP 193 – Enhanced Volatiles

There is a JEP 193 to add this functionality to Java. An example is:

class Usage {
    volatile int count;

    int incrementCount() {
        return count.volatile.incrementAndGet();
    }
}

IMHO there are a few limitations to this approach:

- The syntax is a fairly significant change. Changing the JMM might not require many changes to the Java syntax, and possibly no changes to the compiler.
- It is a less general solution. It can be useful to support operations like volume += quantity; where these are double types.
- It places more burden on the developer to understand why he/she should use this instead of x++.

I am not convinced that a more cumbersome syntax makes it clearer as to what is happening. Consider this example:

volatile int a, b;
a += b;

or

a.volatile.addAndGet(b.volatile);

or

AtomicInteger a, b;
a.addAndGet(b.get());

Which of these operations, as a line, is atomic? Answer: none of them. However, systems with Intel TSX can make these atomic, and if you are going to change the behaviour of any of these lines of code I would make it the a += b; rather than invent a new syntax which does the same thing most of the time, but where one is guaranteed and the other is not.

Conclusion

Much of the syntactic and performance overhead of using AtomicInteger and AtomicLong could be removed if the JMM guaranteed that the equivalent single-threaded operations behaved as expected for multi-threaded code. This feature could even be added to earlier versions of Java by using byte code instrumentation. Reference: Making operations on volatile fields atomic from our JCG partner Peter Lawrey at the Vanilla Java blog....

10 things you can do as a developer to make your app secure: #7 Logging and Intrusion Detection

This is part 7 of a series of posts on the OWASP Top 10 Proactive Development Controls: 10 things you can do as a developer to make your app secure.

Logging and Intrusion Detection

Logging is a straightforward part of any system. But logging is important for more than troubleshooting and debugging. It is also critical for activity auditing, intrusion detection (telling ops when the system is being hacked) and forensics (figuring out what happened after the system was hacked). You should take all of this into account in your logging strategy.

What to log, what to log… and what not to log

Make sure that you are always logging when, who, where and what: timestamps (you will need to take care of syncing timestamps across systems and devices, or be prepared to account for differences in time zones, accuracy and resolution), user id, source IP and other address information, and event details. To make correlation and analysis easier, follow a common logging approach throughout the application and across systems where possible. Use an extensible logging framework like SLF4J with Logback or Apache Log4j/Log4j2. Compliance regulations like PCI DSS may dictate what information you need to record, when and how, who gets access to the logs, and how long you need to keep this information. You may also need to prove that audit logs and other security logs are complete and have not been tampered with (using an HMAC for example), and ensure that these logs are always archived. For these reasons, it may be better to separate operations and debugging logs from transaction audit trails and security event logs.

There is data that you must log (a complete sequential history of specific events to meet compliance or legal requirements). Data that you must not log (PII or credit card data or opt-out/do-not-track data or intercepted communications). And other data you should not log (authentication information and other personal data).

And watch out for Log Forging attacks, where bad guys inject delimiters like extra CRLF sequences into text fields which they know will be logged in order to try to cover their tracks, or inject Javascript into data which will trigger an XSS attack when the log entry is displayed in a browser-based log viewer. Like other injection attacks, protect the system by encoding user data before writing it to the log. Review code for correct logging practices and test the logging code to make sure that it works. OWASP’s Logging Cheat Sheet provides more guidelines on how to do logging right, and what to watch out for. A simple encoding step is sketched below.
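As a minimal, hedged sketch (the helper name and the exact encoding rules here are mine, not taken from the post or from the cheat sheet), user-controlled values can be neutralized before they reach the log by stripping CRLF delimiters and escaping HTML-significant characters:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SafeLogging {

    private static final Logger LOG = LoggerFactory.getLogger(SafeLogging.class);

    // Neutralize log-forging payloads: replace CR/LF so an attacker cannot
    // fake extra log entries, and escape characters that could trigger XSS
    // in a browser-based log viewer.
    static String encodeForLog(String userInput) {
        if (userInput == null) {
            return "null";
        }
        String clean = userInput.replace('\n', '_').replace('\r', '_');
        clean = clean.replace("&", "&amp;")
                     .replace("<", "&lt;")
                     .replace(">", "&gt;")
                     .replace("\"", "&quot;");
        return clean;
    }

    public static void failedLogin(String username, String sourceIp) {
        // who, where and what - the "when" comes from the logging framework's timestamp
        LOG.warn("Failed login attempt for user '{}' from {}",
                encodeForLog(username), encodeForLog(sourceIp));
    }
}

A library such as OWASP's own encoders could be used instead of the hand-rolled escaping shown here; the point is only that encoding happens once, before the value is written.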
AppSensor – Intrusion Detection

Another OWASP project, the OWASP AppSensor, explains how to build on application logging to implement application-level intrusion detection. AppSensor outlines common detection points in an application, places where you should add checks to alert you that your system is being attacked. For example, if a server-side edit catches bad data that should already have been edited at the client, or catches a change to a non-editable field, then you either have some kind of coding bug or (more likely) somebody has bypassed client-side validation and is attacking your app. Don’t just log this case and return an error: throw an alert or take some kind of action to protect the system from being attacked, like disconnecting the session.

You could also check for known attack signatures. Nick Galbreath (formerly at Etsy and now at startup Signal Sciences) has done some innovative work on detecting SQL injection and HTML injection attacks by mining logs to find common fingerprints and feeding this back into filters to detect when attacks are in progress and potentially block them.

In the next 3 posts, we’ll step back from specific problems, and look at the larger issues of secure architecture, design and requirements. Reference: 10 things you can do as a developer to make your app secure: #7 Logging and Intrusion Detection from our JCG partner Jim Bird at the Building Real Software blog....