

If BigDecimal is the answer, it must have been a strange question

Overview

Many developers have concluded that BigDecimal is the only way to deal with money. Often they cite that, by replacing double with BigDecimal, they fixed a bug or ten. What I find unconvincing about this is that perhaps they could have fixed the bug in the handling of double and avoided the extra overhead of using BigDecimal. By comparison, when asked to improve the performance of a financial application, I know that at some point we will be removing BigDecimal if it is there. (It is usually not the biggest source of delays, but as we fix the system it moves up to become the worst offender.)

BigDecimal is not an improvement

BigDecimal has many problems, so take your pick, but an ugly syntax is perhaps the worst sin.

- BigDecimal syntax is unnatural.
- BigDecimal uses more memory.
- BigDecimal creates garbage.
- BigDecimal is much slower for most operations (there are exceptions).

The following JMH benchmark demonstrates two problems with BigDecimal: clarity and performance. The core code takes the average of two values. The double implementation looks like this (note the need to use rounding):

    mp[i] = round6((ap[i] + bp[i]) / 2);

The same operation using BigDecimal is not only longer, but there is lots of boilerplate code to navigate:

    mp2[i] = ap2[i].add(bp2[i])
            .divide(BigDecimal.valueOf(2), 6, BigDecimal.ROUND_HALF_UP);

Does this give you different results? double has 15 digits of accuracy, and these numbers have far fewer than 15 digits. If the prices had 17 digits BigDecimal would be necessary, but pity the poor human who has to comprehend such a price (i.e. real prices never get incredibly long).
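The round6 helper matters because plain double arithmetic can carry tiny representation errors; here is a small standalone demo of that helper (the input values are illustrative, chosen to show a classic representation error):

```java
public class Round6Demo {
    // Same rounding helper as in the benchmark: round half-up
    // to 6 decimal places using only double arithmetic.
    static double round6(double x) {
        final double factor = 1e6;
        return (long) (x * factor + 0.5) / factor;
    }

    public static void main(String[] args) {
        double x = 0.1 + 0.2;          // representation error creeps in
        System.out.println(x);         // prints 0.30000000000000004
        System.out.println(round6(x)); // prints 0.3
    }
}
```

With the rounding applied at a known precision, double stays both human-readable and exact enough for prices with few decimal digits.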
Performance

If you have to incur coding overhead, usually it is for performance reasons, but that doesn't make sense here.

    Benchmark                             Mode   Samples  Score       Score error  Units
    o.s.MyBenchmark.bigDecimalMidPrice    thrpt  20       23638.568   590.094      ops/s
    o.s.MyBenchmark.doubleMidPrice        thrpt  20       123208.083  2109.738     ops/s

Conclusion

If you don't know how to use rounding with double, or your project mandates BigDecimal, then use BigDecimal. But if you have a choice, don't just assume that BigDecimal is the right way to go.

The code

    import org.openjdk.jmh.annotations.Benchmark;
    import org.openjdk.jmh.annotations.Scope;
    import org.openjdk.jmh.annotations.State;
    import org.openjdk.jmh.runner.Runner;
    import org.openjdk.jmh.runner.RunnerException;
    import org.openjdk.jmh.runner.options.Options;
    import org.openjdk.jmh.runner.options.OptionsBuilder;

    import java.math.BigDecimal;
    import java.util.Random;

    @State(Scope.Thread)
    public class MyBenchmark {
        static final int SIZE = 1024;
        final double[] ap = new double[SIZE];
        final double[] bp = new double[SIZE];
        final double[] mp = new double[SIZE];

        final BigDecimal[] ap2 = new BigDecimal[SIZE];
        final BigDecimal[] bp2 = new BigDecimal[SIZE];
        final BigDecimal[] mp2 = new BigDecimal[SIZE];

        public MyBenchmark() {
            Random rand = new Random(1);
            for (int i = 0; i < SIZE; i++) {
                int x = rand.nextInt(200000), y = rand.nextInt(10000);
                ap2[i] = BigDecimal.valueOf(ap[i] = x / 1e5);
                bp2[i] = BigDecimal.valueOf(bp[i] = (x + y) / 1e5);
            }
            doubleMidPrice();
            bigDecimalMidPrice();
            for (int i = 0; i < SIZE; i++) {
                if (mp[i] != mp2[i].doubleValue())
                    throw new AssertionError(mp[i] + " " + mp2[i]);
            }
        }

        @Benchmark
        public void doubleMidPrice() {
            for (int i = 0; i < SIZE; i++)
                mp[i] = round6((ap[i] + bp[i]) / 2);
        }

        static double round6(double x) {
            final double factor = 1e6;
            return (long) (x * factor + 0.5) / factor;
        }

        @Benchmark
        public void bigDecimalMidPrice() {
            for (int i = 0; i < SIZE; i++)
                mp2[i] = ap2[i].add(bp2[i])
                        .divide(BigDecimal.valueOf(2), 6, BigDecimal.ROUND_HALF_UP);
        }

        public static void main(String[] args) throws RunnerException {
            Options opt = new OptionsBuilder()
                    .include(".*" + MyBenchmark.class.getSimpleName() + ".*")
                    .forks(1)
                    .build();
            new Runner(opt).run();
        }
    }

Reference: If BigDecimal is the answer, it must have been a strange question from our JCG partner Peter Lawrey at the Vanilla Java blog....

Android RecyclerView

RecyclerView is one of the two UI widgets introduced by the support library in Android L. In this post I will describe how we can use it and what the difference is between this widget and the “classic” ListView. This new widget is more flexible than the ListView, but it introduces some complexities. As we are used to by now, RecyclerView introduces a new adapter that must be used to represent the underlying data in the widget, called RecyclerView.Adapter. To use this widget you have to add the latest v7 support library.

Introduction

We already know that in the ListView, to increase performance, we have to use the ViewHolder pattern. This is simply a Java class that holds references to the widgets in the row layout of the ListView (for example TextView, ImageView and so on). Using this pattern we avoid calling the findViewById method several times to get UI widget references, making ListView scrolling smoother. Even though this pattern was suggested as a best practice, we could implement our adapter without using it. RecyclerView enforces this pattern, making it the core of this UI widget, and we have to use it in our adapter.

The Adapter: RecyclerView.Adapter

If we want to show information in a ListView or in the new RecyclerView, we have to use an Adapter. This component stands behind the UI widget and determines how the rows have to be rendered and what information to show. In the RecyclerView, too, we have to use an adapter:

    public class MyRecyclerAdapter extends RecyclerView.Adapter<MyRecyclerAdapter.MyHolder> {
        ....
    }

where MyHolder is our implementation of the ViewHolder pattern.
Suppose we have a simple row layout in our RecyclerView:

    <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
        android:orientation="vertical"
        android:layout_width="match_parent"
        android:layout_height="match_parent">

        <TextView
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:id="@+id/txt1"/>

        <TextView
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:id="@+id/txt2"/>

    </LinearLayout>

so our ViewHolder pattern implementation is:

    public static class MyHolder extends RecyclerView.ViewHolder {
        protected TextView txt1;
        protected TextView txt2;

        private MyHolder(View v) {
            super(v);
            this.txt1 = (TextView) v.findViewById(R.id.txt1);
            this.txt2 = (TextView) v.findViewById(R.id.txt2);
        }
    }

As you can notice, the lookup process (findViewById) is done in the view holder instead of in the getView method. Now we have to implement some important methods in our adapter to make it work properly. There are two important methods we have to override:

- onCreateViewHolder(ViewGroup viewGroup, int i)
- onBindViewHolder(MyHolder myHolder, int i)

onCreateViewHolder is called whenever a new instance of our view holder class must be instantiated, so this method becomes:

    @Override
    public MyHolder onCreateViewHolder(ViewGroup viewGroup, int i) {
        numCreated++;
        Log.d("RV", "OncreateViewHolder [" + numCreated + "]");
        View v = LayoutInflater.from(viewGroup.getContext()).inflate(R.layout.row_layout, null);
        MyHolder mh = new MyHolder(v);
        return mh;
    }

As you can notice, the method returns an instance of our view holder implementation. The second method is called when the OS binds the view with the data, so we set the UI widget content to the data values:

    @Override
    public void onBindViewHolder(MyHolder myHolder, int i) {
        Log.d("RV", "OnBindViewHolder");
        Item item = itemList.get(i);
        myHolder.txt1.setText(item.name);
        myHolder.txt2.setText(item.descr);
    }

Notice that we don’t perform a lookup; we simply use the UI widget references stored in our view holder.

RecyclerView

Now that we have our adapter, we can create our RecyclerView:

    RecyclerView rv = (RecyclerView) findViewById(R.id.my_recycler_view);
    rv.setLayoutManager(new LinearLayoutManager(this));
    MyRecyclerAdapter adapter = new MyRecyclerAdapter(createList());
    rv.setAdapter(adapter);

LinearLayoutManager is the “main” layout manager used to lay out items inside the RecyclerView. We can extend it or implement our own layout manager.

Final considerations

Running the example, the most interesting aspect is how many times onCreateViewHolder is called compared to the number of items shown. If you look at the log, you will find that the number of objects created is about 1/3 of the total number displayed.

Reference: Android RecyclerView from our JCG partner Francesco Azzola at the Surviving w/ Android blog....
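The adapter snippets above reference an Item class, an itemList and a createList() helper that the post does not show; a minimal sketch of what they might look like (the name and descr fields match the usage in onBindViewHolder, everything else is an assumption):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical data holder backing the adapter; only name and descr
// are implied by the onBindViewHolder snippet above.
public class Item {
    public final String name;
    public final String descr;

    public Item(String name, String descr) {
        this.name = name;
        this.descr = descr;
    }

    // Hypothetical factory producing the list passed to MyRecyclerAdapter.
    public static List<Item> createList() {
        List<Item> items = new ArrayList<>();
        for (int i = 0; i < 30; i++) {
            items.add(new Item("Item " + i, "Description " + i));
        }
        return items;
    }
}
```

The adapter would store this list in its itemList field and return itemList.size() from getItemCount(), the third method RecyclerView.Adapter requires.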

Gearing up for JavaOne 2014 !

Hold that thought! Yeah…I wish I was presenting at JavaOne 2014 – but I am only worthy of doing that in my dreams right now! But nothing is stopping me from following JavaOne and tracking sessions/talks about my favorite topics. I am hoping Oracle will make the 2014 talks available online for mortals like us, just like it did for the 2013 edition. I have already made my list of talks (see below) which I am going to pounce upon (once they are available)…Have you?

- Lessons Learned from Real-World Deployments of Java EE 7 – Arun Gupta dissects live Java EE 7 applications and shares his thoughts and learnings.
- 50 EJB 3 Best Practices in 50 Minutes – need I say more…?
- eBay, Connecting Buyers and Sellers Globally via JavaServer Faces
- Applied Domain-Driven Design Blueprints for Java EE – Have you been following and learning from the Cargo Tracker Java EE application already? You are going to love this talk by Reza and Vijay.
- JPA Gotchas and Best Practices: Lessons from Overstock.com – nothing like listening to a ‘real’ world example.
- RESTful Microservices – Never heard of this one! It’s about leveraging Jersey 2.0 to achieve what you haven’t until now!
- Java API for JSON Binding: Introduction and Update – potential Java EE 8 candidate and JAXB’s better half!
- Is Your Code Parallel-Ready? – Love hacking on the Java 8 Stream API? Yes… this one is for you!
- Java EE Game Changers – Tomitribe’s David Blevins will talk about the journey from J2EE to Java EE 7, provide insights into Java EE 8 candidates and much more!
- Programming with Streams in Java 8 – I don’t think I need to comment.
- Unorthodox Enterprise Practices – thoughts on Java EE design choices with live coding… by one of my favorites…Adam Bien.
- Inside the CERT Oracle Secure Coding Standard for Java
- RESTing on Your Laurels Will Get You Pwned – when the rest of the world is going ga-ga about REST, this talk will attempt to look at the dark side and empower us with skills to mitigate REST-related vulnerabilities.
- Java EE 7 Batch Processing in the Real World – another live coding session using Java EE 7 standards!
- Scaling a Mobile Startup Through the Cloud: A True Story – the folks at CodenameOne reveal their secrets.
- The Anatomy of a Secure Web Application Using Java – developing and deploying a secure Java EE web application in the cloud… now who does not want to do that?!
- Dirty Data: Why XML and JSON Don’t Cut It and What You Can Do About It – I haven’t thought beyond XML and JSON, but apparently some guys have, and you must listen to them!
- API Design Checklist – no comments needed.
- Java EE 7 and Spring 4: A Shootout – I have been waiting for this!
- The Path to CDI 2.0 – Antoine, the CDI guy at Red Hat, talks about…of course…CDI! This includes discussion of 1.1/1.2 along with what’s coming up in CDI 2.0.
- Five Keys for Securing Java Web Apps – looks like I am repeating myself here… but too much security can never be bad… right?
- Java EE 7 Recipes – yeah! Cooked up by Josh Juneau.

There is still time…pick and gear up for your favorite tracks. More details on the official JavaOne website. Until then…Happy Learning!

Reference: Gearing up for JavaOne 2014 ! from our JCG partner Abhishek Gupta at the Object Oriented.. blog....

You Probably Don’t Need a Message Queue

I’m a minimalist, and I don’t like to complicate software too early and unnecessarily. And adding components to a software system is one of the things that adds a significant amount of complexity. So let’s talk about message queues. Message queues are systems that let you have a fault-tolerant, distributed, decoupled, etc., etc. architecture. That sounds good on paper. Message queues may fit several use-cases in your application. You can check this nice article about the benefits of MQs for what some use-cases might be. But don’t be hasty in picking an MQ because “decoupling is good”, for example. Let’s use an example – you want your email sending to be decoupled from your order processing. So you post a message to a message queue, then the email processing system picks it up and sends the emails. How would you do that in a monolithic, single-classpath application? Just make your order processing service depend on an email service, and call sendEmail(..) rather than sendToMQ(emailMessage). If you use an MQ, you define a message format to be recognized by the two systems; if you don’t use an MQ, you define a method signature. What is the practical difference? Not much, if any. But then you probably want to be able to add another consumer that does an additional thing with a given message? That might indeed happen, it’s just not common in the regular project out there. And even if it is needed, it’s not worth it compared to adding just another method call. Coupled – yes. But not inconveniently coupled. What if you want to handle spikes? Message queues give you the ability to put requests in a persistent queue and process all of them. That is a very useful feature, but again it’s limited by several factors – are your requests processed in the UI background, or do they require an immediate response?
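The direct-call alternative from the email example above can be as simple as an interface dependency; a minimal sketch (all names here are illustrative, not from a real project):

```java
// Hypothetical sketch: decoupling order processing from email sending
// with a plain interface instead of a message queue.
interface EmailService {
    void sendEmail(String to, String subject, String body);
}

class OrderProcessor {
    private final EmailService emailService;

    OrderProcessor(EmailService emailService) {
        this.emailService = emailService;
    }

    void processOrder(String customerEmail) {
        // ... order handling logic would go here ...

        // A direct call replaces posting a message to an MQ; the
        // "message format" is simply the method signature.
        emailService.sendEmail(customerEmail, "Order confirmed", "Thank you!");
    }
}
```

Swapping in another EmailService implementation, or adding a second call next to it, covers most of what a second queue consumer would have done.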
The servlet container thread pool can be used as a sort-of queue – responses will be served eventually, but users will have to wait (and if the thread acquisition timeout is too small, requests will be dropped). Or you can use an in-memory queue for the heavier requests (those that are handled in the UI background). Note also that, by default, your MQ might not be highly available. E.g. if an MQ node dies, you lose messages. So that’s not a benefit over an in-memory queue in your application node. Which leads us to asynchronous processing – this is indeed a useful feature. You don’t want to do some heavy computation while the user is waiting. But you can use an in-memory queue, or simply start a new thread (à la Spring’s @Async annotation). Here comes another aspect – does it matter if a message is lost? If your application node, processing the request, dies, can you recover? You’ll be surprised how often it doesn’t actually matter, and you can function properly without guaranteeing all messages are processed. So just handling heavier invocations asynchronously might work well. Even if you can’t afford to lose messages, for the use-case where a message is put into a queue in order for another component to process it, there’s still a simple solution – the database. You put a row with a processed=false flag in the database. A scheduled job runs, picks all unprocessed rows and processes them asynchronously. Then, when processing is finished, it sets the flag to true. I’ve used this approach a number of times, including in large production systems, and it works pretty well. And you can still scale your application nodes endlessly, as long as you don’t have any persistent state in them, regardless of whether you are using an MQ or not. (Temporary in-memory processing queues are not persistent state.) Why am I trying to give alternatives to common usages of message queues? Because if chosen for the wrong reason, an MQ can be a burden. They are not as easy to use as it sounds.
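The processed-flag pattern above can be sketched as follows; in production the rows would live in a database table with a processed column, but an in-memory list stands in here to keep the sketch self-contained (all names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the "processed = false" pattern described above.
public class TaskPoller {
    public static class Task {
        final String payload;
        boolean processed;           // stands in for the DB column
        Task(String payload) { this.payload = payload; }
    }

    private final List<Task> table;  // stands in for the DB table
    private final List<String> handled = new ArrayList<>();

    public TaskPoller(List<Task> table) { this.table = table; }

    // Called by a scheduled job: pick all unprocessed rows, process them,
    // then set the flag so they are not picked up again.
    public void runOnce() {
        for (Task t : table) {
            if (!t.processed) {
                handled.add(t.payload);   // real processing goes here
                t.processed = true;
            }
        }
    }

    public List<String> handled() { return handled; }
}
```

Running the job repeatedly processes each row exactly once, which is the delivery guarantee this pattern buys you without an MQ.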
First, there’s a learning curve. Generally, the more separate integrated components you have, the more problems may arise. Then there’s setup and configuration. E.g. when the MQ has to run in a cluster, in multiple data centers (for HA), that becomes complex. High availability itself is not trivial – it’s normally not turned on by default. And how does your application node connect to the MQ? Via a refreshing connection pool, using a short-lived DNS record, via a load balancer? Then your queues have tons of configuration options – what’s their size, what’s their behaviour (should consumers explicitly acknowledge receipt, should they explicitly acknowledge failure to process messages, should multiple consumers get the same message or not, should messages have a TTL, etc.). Then there’s the network and message transfer overhead – especially given that people often choose JSON or XML for transferring messages. If you overuse your MQ, it adds latency to your system. And last, but not least, it’s harder to track the program flow when analyzing problems. You can’t just see the “call hierarchy” in your IDE, because once you send a message to the MQ, you need to go and find where it is handled. And that’s not always as trivial as it sounds. As you can see, an MQ adds a lot of complexity and things to take care of. Certainly MQs are very useful in some contexts. I’ve used them in projects where they were really a good fit – e.g. where we couldn’t afford to lose messages and we needed fast processing (so polling the database wasn’t an option). I’ve also seen them used in non-trivial scenarios, where we were using them to consume messages on a single application node, regardless of which node posts the message (pub/sub). You can also check this stackoverflow question. And maybe you really need to have multiple languages communicate (but don’t want an ESB), or maybe your flow is getting so complex that adding a new method call instead of a new message consumer is overkill.
So all I’m trying to say here is the trite truism “use the right tool for the job”. Don’t pick a message queue if you haven’t identified a real use for it that can’t be easily handled in a different, easier-to-set-up-and-maintain manner. And don’t start with an MQ “just in case” – add it when you realize the actual need for it. Because probably, in the regular project out there, a message queue is not needed.

Reference: You Probably Don’t Need a Message Queue from our JCG partner Bozhidar Bozhanov at the Bozho’s tech blog blog....

Step By Step Path to Becoming a Great Software Developer

I get quite a few emails that basically say “how do I become a good / great software developer?” These kinds of emails generally tick me off, because I feel like when you ask this kind of question, you are looking for some magical potion you can take that will suddenly make you into a super developer. I suspect that very few people who email me asking this question really want to know how to become a great software developer, but are instead looking for a quick fix or an easy answer. On the other hand, I think there are some genuinely sincere developers that just don’t even know how to formulate the right questions they need to ask to get to their desired future. I think those developers–especially the ones starting out–are looking for a step-by-step guide to becoming a great developer. I thought I would make an attempt, from my experience and the best of my knowledge, to offer up that step-by-step guide. Now, of course, I realize that there is no magical formula and that there are multiple paths to success, but I think what follows is a reasonable outline of steps someone starting out could take to reach a pretty high level of proficiency and be generally regarded as a good–perhaps even great–developer.

Step 1: Pick one language, learn the basics

Before we can run, we have to learn to walk. You walk by learning how to program in a single programming language. You don’t learn to walk by trying to learn 50 million things all at once and spreading yourself way too thin. Too many beginning programmers try to jump into everything all at once and don’t have the patience to learn a single programming language before moving forward. They think that they have to know all the hot new technologies in order to get a programming job. While it is true that you need to know more than just the basics of a single programming language, you have to start here, so you might as well focus.
Pick a single programming language that you think you would be likely to base your career around. The programming language itself doesn’t matter all that much, since you should be thinking for the long term here. What I mean is you shouldn’t try and learn an “easy” programming language to start. Just learn whatever language you are interested in and could see yourself programming in for the next few years. You want to pick something that will have some lasting value. Once you’ve picked the programming language you are going to try and learn, try and find some books or tutorials that isolate that programming language. What I mean by this is that you don’t want to find learning materials that will teach you too much all at once. You want to find beginner materials that focus on just the language, not a full technology stack. As you read through the material or go through the tutorial you have picked out, make sure you actually write code. Do exercises if you can. Try out what you learned. Try to put things together and use every concept you learn about. Yes, this is a pain. Yes, it is easier to read a book cover-to-cover, but if you really want to learn you need to do. When you are writing code, try to make sure you understand what every line of code you write does. The same goes for any code you read. If you are exposed to code, slow down and make sure you understand it. Whatever you don’t understand, look up. Take the time to do this and you will not feel lost and confused all the time. Finally, expect to go through a book or tutorial three times before it clicks. You will not get “programming” on the first try–no one ever does. You need repeated exposure before you start to finally get it and can understand what is going on. Until then you will feel pretty lost, that is ok, it is part of the process. Just accept it and forge ahead. 
Step 2: Build something small

Now that you have a basic understanding of a single programming language, it’s time to put that understanding to work and find out where your gaps are. The best way to do this is to try to build something. Don’t get too ambitious at this point–but also don’t be too timid. Pick an idea for an application that is simple enough that you can do it with some effort, but nothing that will take months to complete. Try to confine it to just the programming language as much as possible. Don’t try to do something full stack (meaning, using all the technologies from user interfaces all the way to databases)–although you’ll probably need to utilize some kind of existing framework or APIs. For your first real project you might want to consider copying something simple that already exists. Look for a simple application, like a to-do list app, and outright try to copy it. Don’t let your design skills stand in the way of learning to code. I’d recommend creating a mobile application of some kind, since most mobile applications are small and pretty simple. Plus, learning mobile development skills is very useful, as more and more companies are starting to need mobile applications. Today, you can build a mobile application in just about any language. There are many solutions that let you build an app for the different mobile OSes using a wide variety of programming languages. You could also build a small web application, but just try not to get too deep into a complex web development stack. I generally recommend starting with a mobile app, because web development has a higher cost of entry. To develop a web application you’ll need to know at least some HTML, probably some back-end framework and JavaScript. Regardless of what you choose to build, you are probably going to have to learn a little bit about some framework–this is good, just don’t get too bogged down in the details.
For example, you can write a pretty simple Android application without really having to know a lot about all of the Android APIs and how Android works, just by following some simple tutorials. Just don’t waste too much time trying to learn everything about a framework. Learn what you need to know to get your project done. You can learn the details later. Oh, and this is supposed to be difficult. That is how you learn. You struggle to figure out how to do something, then you find the answer. Don’t skip this step. You’ll never reach a point as a software developer where you don’t have to learn things on the spot and figure things out as you go along. This is good training for your future.

Step 3: Learn a framework

Now it’s time to actually focus on a framework. By now you should have a decent grasp of at least one programming language and have some experience working with a framework for mobile or web applications. Pick a single framework to learn that will allow you to be productive in some environment. What kind of framework you choose to learn will be based on what kind of developer you want to become. If you want to be a web developer, you’ll want to learn a web development framework for whatever programming language you are programming in. If you want to become a mobile developer, you’ll need to learn a mobile OS and the framework that goes with it. Try to go deep with your knowledge of the framework. This will take time, but invest the time to learn whatever framework you are using well. Don’t try to learn multiple frameworks right now–it will only split your focus. Think about learning the skills you need for a very specific job that will use that framework and the programming language you are learning. You can always expand your skills later.

Step 4: Learn a database technology

Most software developers will need to know some database technology, as most serious applications have a back-end database.
So, make sure you do not neglect investing in this area. You will probably see the biggest benefit if you learn SQL–even if you plan on working with a NoSQL database like MongoDB or Raven, learning SQL will give you a better base to work from. There are many more jobs out there that require knowledge of SQL than NoSQL. Don’t worry so much about the flavor of SQL. The different SQL technologies are similar enough that you shouldn’t have a problem switching between them if you know the basics of one SQL technology. Just make sure you learn the basics about tables, queries, and other common database operations. I’d recommend getting a good book on the SQL technology of your choice and creating a few small sample projects, so you can practice what you are learning–always practice what you are learning. You have sufficient knowledge of SQL when you can:

- Create tables
- Perform basic queries
- Join tables together to get data
- Understand the basics of how indexes work
- Insert, update and delete data

In addition, you will want to learn some kind of object-relational mapping (ORM) technology. Which one you learn will depend on what technology stack you are working with. Look for ORM technologies that fit the framework you have learned. There might be a few options, so your best bet is to try to pick the most popular one.

Step 5: Get a job supporting an existing system

Ok, now you have enough skills and knowledge to get a basic job as a software developer. If you could show me that you understand the basics of a programming language, can work with a framework, understand databases and have built your own application, I would certainly want to hire you–as would many employers. The key here is not to aim too high and to be very specific. Don’t try to get your dream job right now–you aren’t qualified. Instead, try to get a job maintaining an existing code base that is built using the programming language and framework you have been learning.
You might not be able to find an exact match, but the more specific you can be the better. Try to apply for jobs that are exactly matched to the technologies you have been learning. Even without much experience, if you match the skill-set exactly and you are willing to be a maintenance programmer, you should be able to find a job. Yes, this kind of job might be a bit boring. It’s not nearly as exciting as creating something new, but the purpose of this job is not to have fun or to make money, it is to learn and gain experience. Working on an existing application, with a team of developers, will help you to expand your skills and see how large software systems are structured. You might be fixing bugs and adding small features, but you’ll be learning and putting your skills into action. Pour your heart into this job. Learn everything you can. Do the best work possible. Don’t think about money, raises and playing political games–all that comes later–for now, just focus on getting as much meaningful productive work done as possible and expanding your skills.

Step 6: Learn structural best practices

Now it’s time to start becoming better at writing code. Don’t worry too much about design at this point. You need to learn how to write good clean code that is easy to understand and maintain. In order to do this, you’ll need to read a lot and see many examples of good code. Beef up your library with the following books:

- Code Complete
- Clean Code
- Refactoring
- Working Effectively With Legacy Code
- Programming Pearls (do the exercises)

And language-specific structural books like:

- JavaScript: The Good Parts
- Effective Java
- Effective C#

At this point you really want to focus your learning on the structural process of writing good code and working with existing systems. You should strive to be able to easily implement an algorithm in your programming language of choice and to do it in a way that is easy to read and understand.
Step 7: Learn a second language

At this point you will likely grow the most by learning a second programming language really well. You will, no doubt, have been exposed to more than one programming language by now, but now you will want to focus on a new language–ideally one that is significantly different than the one you know. This might seem like an odd thing to do, but let me explain why this is so important. When you know only one programming language very well, it is difficult to understand what concepts in software development are unique to your programming language and which ones transcend a particular language or technology. If you spend time in a new language and programming environment, you’ll begin to see things in a new way. You’ll start to learn practicality rather than convention. As a new programmer, you are very likely to do things in a particular way without knowing why you are doing them that way. Once you have a second language and technology stack under your belt, you’ll start to see more of the why. Trust me, you’ll grow if you do this. Especially if you pick a language you hate. Make sure you build something with this new language. It doesn’t have to be anything large, but something of enough complexity to force you to scratch your head and perhaps bang it against the wall–gently.

Step 8: Build something substantial

Alright, now comes the true test to prove your software development abilities. Can you actually build something substantial on your own? If you are going to move on and have the confidence to get a job building, and perhaps even designing, something for an employer, you need to know you can do it. There is no better way to know it than to do it. Pick a project that will use the full stack of your skills. Make sure you incorporate a database, a framework and everything else you need to build a complete application. This project should be something that will take you more than a week and require some serious thinking and design.
Try to make it something you can charge money for, so that you take it seriously and have some incentive to keep working on it. Just make sure you don’t drag it out. You still don’t want to get too ambitious here. Pick a project that will challenge you, but not one that will never be completed. This is a major turning point in your career. If you have the follow-through to finish this project, you’ll go far; if you don’t… well, I can’t make any promises.

Step 9: Get a job creating a new system

Ok, now it’s time to go job hunting again. By this point, you should have pretty much maxed out the benefit you can get from your current job–especially if it still involves only doing maintenance. It’s time to look for a job that will challenge you–but not too much. You still have a lot to learn, so you don’t want to get in too far over your head. Ideally, you want to find a job where you’ll get the opportunity to work on a team building something new. You might not be the architect of the application, but being involved in the creation of an application will help you expand your skills and challenge you in different ways than just maintaining an existing code base. You should already have some confidence with creating a new system, since you’ll have just finished creating a substantial system yourself, so you can walk into interviews without being too nervous and with the belief that you can do the job. This confidence will make it much more likely that you’ll get whatever job you apply for. Make sure you make your job search focused again. Highlight a specific set of skills that you have acquired. Don’t try to impress everyone with a long list of irrelevant skills. Focus on the most important skills and look for jobs that exactly match them–or at least match them as closely as possible.

Step 10: Learn design best practices

Now it’s time to go from junior developer to senior developer. Junior developers maintain systems, senior developers build and design them.
(This is a generalization, obviously. Some senior developers maintain systems.) You should be ready to build systems by now, but now you need to learn how to design them. You should focus your studies on design best practices and some advanced topics like:

- Design patterns
- Inversion of Control (IoC)
- Test Driven Development (TDD)
- Behavior Driven Development (BDD)
- Software development methodologies like Agile, Scrum, etc.
- Message buses and integration patterns

This list could go on for quite some time–you'll never be done learning and improving your skills in these areas. Just make sure you start with the most important things first–which will be highly dependent on what interests you and where you would like to take your career. Your goal here is to be able to not just build a system that someone else has designed, but to form your own opinions about how software should be designed and what kinds of architectures make sense for what kinds of problems. Step 11: Keep going At this point you've made it–well, you'll never really "make it," but you should be a pretty good software developer–maybe even "great." Just don't let it go to your head, you'll always have something to learn. How long did it take you to get here? I have no idea. It was probably at least a few years, but it might have been 10 or more–it just depends on how dedicated you were and what opportunities presented themselves to you. A good shortcut is to try to always surround yourself with developers better than you are. What to do along the way There are a few things that you should be doing along the way as you are following this 10 step process. It would be difficult to list them in every step, so I'll list them all briefly here: Teach - The whole time you are learning things, you should be teaching them as well. It doesn't matter if you are a beginner or expert, you have something valuable to teach and besides, teaching is one of the best ways to learn. 
Document your process and journey, and help others along the way. Market yourself - I think this is so important that I built an entire course around the idea. Learn how to market yourself and continually do it throughout your career. Figure out how to create a personal brand for yourself and build a reputation for yourself in the industry and you'll never want for a job. You'll get to decide your own future if you learn to market yourself. It takes some work, but it is well worth it. You are reading this post as a result of my effort to do just that. Read - Never stop learning. Never stop reading. Always be working your way through a book. Always be improving yourself. Your learning journey is never done. You can't ever know it all. If you constantly learn during your career, you'll constantly surpass your peers. Do - At every step along the way, don't just learn, but do. Put everything you are learning into action. Set aside time to practice your skills and to write code and build things. You can read all the books on golfing that you want, but you'll never be Tiger Woods if you don't swing a golf club.Reference: Step By Step Path to Becoming a Great Software Developer from our JCG partner John Sonmez at the Making the Complex Simple blog....

Getting Started with Gradle: Dependency Management

It is challenging, if not impossible, to create real-life applications which don't have any external dependencies. That is why dependency management is a vital part of every software project. This blog post describes how we can manage the dependencies of our projects with Gradle. We will learn to configure the used repositories and the required dependencies. We will also apply this theory to practice by implementing a simple example application. Let's get started. Additional Reading:

- Getting Started with Gradle: Introduction helps you to install Gradle, describes the basic concepts of a Gradle build, and describes how you can add functionality to your build by using Gradle plugins.
- Getting Started with Gradle: Our First Java Project describes how you can create a Java project by using Gradle and package your application into an executable jar file.

Introduction to Repository Management Repositories are essentially dependency containers, and each project can use zero or more repositories. Gradle supports the following repository formats:

- Ivy repositories
- Maven repositories
- Flat directory repositories

Let's find out how we can configure each repository type in our build. Adding Ivy Repositories to Our Build We can add an Ivy repository to our build by using its url address or its location in the local file system. 
If we want to add an Ivy repository by using its url address, we have to add the following code snippet to the build.gradle file:

repositories {
    ivy {
        url "http://ivy.petrikainulainen.net/repo"
    }
}

If we want to add an Ivy repository by using its location in the file system, we have to add the following code snippet to the build.gradle file:

repositories {
    ivy {
        url "../ivy-repo"
    }
}

If you want to get more information about configuring Ivy repositories, you should check out the following resources:

- Section 50.6.6 Ivy Repositories of the Gradle User Guide
- The API documentation of the IvyArtifactRepository

Let's move on and find out how we can add Maven repositories to our build. Adding Maven Repositories to Our Build We can add a Maven repository to our build by using its url address or its location in the local file system. If we want to add a Maven repository by using its url, we have to add the following code snippet to the build.gradle file:

repositories {
    maven {
        url "http://maven.petrikainulainen.net/repo"
    }
}

If we want to add a Maven repository by using its location in the file system, we have to add the following code snippet to the build.gradle file:

repositories {
    maven {
        url "../maven-repo"
    }
}

Gradle has three "aliases" which we can use when we are adding Maven repositories to our build. These aliases are:

- The mavenCentral() alias means that dependencies are fetched from the central Maven 2 repository.
- The jcenter() alias means that dependencies are fetched from Bintray's JCenter Maven repository.
- The mavenLocal() alias means that dependencies are fetched from the local Maven repository.

If we want to add the central Maven 2 repository to our build, we must add the following snippet to our build.gradle file:

repositories {
    mavenCentral()
}

If you want to get more information about configuring Maven repositories, you should check out section 50.6.4 Maven Repositories of the Gradle User Guide. 
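Several repository declarations can also be combined in one repositories block, and Gradle searches them in the order they are declared. The following sketch is my own example (the in-house repository URL is hypothetical) of falling back from Maven Central to a custom repository and finally to the local Maven repository:

```groovy
repositories {
    // searched first
    mavenCentral()
    // hypothetical in-house repository, consulted if an artifact is not found above
    maven {
        url "http://repo.example.com/maven"
    }
    // searched last
    mavenLocal()
}
```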
Let's move on and find out how we can add flat directory repositories to our build. Adding Flat Directory Repositories to Our Build If we want to use flat directory repositories, we have to add the following code snippet to our build.gradle file:

repositories {
    flatDir {
        dirs 'lib'
    }
}

This means that dependencies are searched from the lib directory. Also, if we want to, we can use multiple directories by adding the following snippet to the build.gradle file:

repositories {
    flatDir {
        dirs 'libA', 'libB'
    }
}

If you want to get more information about flat directory repositories, you should check out the following resources:

- Section 50.6.5 Flat directory repository of the Gradle User Guide
- The Flat Dir Repository post on the gradle-user mailing list

Let's move on and find out how we can manage the dependencies of our project with Gradle. Introduction to Dependency Management After we have configured the repositories of our project, we can declare its dependencies. If we want to declare a new dependency, we have to follow these steps:

1. Specify the configuration of the dependency.
2. Declare the required dependency.

Let's take a closer look at these steps. Grouping Dependencies into Configurations In Gradle, dependencies are grouped into named sets of dependencies. These groups are called configurations, and we use them to declare the external dependencies of our project. The Java plugin specifies several dependency configurations, which are described in the following:

- The dependencies added to the compile configuration are required when the source code of our project is compiled.
- The runtime configuration contains the dependencies which are required at runtime. This configuration contains the dependencies added to the compile configuration.
- The testCompile configuration contains the dependencies which are required to compile the tests of our project. This configuration contains the compiled classes of our project and the dependencies added to the compile configuration.
- The testRuntime configuration contains the dependencies which are required when our tests are run. This configuration contains the dependencies added to the compile, runtime, and testCompile configurations.
- The archives configuration contains the artifacts (e.g. jar files) produced by our project.
- The default configuration contains the dependencies which are required at runtime.

Let's move on and find out how we can declare the dependencies of our Gradle project. Declaring the Dependencies of a Project The most common dependencies are called external dependencies, which are found in an external repository. An external dependency is identified by using the following attributes:

- The group attribute identifies the group of the dependency (Maven users know this attribute as groupId).
- The name attribute identifies the name of the dependency (Maven users know this attribute as artifactId).
- The version attribute specifies the version of the external dependency (Maven users know this attribute as version).

These attributes are required when you use Maven repositories. If you use other repositories, some attributes might be optional. For example, if you use a flat directory repository, you might have to specify only name and version. Let's assume that we have to declare the following dependency:

- The group of the dependency is 'foo'.
- The name of the dependency is 'foo'.
- The version of the dependency is 0.1.
- The dependency is required when our project is compiled.

We can declare this dependency by adding the following code snippet to the build.gradle file:

dependencies {
    compile group: 'foo', name: 'foo', version: '0.1'
}

We can also declare the dependencies of our project by using a shortcut form which follows this syntax: [group]:[name]:[version]. If we want to use the shortcut form, we have to add the following code snippet to the build.gradle file:

dependencies {
    compile 'foo:foo:0.1'
}

We can also add multiple dependencies to the same configuration. 
If we want to use the "normal" syntax when we declare our dependencies, we have to add the following code snippet to the build.gradle file:

dependencies {
    compile (
        [group: 'foo', name: 'foo', version: '0.1'],
        [group: 'bar', name: 'bar', version: '0.1']
    )
}

On the other hand, if we want to use the shortcut form, the relevant part of the build.gradle file looks as follows:

dependencies {
    compile 'foo:foo:0.1', 'bar:bar:0.1'
}

It is naturally possible to declare dependencies which belong to different configurations. For example, if we want to declare dependencies which belong to the compile and testCompile configurations, we have to add the following code snippet to the build.gradle file:

dependencies {
    compile group: 'foo', name: 'foo', version: '0.1'
    testCompile group: 'test', name: 'test', version: '0.1'
}

Again, it is possible to use the shortcut form. If we want to declare the same dependencies by using the shortcut form, the relevant part of the build.gradle file looks as follows:

dependencies {
    compile 'foo:foo:0.1'
    testCompile 'test:test:0.1'
}

You can get more information about declaring your dependencies by reading section 50.4 How to declare your dependencies of the Gradle User Guide. We have now learned the basics of dependency management. Let's move on and implement our example application. Creating the Example Application The requirements of our example application are described in the following:

- The build script of the example application must use the Maven central repository.
- The example application must write the received message to a log by using Log4j.
- The example application must contain unit tests which ensure that the correct message is returned. These unit tests must be written by using JUnit.
- Our build script must create an executable jar file.

Let's find out how we can fulfil these requirements. 
Configuring the Repositories of Our Build One of the requirements of our example application was that its build script must use the Maven central repository. After we have configured our build script to use the Maven central repository, its source code looks as follows (the relevant part is highlighted):

apply plugin: 'java'

repositories {
    mavenCentral()
}

jar {
    manifest {
        attributes 'Main-Class': 'net.petrikainulainen.gradle.HelloWorld'
    }
}

Let's move on and declare the dependencies of our example application. Declaring the Dependencies of Our Example Application We have to declare two dependencies in the build.gradle file:

- Log4j (version 1.2.17) is used to write the received message to the log.
- JUnit (version 4.11) is used to write unit tests for our example application.

After we have declared these dependencies, the build.gradle file looks as follows (the relevant part is highlighted):

apply plugin: 'java'

repositories {
    mavenCentral()
}

dependencies {
    compile 'log4j:log4j:1.2.17'
    testCompile 'junit:junit:4.11'
}

jar {
    manifest {
        attributes 'Main-Class': 'net.petrikainulainen.gradle.HelloWorld'
    }
}

Let's move on and write some code. Writing the Code In order to fulfil the requirements of our example application, "we have to over-engineer it". We can create the example application by following these steps:

1. Create a MessageService class which returns the string 'Hello World!' when its getMessage() method is called.
2. Create a MessageServiceTest class which ensures that the getMessage() method of the MessageService class returns the string 'Hello World!'.
3. Create the main class of our application which obtains the message from a MessageService object and writes the message to a log by using Log4j.
4. Configure Log4j.

Let's go through these steps one by one. First, we have to create a MessageService class in the src/main/java/net/petrikainulainen/gradle directory and implement it. 
After we have done this, its source code looks as follows:

package net.petrikainulainen.gradle;

public class MessageService {

    public String getMessage() {
        return "Hello World!";
    }
}

Second, we have to create a MessageServiceTest class in the src/test/java/net/petrikainulainen/gradle directory and write a unit test for the getMessage() method of the MessageService class. The source code of the MessageServiceTest class looks as follows:

package net.petrikainulainen.gradle;

import org.junit.Before;
import org.junit.Test;

import static org.junit.Assert.assertEquals;

public class MessageServiceTest {

    private MessageService messageService;

    @Before
    public void setUp() {
        messageService = new MessageService();
    }

    @Test
    public void getMessage_ShouldReturnMessage() {
        assertEquals("Hello World!", messageService.getMessage());
    }
}

Third, we have to create a HelloWorld class in the src/main/java/net/petrikainulainen/gradle directory. This class is the main class of our application. It obtains the message from a MessageService object and writes it to a log by using Log4j. The source code of the HelloWorld class looks as follows:

package net.petrikainulainen.gradle;

import org.apache.log4j.Logger;

public class HelloWorld {

    private static final Logger LOGGER = Logger.getLogger(HelloWorld.class);

    public static void main(String[] args) {
        MessageService messageService = new MessageService();

        String message = messageService.getMessage();
        LOGGER.info("Received message: " + message);
    }
}

Fourth, we have to configure Log4j by using the log4j.properties file which is found in the src/main/resources directory. The log4j.properties file looks as follows:

log4j.appender.Stdout=org.apache.log4j.ConsoleAppender
log4j.appender.Stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.Stdout.layout.conversionPattern=%-5p - %-26.26c{1} - %m\n

log4j.rootLogger=DEBUG,Stdout

That is it. Let's find out how we can run the tests of our example application. 
Running the Unit Tests We can run our unit tests by using the following command: gradle test. When our test passes, we see the following output:

> gradle test
:compileJava
:processResources
:classes
:compileTestJava
:processTestResources
:testClasses
:test

BUILD SUCCESSFUL

Total time: 4.678 secs

However, if our unit test would fail, we would see the following output (the interesting section is highlighted):

> gradle test
:compileJava
:processResources
:classes
:compileTestJava
:processTestResources
:testClasses
:test

net.petrikainulainen.gradle.MessageServiceTest > getMessage_ShouldReturnMessage FAILED
    org.junit.ComparisonFailure at MessageServiceTest.java:22

1 test completed, 1 failed
:test FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':test'.
> There were failing tests. See the report at: file:///Users/loke/Projects/Java/Blog/gradle-examples/dependency-management/build/reports/tests/index.html

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.

BUILD FAILED

Total time: 4.461 secs

As we can see, if our unit tests fail, the output describes:

- which tests failed.
- how many tests were run and how many tests failed.
- the location of the test report which provides additional information about the failed (and passed) tests.

When we run our unit tests, Gradle creates test reports in the following directories:

- The build/test-results directory contains the raw data of each test run.
- The build/reports/tests directory contains an HTML report which describes the results of our tests.

The HTML test report is a very useful tool because it describes the reason why our test failed. For example, if our unit test expected that the getMessage() method of the MessageService class returns the string 'Hello Worl1d!', the HTML test report of that test case would show the expected and actual values of the failed assertion. Let's move on and find out how we can package and run our example application. 
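If you prefer to see failure details directly in the console instead of opening the HTML report, the test task can also be told to log individual test events. A small sketch (the chosen events are just one example of what the testLogging API accepts):

```groovy
test {
    testLogging {
        // print every failed test and the full exception to the console
        events "failed"
        exceptionFormat "full"
    }
}
```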
Packaging and Running Our Example Application We can package our application by using one of these commands: gradle assemble or gradle build. Both of these commands create the dependency-management.jar file in the build/libs directory. When we run our example application by using the command java -jar dependency-management.jar, we see the following output:

> java -jar dependency-management.jar
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/log4j/Logger
        at net.petrikainulainen.gradle.HelloWorld.<clinit>(HelloWorld.java:10)
Caused by: java.lang.ClassNotFoundException: org.apache.log4j.Logger
        at java.net.URLClassLoader$1.run(URLClassLoader.java:372)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        ... 1 more

The reason for this exception is that the Log4j dependency isn't found on the classpath when we run our application. The easiest way to solve this problem is to create a so-called "fat" jar file. This means that we will package the required dependencies into the created jar file. After we have followed the instructions given in the Gradle cookbook, our build script looks as follows (the relevant part is highlighted):

apply plugin: 'java'

repositories {
    mavenCentral()
}

dependencies {
    compile 'log4j:log4j:1.2.17'
    testCompile 'junit:junit:4.11'
}

jar {
    from {
        configurations.compile.collect { it.isDirectory() ? it : zipTree(it) }
    }
    manifest {
        attributes 'Main-Class': 'net.petrikainulainen.gradle.HelloWorld'
    }
}

We can now run the example application (after we have packaged it) and, as we can see, everything is working properly:

> java -jar dependency-management.jar
INFO  - HelloWorld                 - Received message: Hello World!

That is all for today. 
Let's summarize what we learned from this blog post. Summary This blog post has taught us four things:

- We learned how we can configure the repositories used by our build.
- We learned how we can declare the required dependencies and group these dependencies into configurations.
- We learned that Gradle creates an HTML test report when our tests are run.
- We learned how we can create a so-called "fat" jar file.

If you want to play around with the example application of this blog post, you can get it from GitHub.Reference: Getting Started with Gradle: Dependency Management from our JCG partner Petri Kainulainen at the Petri Kainulainen blog....

Making operations on volatile fields atomic

Overview The expected behaviour for volatile fields is that they should behave in a multi-threaded application the same as they do in a single threaded application. They are not forbidden to behave the same way, but they are not guaranteed to behave the same way. The solution in Java 5.0+ is to use the AtomicXxxx classes; however, these are relatively inefficient in terms of memory (they add a header and padding), performance (they add a reference and give little control over their relative positions), and syntactically they are not as clear to use. IMHO, a simple solution is for volatile fields to act as they might be expected to: the behaviour the JVM must already support for the Atomic classes, which is not forbidden by the current JMM (Java Memory Model) but is not guaranteed either. Why make fields volatile? The benefit of volatile fields is that they are visible across threads and some optimisations which avoid re-reading them are disabled, so the current value is always checked again even if this thread didn't change it. e.g. without volatile:

Thread 2: int a = 5;

Thread 1: a = 6;
(later)
Thread 2: System.out.println(a); // prints 5 or 6

With volatile:

Thread 2: volatile int a = 5;

Thread 1: a = 6;
(later)
Thread 2: System.out.println(a); // prints 6 given enough time.

Why not use volatile all the time? Volatile read and write access is substantially slower. When you write to a volatile field it stalls the entire CPU pipeline to ensure the data has been written to cache. Without this, there is a risk the next read of the value sees an old value, even in the same thread (see AtomicLong.lazySet() which avoids stalling the pipeline). The penalty can be in the order of 10x slower, which you don't want to be paying on every access. What are the limitations of volatile? A significant limitation is that operations on the field are not atomic, even when you might think they are. Even worse, usually there is no visible difference, i.e. 
it can appear to work for a long time, even years, and suddenly/randomly break due to an incidental change such as the version of Java used, or even where the object is loaded into memory, e.g. which programs you loaded before running the program. e.g. updating a value:

Thread 2: volatile int a = 5;

Thread 1: a += 1;
Thread 2: a += 2;
(later)
Thread 2: System.out.println(a); // prints 6, 7 or 8, even given enough time.

This is an issue because the read of a and the write of a are done separately, and you can get a race condition. 99%+ of the time it will behave as expected, but sometimes it won't. What can you do about it? You need to use the AtomicXxxx classes. These wrap volatile fields with operations which behave as expected.

Thread 2: AtomicInteger a = new AtomicInteger(5);

Thread 1: a.incrementAndGet();
Thread 2: a.addAndGet(2);
(later)
Thread 2: System.out.println(a); // prints 8 given enough time.

What do I propose? The JVM has a means to behave as expected; the only surprising thing is that you need to use a special class to do what the JMM won't guarantee for you. What I propose is that the JMM be changed to support the behaviour currently provided by the concurrency Atomic classes. In each case the single threaded behaviour is unchanged. A multi-threaded program which does not see a race condition will behave the same. The difference is that a multi-threaded program does not have to see a race condition, because the underlying behaviour is changed.

current method           suggested syntax                 notes
x.getAndIncrement()      x++ or x += 1
x.incrementAndGet()      ++x
x.getAndDecrement()      x-- or x -= 1
x.decrementAndGet()      --x
x.addAndGet(y)           (x += y)
x.getAndAdd(y)           ((x += y) - y)
x.compareAndSet(e, y)    (x == e ? x = y, true : false)   needs the comma syntax used in other languages

These operations could be supported for all the primitive types such as boolean, byte, short, int, long, float and double. 
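The difference between a plain volatile counter and an AtomicInteger is easy to demonstrate. In this sketch (class and field names are my own), two threads increment both counters the same number of times; the atomic counter always ends up exactly right, while the volatile one usually loses updates:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class VolatileVsAtomic {
    static volatile int plain = 0;               // visible across threads, but += is not atomic
    static final AtomicInteger atomic = new AtomicInteger();

    static void race() throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                plain += 1;                      // read, add, write: updates can be lost
                atomic.incrementAndGet();        // one atomic read-modify-write
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
    }

    public static void main(String[] args) throws InterruptedException {
        race();
        // atomic is always exactly 200000; plain is usually less due to lost updates
        System.out.println("plain = " + plain + ", atomic = " + atomic.get());
    }
}
```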
Additional assignment operators could be supported, such as:

operation                suggested syntax   notes
Atomic multiplication    x *= 2;
Atomic subtraction       x -= y;
Atomic division          x /= y;
Atomic modulus           x %= y;
Atomic shift             x <<= y;
Atomic shift             x >>= z;
Atomic shift             x >>>= w;
Atomic and               x &= ~y;           clears bits
Atomic or                x |= z;            sets bits
Atomic xor               x ^= w;            flips bits

What is the risk? This could break code which relies on these operations occasionally failing due to race conditions. It might not be possible to support more complex expressions in a thread safe manner. This could lead to surprising bugs as the code can look like it works, but it doesn't. Nevertheless, it will be no worse than the current state. JEP 193 – Enhanced Volatiles There is a JEP 193 to add this functionality to Java. An example is:

class Usage {
    volatile int count;
    int incrementCount() {
        return count.volatile.incrementAndGet();
    }
}

IMHO there are a few limitations in this approach:

- The syntax is a fairly significant change. Changing the JMM might not require many changes to the Java syntax, and possibly no changes to the compiler.
- It is a less general solution. It can be useful to support operations like volume += quantity; where these are double types.
- It places more burden on the developer to understand why he/she should use this instead of x++;

I am not convinced that a more cumbersome syntax makes it clearer as to what is happening. Consider this example:

volatile int a, b;

a += b;

or

a.volatile.addAndGet(b.volatile);

or

AtomicInteger a, b;

a.addAndGet(b.get());

Which of these operations, as a line, is atomic? Answer: none of them. However, systems with Intel TSX can make these atomic, and if you are going to change the behaviour of any of these lines of code I would make it the a += b; rather than invent a new syntax which does the same thing most of the time, but where one is guaranteed and the other is not. 
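Until such syntax exists, the proposed atomic multiplication and bit operations can already be emulated today with the Java 8 accumulateAndGet method, which retries a compareAndSet loop internally. A sketch (class and variable names are my own):

```java
import java.util.concurrent.atomic.AtomicLong;

public class AtomicAssignOps {
    public static void main(String[] args) {
        AtomicLong x = new AtomicLong(3);
        // the proposed atomic x *= 2; expressed as a CAS retry loop
        long afterMultiply = x.accumulateAndGet(2, (cur, arg) -> cur * arg); // 6
        // the proposed atomic x &= ~y; (here y = 2, clearing bit 1), also as a CAS retry loop
        long afterClear = x.accumulateAndGet(~2L, (cur, mask) -> cur & mask); // 4
        System.out.println(afterMultiply + " " + afterClear);
    }
}
```

Each call may apply its function more than once if another thread updates the value concurrently, so the function must be side-effect free.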
Conclusion Much of the syntactic and performance overhead of using AtomicInteger and AtomicLong could be removed if the JMM guaranteed the equivalent single threaded operations behaved as expected for multi-threaded code. This feature could be added to earlier versions of Java by using byte code instrumentation.Reference: Making operations on volatile fields atomic from our JCG partner Peter Lawrey at the Vanilla Java blog....

10 things you can do as a developer to make your app secure: #7 Logging and Intrusion Detection

This is part 7 of a series of posts on the OWASP Top 10 Proactive Development Controls: 10 things you can do as a developer to make your app secure. Logging and Intrusion Detection Logging is a straightforward part of any system. But logging is important for more than troubleshooting and debugging. It is also critical for activity auditing, intrusion detection (telling ops when the system is being hacked) and forensics (figuring out what happened after the system was hacked). You should take all of this into account in your logging strategy. What to log, what to log… and what not to log Make sure that you are always logging when, who, where and what: timestamps (you will need to take care of syncing timestamps across systems and devices or be prepared to account for differences in time zones, accuracy and resolution), user id, source IP and other address information, and event details. To make correlation and analysis easier, follow a common logging approach throughout the application and across systems where possible. Use an extensible logging framework like SLF4J with Logback or Apache Log4j/Log4j2. Compliance regulations like PCI DSS may dictate what information you need to record, when and how, who gets access to the logs, and how long you need to keep this information. You may also need to prove that audit logs and other security logs are complete and have not been tampered with (using an HMAC for example), and ensure that these logs are always archived. For these reasons, it may be better to separate out operations and debugging logs from transaction audit trails and security event logs. There is data that you must log (a complete sequential history of specific events to meet compliance or legal requirements), data that you must not log (PII or credit card data or opt-out/do-not-track data or intercepted communications), and other data you should not log (authentication information and other personal data). 
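To keep the when/who/where/what rule concrete, here is a minimal sketch. The field names and line format are my own choice, and it uses java.util.logging only so the example is self-contained; in a real application you would use SLF4J/Logback or Log4j as suggested above:

```java
import java.time.Instant;
import java.util.logging.Logger;

public class AuditLogger {
    private static final Logger LOG = Logger.getLogger(AuditLogger.class.getName());

    // one audit line always carries the when, who, where and what of an event
    static String auditLine(Instant when, String userId, String sourceIp, String event) {
        return String.format("ts=%s user=%s ip=%s event=%s", when, userId, sourceIp, event);
    }

    public static void main(String[] args) {
        LOG.info(auditLine(Instant.now(), "alice", "10.0.0.7", "LOGIN_SUCCESS"));
    }
}
```

Keeping every audit line in one fixed key=value shape makes correlation across systems much easier than free-form messages.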
And watch out for Log Forging attacks, where bad guys inject delimiters like extra CRLF sequences into text fields which they know will be logged in order to try to cover their tracks, or inject Javascript into data which will trigger an XSS attack when the log entry is displayed in a browser-based log viewer. Like other injection attacks, protect the system by encoding user data before writing it to the log. Review code for correct logging practices and test the logging code to make sure that it works. OWASP's Logging Cheat Sheet provides more guidelines on how to do logging right, and what to watch out for. AppSensor – Intrusion Detection Another OWASP project, the OWASP AppSensor, explains how to build on application logging to implement application-level intrusion detection. AppSensor outlines common detection points in an application, places where you should add checks to alert you that your system is being attacked. For example, if a server-side edit catches bad data that should already have been edited at the client, or catches a change to a non-editable field, then you either have some kind of coding bug or (more likely) somebody has bypassed client-side validation and is attacking your app. Don't just log this case and return an error: throw an alert or take some kind of action to protect the system from being attacked, like disconnecting the session. You could also check for known attack signatures. Nick Galbreath (formerly at Etsy and now at startup Signal Sciences) has done some innovative work on detecting SQL injection and HTML injection attacks by mining logs to find common fingerprints and feeding this back into filters to detect when attacks are in progress and potentially block them. 
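Encoding user data before logging it can be as simple as neutralizing the CRLF delimiters that log-forging attacks rely on. A minimal sketch (the helper name and replacement character are my choice; a real implementation might also HTML-encode values for browser-based log viewers):

```java
public class LogSanitizer {
    // replace CR and LF so an injected "fake" log entry cannot start a new line
    static String sanitize(String userInput) {
        if (userInput == null) {
            return "null";
        }
        return userInput.replace('\r', '_').replace('\n', '_');
    }

    public static void main(String[] args) {
        // attempted log forging: the attacker tries to append a forged INFO line
        String attack = "bob\r\nINFO - admin logged in";
        System.out.println("user=" + sanitize(attack));
    }
}
```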
In the next 3 posts, we’ll step back from specific problems, and look at the larger issues of secure architecture, design and requirements.Reference: 10 things you can do as a developer to make your app secure: #7 Logging and Intrusion Detection from our JCG partner Jim Bird at the Building Real Software blog....

Using @NamedEntityGraph to load JPA entities more selectively in N+1 scenarios

The N+1 problem is a common issue when working with ORM solutions. It happens when you set the fetchType for some @OneToMany relation to lazy, in order to load the child entities only when the Set/List is accessed. Let's assume we have a Customer entity with two relations: a set of orders and a set of addresses for each customer.

@OneToMany(mappedBy = "customer", cascade = CascadeType.ALL, fetch = FetchType.LAZY)
private Set<OrderEntity> orders;

@OneToMany(mappedBy = "customer", cascade = CascadeType.ALL, fetch = FetchType.LAZY)
private Set<AddressEntity> addresses;

To load all customers, we can issue the following JPQL statement and afterwards load all orders for each customer:

List<CustomerEntity> resultList = entityManager.createQuery("SELECT c FROM CustomerEntity AS c", CustomerEntity.class).getResultList();
for (CustomerEntity customerEntity : resultList) {
    Set<OrderEntity> orders = customerEntity.getOrders();
    for (OrderEntity orderEntity : orders) {
        ...
    }
}

Hibernate 4.3.5 (as shipped with JBoss AS Wildfly 8.1.0CR2) will generate the following series of SQL statements out of it for only two(!) customers in the database:

Hibernate: select customeren0_.id as id1_1_, customeren0_.name as name2_1_, customeren0_.numberOfPurchases as numberOf3_1_ from CustomerEntity customeren0_
Hibernate: select orders0_.CUSTOMER_ID as CUSTOMER4_1_0_, orders0_.id as id1_2_0_, orders0_.id as id1_2_1_, orders0_.campaignId as campaign2_2_1_, orders0_.CUSTOMER_ID as CUSTOMER4_2_1_, orders0_.timestamp as timestam3_2_1_ from OrderEntity orders0_ where orders0_.CUSTOMER_ID=?
Hibernate: select orders0_.CUSTOMER_ID as CUSTOMER4_1_0_, orders0_.id as id1_2_0_, orders0_.id as id1_2_1_, orders0_.campaignId as campaign2_2_1_, orders0_.CUSTOMER_ID as CUSTOMER4_2_1_, orders0_.timestamp as timestam3_2_1_ from OrderEntity orders0_ where orders0_.CUSTOMER_ID=?

As we can see, the first query selects all customers from the table CustomerEntity. 
The following two selects then fetch the orders for each customer loaded by the first query. With 100 customers instead of two, we would get 101 queries: one initial query to load all customers, and then one additional query per customer for its orders. That is why this problem is called N+1.

A common idiom to solve this problem is to force the ORM to generate an inner join query. In JPQL this can be done with the JOIN FETCH clause, as demonstrated in the following snippet:

    entityManager.createQuery("SELECT c FROM CustomerEntity AS c JOIN FETCH c.orders AS o", CustomerEntity.class).getResultList();

As expected, the ORM now generates an inner join with the OrderEntity table and therefore needs only one SQL statement to load all the data:

    select customeren0_.id as id1_0_0_, orders1_.id as id1_1_1_, customeren0_.name as name2_0_0_, orders1_.campaignId as campaign2_1_1_, orders1_.CUSTOMER_ID as CUSTOMER4_1_1_, orders1_.timestamp as timestam3_1_1_, orders1_.CUSTOMER_ID as CUSTOMER4_0_0__, orders1_.id as id1_1_0__ from CustomerEntity customeren0_ inner join OrderEntity orders1_ on customeren0_.id=orders1_.CUSTOMER_ID

In situations where you know that you will have to load all orders for each customer, the JOIN FETCH clause reduces the number of SQL statements from N+1 to 1. This of course comes with the drawback that the customer columns are transferred again for every order row of that customer (due to the additional customer columns in the query).

With version 2.1, the JPA specification introduces so-called NamedEntityGraphs. This annotation lets you describe the graph a JPQL query should load in more detail than a JOIN FETCH clause can, and is therefore another solution to the N+1 problem. The following example demonstrates a NamedEntityGraph for our customer entity that is supposed to load only the name of the customer and its orders. The orders are described in more detail in the subgraph ordersGraph.
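The query arithmetic above can be sketched in a few lines of plain Java (a hypothetical back-of-envelope helper, not part of the article's code):

```java
public class QueryCountDemo {

    // One select for the customers, plus one select per customer for its lazy orders.
    static int lazyQueryCount(int customers) {
        return 1 + customers;
    }

    // A JOIN FETCH collapses everything into a single joined select,
    // regardless of how many customers there are.
    static int joinFetchQueryCount(int customers) {
        return 1;
    }

    public static void main(String[] args) {
        System.out.println(lazyQueryCount(2));        // prints 3
        System.out.println(lazyQueryCount(100));      // prints 101
        System.out.println(joinFetchQueryCount(100)); // prints 1
    }
}
```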
Here we specify that we only want to load the fields id and campaignId of each order.

    @NamedEntityGraph(
        name = "CustomersWithOrderId",
        attributeNodes = {
            @NamedAttributeNode(value = "name"),
            @NamedAttributeNode(value = "orders", subgraph = "ordersGraph")
        },
        subgraphs = {
            @NamedSubgraph(
                name = "ordersGraph",
                attributeNodes = {
                    @NamedAttributeNode(value = "id"),
                    @NamedAttributeNode(value = "campaignId")
                }
            )
        }
    )

The NamedEntityGraph is loaded via the EntityManager by its name and then passed as a hint to the JPQL query:

    EntityGraph entityGraph = entityManager.getEntityGraph("CustomersWithOrderId");
    entityManager.createQuery("SELECT c FROM CustomerEntity AS c", CustomerEntity.class)
        .setHint("javax.persistence.fetchgraph", entityGraph)
        .getResultList();

Hibernate supports the @NamedEntityGraph annotation since version 4.3.0.CR1 and creates the following SQL statement for the JPQL query shown above:

    Hibernate: select customeren0_.id as id1_1_0_, orders1_.id as id1_2_1_, customeren0_.name as name2_1_0_, customeren0_.numberOfPurchases as numberOf3_1_0_, orders1_.campaignId as campaign2_2_1_, orders1_.CUSTOMER_ID as CUSTOMER4_2_1_, orders1_.timestamp as timestam3_2_1_, orders1_.CUSTOMER_ID as CUSTOMER4_1_0__, orders1_.id as id1_2_0__ from CustomerEntity customeren0_ left outer join OrderEntity orders1_ on customeren0_.id=orders1_.CUSTOMER_ID

We see that Hibernate does not issue N+1 queries; instead, the @NamedEntityGraph annotation has forced Hibernate to load the orders via a left outer join. This is of course a subtle difference to the JOIN FETCH clause, for which Hibernate created an inner join: the left outer join also loads customers for which no order exists, whereas the JOIN FETCH clause only loads customers that have at least one order. It is also interesting that Hibernate loads more than the specified attributes from the tables CustomerEntity and OrderEntity.
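Since JPA 2.1 the same graph can also be built programmatically via EntityManager.createEntityGraph, which avoids the annotation when the shape of the fetch is only known at runtime. A minimal sketch, assuming the same CustomerEntity/OrderEntity mapping as above (a fragment, not runnable outside a persistence context):

```java
import javax.persistence.EntityGraph;
import javax.persistence.Subgraph;

// Build the equivalent of "CustomersWithOrderId" at runtime.
EntityGraph<CustomerEntity> graph = entityManager.createEntityGraph(CustomerEntity.class);
graph.addAttributeNodes("name");
Subgraph<OrderEntity> ordersSubgraph = graph.addSubgraph("orders");
ordersSubgraph.addAttributeNodes("id", "campaignId");

entityManager.createQuery("SELECT c FROM CustomerEntity AS c", CustomerEntity.class)
        .setHint("javax.persistence.fetchgraph", graph)
        .getResultList();
```

The dynamic variant behaves the same as the named one with respect to the generated SQL; it only trades the compile-time declaration for runtime flexibility.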
As this conflicts with the specification of @NamedEntityGraph (section 3.7.4), I have created a JIRA issue for it.

Conclusion

We have seen that with JPA 2.1 we have two solutions for the N+1 problem: we can either use the JOIN FETCH clause to eagerly fetch a @OneToMany relation, which results in an inner join, or we can use the @NamedEntityGraph feature, which lets us specify which @OneToMany relation to load via a left outer join.

Reference: Using @NamedEntityGraph to load JPA entities more selectively in N+1 scenarios from our JCG partner Martin Mois at the Martin's Developer World blog.

Spring 4: CGLIB-based proxy classes with no default constructor

In Spring, if the class of a target object that is to be proxied doesn't implement any interfaces, a CGLIB-based proxy will be created. Prior to Spring 4, CGLIB-based proxy classes required a default constructor. This is not a limitation of the CGLIB library itself, but of Spring. Fortunately, as of Spring 4 this is no longer an issue: CGLIB-based proxy classes no longer require a default constructor. How can this impact your code? Let's see.

One of the idioms of dependency injection is constructor injection. It is generally used when the injected dependencies are required and must not change after the object is constructed. In this article I am not going to discuss why and when you should use constructor injection; I assume you either use this idiom in your code or are considering it. If you are interested in learning more, see the resources section at the bottom of this article.

Constructor injection with non-proxied beans

Having the following collaborator:

    package pl.codeleak.services;

    import org.springframework.stereotype.Service;

    @Service
    public class Collaborator {
        public String collaborate() {
            return "Collaborating";
        }
    }

we can easily inject it via constructor:

    package pl.codeleak.services;

    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.stereotype.Service;

    @Service
    public class SomeService {

        private final Collaborator collaborator;

        @Autowired
        public SomeService(Collaborator collaborator) {
            this.collaborator = collaborator;
        }

        public String businessMethod() {
            return collaborator.collaborate();
        }
    }

You may notice that both the Collaborator and the service have no interfaces, but they are not proxy candidates.
So this code will work perfectly fine with both Spring 3 and Spring 4:

    package pl.codeleak.services;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.test.context.ContextConfiguration;
    import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
    import pl.codeleak.Configuration;

    import static org.assertj.core.api.Assertions.assertThat;

    @RunWith(SpringJUnit4ClassRunner.class)
    @ContextConfiguration(classes = Configuration.class)
    public class WiringTests {

        @Autowired
        private SomeService someService;

        @Autowired
        private Collaborator collaborator;

        @Test
        public void hasValidDependencies() {
            assertThat(someService)
                .isNotNull()
                .isExactlyInstanceOf(SomeService.class);

            assertThat(collaborator)
                .isNotNull()
                .isExactlyInstanceOf(Collaborator.class);

            assertThat(someService.businessMethod())
                .isEqualTo("Collaborating");
        }
    }

Constructor injection with proxied beans

In many cases your beans need to be decorated with an AOP proxy at runtime, e.g. when you want to use declarative transactions with the @Transactional annotation. To visualize this, I created an aspect that advises all methods in SomeService.
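Stripped of the container, the idiom the test verifies is nothing more than plain constructor wiring. A self-contained sketch (class names reused from the article, Spring annotations omitted):

```java
public class ManualWiringDemo {

    static class Collaborator {
        public String collaborate() {
            return "Collaborating";
        }
    }

    // The dependency is required and final -- exactly what constructor
    // injection guarantees, with or without Spring.
    static class SomeService {
        private final Collaborator collaborator;

        SomeService(Collaborator collaborator) {
            this.collaborator = collaborator;
        }

        String businessMethod() {
            return collaborator.collaborate();
        }
    }

    public static void main(String[] args) {
        // What the container does for us at startup:
        SomeService service = new SomeService(new Collaborator());
        System.out.println(service.businessMethod()); // prints "Collaborating"
    }
}
```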
With the below aspect defined, SomeService becomes a candidate for proxying:

    package pl.codeleak.aspects;

    import org.aspectj.lang.annotation.Aspect;
    import org.aspectj.lang.annotation.Before;
    import org.springframework.stereotype.Component;

    @Aspect
    @Component
    public class DummyAspect {

        @Before("within(pl.codeleak.services.SomeService)")
        public void before() {
            // do nothing
        }
    }

When I re-run the test with Spring 3.2.9, I get the following exception:

    Could not generate CGLIB subclass of class [class pl.codeleak.services.SomeService]: Common causes of this problem include using a final class or a non-visible class; nested exception is java.lang.IllegalArgumentException: Superclass has no null constructors but no arguments were given

This could simply be fixed by giving SomeService a default, no-argument constructor, but that is not what I want to do, as it would also force me to make the dependencies non-final. Another solution would be to provide an interface for SomeService. But again, there are many situations where you don't need to create interfaces.

Updating to Spring 4 solves the problem immediately. As the documentation states: CGLIB-based proxy classes no longer require a default constructor. Support is provided via the objenesis library which is repackaged inline and distributed as part of the Spring Framework. With this strategy, no constructor at all is being invoked for proxy instances anymore.

The test I created will now fail, but it visualizes that a CGLIB proxy was created for SomeService:

    java.lang.AssertionError:
    Expecting:
     <pl.codeleak.services.SomeService@6a84a97d>
    to be exactly an instance of:
     <pl.codeleak.services.SomeService>
    but was an instance of:
     <pl.codeleak.services.SomeService$$EnhancerBySpringCGLIB$$55c3343b>

After removing the first assertion from the test, it runs perfectly fine.
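The root cause of the Spring 3 error is easy to demonstrate with plain reflection: a CGLIB proxy is a subclass, and before Spring 4 its generated constructor had to delegate to a no-argument super constructor that simply is not there. A hypothetical standalone check (class names are illustrative, not from the article):

```java
import java.lang.reflect.Constructor;

public class ProxyCtorDemo {

    // Mirrors the article's SomeService: constructor injection only.
    static class ConstructorInjectedService {
        private final String collaborator;

        ConstructorInjectedService(String collaborator) {
            this.collaborator = collaborator;
        }
    }

    // Returns true if the class declares a no-argument constructor.
    static boolean hasDefaultConstructor(Class<?> type) {
        for (Constructor<?> c : type.getDeclaredConstructors()) {
            if (c.getParameterCount() == 0) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Spring 3's CGLIB strategy needs this to be true; the Objenesis
        // strategy in Spring 4 sidesteps constructor invocation entirely.
        System.out.println(hasDefaultConstructor(ConstructorInjectedService.class)); // prints "false"
    }
}
```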
Resources

In case you need to read more about constructor dependency injection, have a look at this great article by Petri Kainulainen: http://www.petrikainulainen.net/software-development/design/why-i-changed-my-mind-about-field-injection
Core Container Improvements in Spring 4: http://docs.spring.io/spring/docs/current/spring-framework-reference/html/new-in-4.0.html#_core_container_improvements
You may also be interested in my other articles about Spring: Spring 4: @DateTimeFormat with Java 8 Date-Time API and Better error messages with Bean Validation 1.1 in Spring MVC application

Reference: Spring 4: CGLIB-based proxy classes with no default constructor from our JCG partner Rafal Borowiec at the Codeleak.pl blog.
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.