Compact Off-Heap Structures/Tuples In Java

In my last post I detailed the implications of the access patterns your code takes to main memory. Since then I’ve had a lot of questions about what can be done in Java to enable a more predictable memory layout. There are patterns that can be applied using array-backed structures, which I will discuss in another post. This post will explore how to simulate a feature sorely missing in Java – arrays of structures similar to what C has to offer.

Structures are very useful, both on the stack and on the heap. To my knowledge it is not possible to simulate this feature on the Java stack. Not being able to do this on the stack is such a shame, because it greatly limits the performance of some parallel algorithms; however, that is a rant for another day.

In Java, all user defined types have to exist on the heap. The Java heap is managed by the garbage collector in the general case; however, there is more to the wider heap in a Java process. With the introduction of direct ByteBuffers, memory can be allocated which is not tracked by the garbage collector, so that it can be made available to native code for tasks like avoiding the copying of data to and from the kernel for IO. So one reasonable approach to managing structures is to fake them within a ByteBuffer. This allows compact data representations, but has performance and size limitations: it is not possible to have a ByteBuffer greater than 2GB, and all access is bounds checked, which impacts performance. An alternative exists using Unsafe that is both faster and not size constrained like ByteBuffer.
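As a minimal sketch of that ByteBuffer approach (the two-field record layout here is an invented illustration, not the trade structure used below):

import java.nio.ByteBuffer;

// Fake an array of structures inside a single direct ByteBuffer.
// Each record is two long fields at fixed offsets within the record.
public class ByteBufferRecords {
    private static final int PRICE_OFFSET = 0;
    private static final int QUANTITY_OFFSET = 8;
    private static final int RECORD_SIZE = 16;

    private final ByteBuffer buffer;

    public ByteBufferRecords(final int numRecords) {
        // Total size is capped at 2GB, and every access below is bounds checked
        buffer = ByteBuffer.allocateDirect(numRecords * RECORD_SIZE);
    }

    public long getPrice(final int index) {
        return buffer.getLong((index * RECORD_SIZE) + PRICE_OFFSET);
    }

    public void setPrice(final int index, final long price) {
        buffer.putLong((index * RECORD_SIZE) + PRICE_OFFSET, price);
    }

    public long getQuantity(final int index) {
        return buffer.getLong((index * RECORD_SIZE) + QUANTITY_OFFSET);
    }

    public void setQuantity(final int index, final long quantity) {
        buffer.putLong((index * RECORD_SIZE) + QUANTITY_OFFSET, quantity);
    }
}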
The approach I’m about to detail is not traditional Java. If your problem space is dealing with big data, or extreme performance, then there are benefits to be had. If your data sets are small, and performance is not an issue, then run away now to avoid getting sucked into the dark arts of native memory management. The benefits of the approach I’m about to detail are:

- significantly improved performance
- a more compact data representation
- the ability to work with very large data sets while avoiding nasty GC pauses [1]

With all choices there are consequences. By taking the approach detailed below, you take responsibility for some of the memory management yourself. Getting it wrong can lead to memory leaks or, worse, you can crash the JVM! Proceed with caution…

Suitable Example – Trade Data

A common challenge faced in finance applications is capturing and working with very large volumes of order and trade data. For the example I will create a large table of in-memory trade data that can have analysis queries run against it. This table will be built using two contrasting approaches. Firstly, I’ll take the traditional Java approach of creating a large array referencing individual Trade objects. Secondly, I’ll keep the usage code identical but replace the large array and Trade objects with an off-heap array of structures that can be manipulated via a flyweight pattern. If, for the traditional Java approach, I used some other data structure, such as a Map or Tree, then the memory footprint would be even greater and the performance lower.

Traditional Java Approach

public class TestJavaMemoryLayout {
    private static final int NUM_RECORDS = 50 * 1000 * 1000;

    private static JavaMemoryTrade[] trades;

    public static void main(final String[] args) {
        for (int i = 0; i < 5; i++) {
            System.gc();
            perfRun(i);
        }
    }

    private static void perfRun(final int runNum) {
        long start = System.currentTimeMillis();

        init();

        System.out.format("Memory %,d total, %,d free\n",
                          Runtime.getRuntime().totalMemory(),
                          Runtime.getRuntime().freeMemory());

        long buyCost = 0;
        long sellCost = 0;

        for (int i = 0; i < NUM_RECORDS; i++) {
            final JavaMemoryTrade trade = get(i);

            if (trade.getSide() == 'B') {
                buyCost += (trade.getPrice() * trade.getQuantity());
            } else {
                sellCost += (trade.getPrice() * trade.getQuantity());
            }
        }

        long duration = System.currentTimeMillis() - start;
        System.out.println(runNum + " - duration " + duration + "ms");
        System.out.println("buyCost = " + buyCost + " sellCost = " + sellCost);
    }

    private static JavaMemoryTrade get(final int index) {
        return trades[index];
    }

    public static void init() {
        trades = new JavaMemoryTrade[NUM_RECORDS];

        final byte[] londonStockExchange = {'X', 'L', 'O', 'N'};
        final int venueCode = pack(londonStockExchange);

        final byte[] billiton = {'B', 'H', 'P'};
        final int instrumentCode = pack(billiton);

        for (int i = 0; i < NUM_RECORDS; i++) {
            JavaMemoryTrade trade = new JavaMemoryTrade();
            trades[i] = trade;

            trade.setTradeId(i);
            trade.setClientId(1);
            trade.setVenueCode(venueCode);
            trade.setInstrumentCode(instrumentCode);
            trade.setPrice(i);
            trade.setQuantity(i);
            trade.setSide((i & 1) == 0 ? 'B' : 'S');
        }
    }

    private static int pack(final byte[] value) {
        int result = 0;
        switch (value.length) { // fall-through is intentional
            case 4: result |= (value[3]);
            case 3: result |= ((int)value[2] << 8);
            case 2: result |= ((int)value[1] << 16);
            case 1: result |= ((int)value[0] << 24);
                break;
            default:
                throw new IllegalArgumentException("Invalid array size");
        }

        return result;
    }

    private static class JavaMemoryTrade {
        private long tradeId;
        private long clientId;
        private int venueCode;
        private int instrumentCode;
        private long price;
        private long quantity;
        private char side;

        public long getTradeId() { return tradeId; }
        public void setTradeId(final long tradeId) { this.tradeId = tradeId; }

        public long getClientId() { return clientId; }
        public void setClientId(final long clientId) { this.clientId = clientId; }

        public int getVenueCode() { return venueCode; }
        public void setVenueCode(final int venueCode) { this.venueCode = venueCode; }

        public int getInstrumentCode() { return instrumentCode; }
        public void setInstrumentCode(final int instrumentCode) { this.instrumentCode = instrumentCode; }

        public long getPrice() { return price; }
        public void setPrice(final long price) { this.price = price; }

        public long getQuantity() { return quantity; }
        public void setQuantity(final long quantity) { this.quantity = quantity; }

        public char getSide() { return side; }
        public void setSide(final char side) { this.side = side; }
    }
}
Compact Off-Heap Structures

import sun.misc.Unsafe;

import java.lang.reflect.Field;

public class TestDirectMemoryLayout {
    private static final Unsafe unsafe;
    static {
        try {
            Field field = Unsafe.class.getDeclaredField("theUnsafe");
            field.setAccessible(true);
            unsafe = (Unsafe)field.get(null);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    private static final int NUM_RECORDS = 50 * 1000 * 1000;

    private static long address;
    private static final DirectMemoryTrade flyweight = new DirectMemoryTrade();

    public static void main(final String[] args) {
        for (int i = 0; i < 5; i++) {
            System.gc();
            perfRun(i);
        }
    }

    private static void perfRun(final int runNum) {
        long start = System.currentTimeMillis();

        init();

        System.out.format("Memory %,d total, %,d free\n",
                          Runtime.getRuntime().totalMemory(),
                          Runtime.getRuntime().freeMemory());

        long buyCost = 0;
        long sellCost = 0;

        for (int i = 0; i < NUM_RECORDS; i++) {
            final DirectMemoryTrade trade = get(i);

            if (trade.getSide() == 'B') {
                buyCost += (trade.getPrice() * trade.getQuantity());
            } else {
                sellCost += (trade.getPrice() * trade.getQuantity());
            }
        }

        long duration = System.currentTimeMillis() - start;
        System.out.println(runNum + " - duration " + duration + "ms");
        System.out.println("buyCost = " + buyCost + " sellCost = " + sellCost);

        destroy();
    }

    private static DirectMemoryTrade get(final int index) {
        // The single flyweight instance is repositioned over the record at the given index
        final long offset = address + (index * DirectMemoryTrade.getObjectSize());
        flyweight.setObjectOffset(offset);
        return flyweight;
    }

    public static void init() {
        final long requiredHeap = NUM_RECORDS * DirectMemoryTrade.getObjectSize();
        address = unsafe.allocateMemory(requiredHeap);

        final byte[] londonStockExchange = {'X', 'L', 'O', 'N'};
        final int venueCode = pack(londonStockExchange);

        final byte[] billiton = {'B', 'H', 'P'};
        final int instrumentCode = pack(billiton);

        for (int i = 0; i < NUM_RECORDS; i++) {
            DirectMemoryTrade trade = get(i);

            trade.setTradeId(i);
            trade.setClientId(1);
            trade.setVenueCode(venueCode);
            trade.setInstrumentCode(instrumentCode);
            trade.setPrice(i);
            trade.setQuantity(i);
            trade.setSide((i & 1) == 0 ? 'B' : 'S');
        }
    }

    private static void destroy() {
        unsafe.freeMemory(address);
    }

    private static int pack(final byte[] value) {
        int result = 0;
        switch (value.length) { // fall-through is intentional
            case 4: result |= (value[3]);
            case 3: result |= ((int)value[2] << 8);
            case 2: result |= ((int)value[1] << 16);
            case 1: result |= ((int)value[0] << 24);
                break;
            default:
                throw new IllegalArgumentException("Invalid array size");
        }

        return result;
    }

    private static class DirectMemoryTrade {
        private static long offset = 0;

        // Field offsets within a record; each increment is the size of the preceding field
        private static final long tradeIdOffset = offset += 0;
        private static final long clientIdOffset = offset += 8;
        private static final long venueCodeOffset = offset += 8;
        private static final long instrumentCodeOffset = offset += 4;
        private static final long priceOffset = offset += 4;
        private static final long quantityOffset = offset += 8;
        private static final long sideOffset = offset += 8;

        private static final long objectSize = offset += 2;

        private long objectOffset;

        public static long getObjectSize() { return objectSize; }

        void setObjectOffset(final long objectOffset) { this.objectOffset = objectOffset; }

        public long getTradeId() { return unsafe.getLong(objectOffset + tradeIdOffset); }
        public void setTradeId(final long tradeId) { unsafe.putLong(objectOffset + tradeIdOffset, tradeId); }

        public long getClientId() { return unsafe.getLong(objectOffset + clientIdOffset); }
        public void setClientId(final long clientId) { unsafe.putLong(objectOffset + clientIdOffset, clientId); }

        public int getVenueCode() { return unsafe.getInt(objectOffset + venueCodeOffset); }
        public void setVenueCode(final int venueCode) { unsafe.putInt(objectOffset + venueCodeOffset, venueCode); }

        public int getInstrumentCode() { return unsafe.getInt(objectOffset + instrumentCodeOffset); }
        public void setInstrumentCode(final int instrumentCode) { unsafe.putInt(objectOffset + instrumentCodeOffset, instrumentCode); }

        public long getPrice() { return unsafe.getLong(objectOffset + priceOffset); }
        public void setPrice(final long price) { unsafe.putLong(objectOffset + priceOffset, price); }
        public long getQuantity() { return unsafe.getLong(objectOffset + quantityOffset); }
        public void setQuantity(final long quantity) { unsafe.putLong(objectOffset + quantityOffset, quantity); }

        public char getSide() { return unsafe.getChar(objectOffset + sideOffset); }
        public void setSide(final char side) { unsafe.putChar(objectOffset + sideOffset, side); }
    }
}

Results

Intel i7-860 @ 2.8GHz, 8GB RAM DDR3 1333MHz, Windows 7 64-bit, Java 1.7.0_07
=============================================
java -server -Xms4g -Xmx4g TestJavaMemoryLayout
Memory 4,116,054,016 total, 1,108,901,104 free
0 - duration 19334ms
Memory 4,116,054,016 total, 1,109,964,752 free
1 - duration 14295ms
Memory 4,116,054,016 total, 1,108,455,504 free
2 - duration 14272ms
Memory 3,817,799,680 total, 815,308,600 free
3 - duration 28358ms
Memory 3,817,799,680 total, 810,552,816 free
4 - duration 32487ms

java -server TestDirectMemoryLayout
Memory 128,647,168 total, 126,391,384 free
0 - duration 983ms
Memory 128,647,168 total, 126,992,160 free
1 - duration 958ms
Memory 128,647,168 total, 127,663,408 free
2 - duration 873ms
Memory 128,647,168 total, 127,663,408 free
3 - duration 886ms
Memory 128,647,168 total, 127,663,408 free
4 - duration 884ms

Intel i7-2760QM @ 2.40GHz, 8GB RAM DDR3 1600MHz, Linux 3.4.11 kernel 64-bit, Java 1.7.0_07
=================================================
java -server -Xms4g -Xmx4g TestJavaMemoryLayout
Memory 4,116,054,016 total, 1,108,912,960 free
0 - duration 12262ms
Memory 4,116,054,016 total, 1,109,962,832 free
1 - duration 9822ms
Memory 4,116,054,016 total, 1,108,458,720 free
2 - duration 10239ms
Memory 3,817,799,680 total, 815,307,640 free
3 - duration 21558ms
Memory 3,817,799,680 total, 810,551,856 free
4 - duration 23074ms

java -server TestDirectMemoryLayout
Memory 123,994,112 total, 121,818,528 free
0 - duration 634ms
Memory 123,994,112 total, 122,455,944 free
1 - duration 619ms
Memory 123,994,112 total, 123,103,320 free
2 - duration 546ms
Memory 123,994,112 total, 123,103,320 free
3 - duration 547ms
Memory 123,994,112 total, 123,103,320 free
4 - duration 534ms

Analysis

Let’s compare the results to the 3 benefits promised above.

1. Significantly improved performance

The evidence here is pretty clear cut. Using the off-heap structures approach is more than an order of magnitude faster. At the most extreme, in the 5th run on the Sandy Bridge processor, there is a 43.2× difference in the duration to complete the task. It is also a nice illustration of how well Sandy Bridge does with predictable access patterns to data. Not only is the performance significantly better, it is also more consistent. As the heap becomes fragmented, and thus access patterns become more random, the performance degrades, as can be seen in the later runs with the standard Java approach.

2. More compact data representation

In our off-heap representation each record requires 42 bytes. To store 50 million of them, as in the example, we require 2,100,000,000 bytes. The memory required by the JVM heap is:

memory required = total memory - free memory - base JVM needs
2,883,248,712 = 3,817,799,680 - 810,551,856 - 123,999,112

This implies the JVM needs ~40% more memory to represent the same data. The reason for this overhead is the array of references to the Java objects plus the object headers. In a previous post I discussed object layout in Java. When working with very large data sets this overhead can become a significant limiting factor.
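To make the arithmetic behind those figures explicit, using only the record size from the code above and the numbers just quoted:

off-heap size = 50,000,000 records x 42 bytes = 2,100,000,000 bytes
on-heap size  = 3,817,799,680 - 810,551,856 - 123,999,112 = 2,883,248,712 bytes
overhead      = 2,883,248,712 / 2,100,000,000 ≈ 1.37, which is where the ~40% figure comes from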
3. Ability to work with very large data sets while avoiding nasty GC pauses

The sample code above forces a GC cycle before each run, which can improve the consistency of the results in some cases. Feel free to remove the call to System.gc() and observe the implications for yourself. If you run the tests adding the following command line arguments, then the garbage collector will output in painful detail what happened:

-XX:+PrintGC -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintHeapAtGC -XX:+PrintGCApplicationConcurrentTime -XX:+PrintGCApplicationStoppedTime -XX:+PrintSafepointStatistics

From analysing the output I can see that the application underwent a total of 29 GC cycles. Pause times are listed below, extracted from the lines of output indicating when the application threads were stopped.

With System.gc() before each run
================================
Total time for which application threads were stopped: 0.0085280 seconds
Total time for which application threads were stopped: 0.7280530 seconds
Total time for which application threads were stopped: 8.1703460 seconds
Total time for which application threads were stopped: 5.6112210 seconds
Total time for which application threads were stopped: 1.2531370 seconds
Total time for which application threads were stopped: 7.6392250 seconds
Total time for which application threads were stopped: 5.7847050 seconds
Total time for which application threads were stopped: 1.3070470 seconds
Total time for which application threads were stopped: 8.2520880 seconds
Total time for which application threads were stopped: 6.0949910 seconds
Total time for which application threads were stopped: 1.3988480 seconds
Total time for which application threads were stopped: 8.1793240 seconds
Total time for which application threads were stopped: 6.4138720 seconds
Total time for which application threads were stopped: 4.4991670 seconds
Total time for which application threads were stopped: 4.5612290 seconds
Total time for which application threads were stopped: 0.3598490 seconds
Total time for which application threads were stopped: 0.7111000 seconds
Total time for which application threads were stopped: 1.4426750 seconds
Total time for which application threads were stopped: 1.5931500 seconds
Total time for which application threads were stopped: 10.9484920 seconds
Total time for which application threads were stopped: 7.0707230 seconds

Without System.gc() before each run
===================================
Test run times
0 - duration 12120ms
1 - duration 9439ms
2 - duration 9844ms
3 - duration 20933ms
4 - duration 23041ms

Total time for which application threads were stopped: 0.0170860 seconds
Total time for which application threads were stopped: 0.7915350 seconds
Total time for which application threads were stopped: 10.7153320 seconds
Total time for which application threads were stopped: 5.6234650 seconds
Total time for which application threads were stopped: 1.2689950 seconds
Total time for which application threads were stopped: 7.6238170 seconds
Total time for which application threads were stopped: 6.0114540 seconds
Total time for which application threads were stopped: 1.2990070 seconds
Total time for which application threads were stopped: 7.9918480 seconds
Total time for which application threads were stopped: 5.9997920 seconds
Total time for which application threads were stopped: 1.3430040 seconds
Total time for which application threads were stopped: 8.0759940 seconds
Total time for which application threads were stopped: 6.3980610 seconds
Total time for which application threads were stopped: 4.5572100 seconds
Total time for which application threads were stopped: 4.6193830 seconds
Total time for which application threads were stopped: 0.3877930 seconds
Total time for which application threads were stopped: 0.7429270 seconds
Total time for which application threads were stopped: 1.5248070 seconds
Total time for which application threads were stopped: 1.5312130 seconds
Total time for which application threads were stopped: 10.9120250 seconds
Total time for which application threads were stopped: 7.3528590 seconds

It can be seen from the output that a significant proportion of the time is spent in the garbage collector. When your threads are stopped, your application is not responsive. These tests have been done with the default GC settings. It is possible to tune the GC for better results, but this can require significant and highly skilled effort. The only JVM I know of that copes well, by not imposing long pause times even under high-throughput conditions, is the Azul concurrent compacting collector.

When profiling this application, I can see that the majority of the time is spent allocating the objects and promoting them to the old generation because they do not fit in the young generation. The initialisation costs could be removed from the timing, but that would not be realistic: with the traditional Java approach the state needs to be built up before the query can take place, so the end user of an application has to wait for the state to be built up and the query executed. This test is really quite trivial. Imagine working with similar data sets but at the 100 GB scale.

Note: when the garbage collector compacts a region, objects that were next to each other can be moved far apart. This can result in TLB and other cache misses.

Side Note On Serialization

A huge benefit of using off-heap structures in this manner is how easily they can be serialised to network, or storage, by a simple memory copy, as I have shown in the previous post. This way we can completely bypass intermediate buffer and object allocation.
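A minimal sketch of what such a memory-copy serialisation could look like, assuming the whole region fits in a single byte array (the helper method and its parameters are invented for this illustration, not code from the post):

// Copy the entire off-heap region into a byte[] that can then be written
// to a channel or file, bypassing per-object serialisation entirely.
public static byte[] serialise(final Unsafe unsafe, final long address, final long lengthInBytes) {
    if (lengthInBytes > Integer.MAX_VALUE) {
        throw new IllegalArgumentException("Region too large for a single array");
    }
    final byte[] buffer = new byte[(int)lengthInBytes];
    unsafe.copyMemory(null, address, buffer, Unsafe.ARRAY_BYTE_BASE_OFFSET, lengthInBytes);
    return buffer;
}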
Conclusion

If you are willing to do some C-style programming for large datasets, it is possible to control the memory layout in Java by going off-heap. If you do, the benefits in performance, compactness, and avoiding GC issues are significant. However, this is an approach that should not be used for all applications. Its benefits are only noticeable for very large datasets, or for the extremes of performance in throughput and/or latency.

I hope the Java community can collectively realise the importance of supporting structures both on the heap and the stack. John Rose has done some excellent work in this area, defining how tuples could be added to the JVM. His talk on Arrays 2.0 from the JVM Language Summit this year is really worth a watch. John discusses options for arrays of structures, and structures of arrays, in his talk. If the tuples, as proposed by John, were available, then the test described here could have comparable performance and be a more pleasant programming style. The whole array of structures could be allocated in a single action, thus bypassing the copy of individual objects across generations, and it would be stored in a compact contiguous fashion. This would remove the significant GC issues for this class of problem.

Lately, I was comparing standard data structures between Java and .Net. In some cases I observed a 6-10X performance advantage to .Net for things like maps and dictionaries when .Net used native structure support. Let’s get this into Java as soon as possible! It is also pretty obvious from the results that if we are to use Java for real-time analysis on big data, then our standard garbage collectors need to significantly improve and support truly concurrent operations.

[1] – To my knowledge the only JVM that deals well with very large heaps is Azul Zing.

Happy coding and don’t forget to share!

Reference: Compact Off-Heap Structures/Tuples In Java from our JCG partner Martin Thompson at the Mechanical Sympathy blog....

Should you care about Conway’s Law?

Conway’s Law says that “organizations which design systems (in the broad sense used here) are constrained to produce designs which are copies of the communication structures of these organizations.” [emphasis mine]

This was an assertion made in the 1960s, based on a small study, which has now become a truism in software development (it’s fascinating how much of what we do and think today is based on data that is 50 or more years old). There are lots of questions to ask about Conway’s Law. Should we believe it – is there evidence to support it? How important is the influence of the structure of the team that designed and built the system compared to the structure of the team that continued to change and maintain it for several years – are initial decisions more or less important? What happens as the organization structure changes over time – are these changes reflected in the structure of the code? What organization structures result in better code, or is it better to have no organization structure at all?

Conway’s Law and Collective Code Ownership

Conway’s Law is sometimes used as an argument for a “Whole Team” approach and “Collective Code Ownership” in Agile development. The position taken is that systems designed by teams structured around different specializations are of lower quality (because they are artificially constrained) than systems built by teams of “specialized generalists” or “generalizing specialists” who share responsibilities and the code (in small Scrum teams, for example).

Communications Structure and Seams

First, it is important to understand that the argument in Conway’s Law is not necessarily about how organizations are structured. It is about how people inside an organization communicate with each other – whether and how they talk to each other and share information; the freedom, frequency and form of that communication; whether it is low-bandwidth and formal/structured, or high-bandwidth and informal. It’s about the “social structure” of an organization.

There are natural seams that occur in any application architecture, as part of decomposition and assignment of responsibilities (which is what application architecture is all about): client and server separation (UI and UX work is quite different from what needs to be done on the server, and is often done with completely different technology), API boundaries with outside systems, data management, reporting, transaction management, workflow. These are different kinds of problems that require different skills to solve them. The useful argument that Conway’s Law makes is that unnatural seams, unnecessary complexity, misunderstandings and disconnects will appear in the system where people don’t communicate with each other effectively.

Conway’s Corollary

Much more interesting is what Conway’s Law means for how you should structure your development organization. This is Conway’s Corollary: “A software system whose structure closely matches its organization’s communication structure works better (defined broadly) than a subsystem whose structure differs from its organization’s communication structure.” “Better” means higher productivity for the people developing and maintaining the system, through more efficient communication and coordination, and higher quality.

In Making Software, Christian Bird at Microsoft Research (no relation) explains how important it is that an organization’s “social structure” mirrors the architecture of the system that they are building or working on.
He walks through a study on the relationship between development team organization structure and post-release defects, in this case in the organization that built Microsoft Windows Vista. This was a very large project, with thousands of developers working on tens of millions of LOC. The study found that organization structure was a better indicator of software quality than any attributes of the software itself. The more complex the organization, the more coordination required, the more chances for bugs (obvious, but worth verifying). What is most important is “geographic and structural congruence” – work that is related should be done by people who are working closely together (also obvious, and now we have data to prove it).

Conway’s Corollary and Collective Code Ownership

Conway’s Corollary argues against the “Collective Code Ownership” principle in XP, where everyone can and should work on any part of the code at any time. The Microsoft study found that where developers from different parts of the organization worked on the same code, there were more bugs. It was better to have a team own a piece of code, or at the very least act as a gatekeeper and review all changes. Work is best done by the people (or person) who understand the code the most.

Making Organizational Decisions

A second study, of 5 OSS projects, was also interesting, because it showed that even in Open Source projects people naturally form teams to work together on logically related parts of a code base. The lessons from Conway’s Corollary are that you should delay making decisions on organization until you understand the architectural relationships in a system, and that you need to reorganize the team to fit as the architecture changes over time. Dan Pritchett even suggests that if you want to change the architectural structure of a system, you should start by changing the organization structure of the team to fit the target design – forcing the team to work together to “draw the new architecture out of the code”.

Conway’s Law is less important and meaningful than people believe. Applying the argument to small teams, especially co-located Agile teams where people are all working closely together and talking constantly, is effectively irrelevant. Conway’s Corollary, however, is valuable, especially for large, distributed development organizations. It’s important for managers to ensure that the structure of the team is aligned with the architectural structure of the system – the way it is today, or the way you want it to be.

Reference: Should you care about Conway’s Law? from our JCG partner Jim Bird at the Building Real Software blog....

Eclipse refactoring on steroids

In my last post about common Java violations, I listed a set of mistakes that Java developers tend to make. While refactoring a Java project with the objective of resolving those violations, I used the refactoring features of Eclipse extensively to change the code quickly. Below is a compilation of those refactoring techniques.

1. Adding curly braces around block level statements

It is good practice to wrap block level statements with {curly braces}. Still, if there is only one statement in the block, some developers prefer not to wrap it with {}, and Checkstyle will complain about that. If you want to change this:

if(condition) doSomething();

to this:

if(condition){
    doSomething();
}

Eclipse’s source cleanup is there to help.

In Project Explorer, right click on the source folder and select Source -> Clean up... Choose “Use custom profile” and then click Configure next to the custom profile section. By default, the clean up action is configured to do multiple cleanup tasks. Since we are focused only on adding curly braces, we will disable all the other cleanup tasks: navigate through all the tabs in the Custom Clean Ups window and deselect every cleanup. Then, in the Code Style tab, select the “Use blocks in if/while/for/do statements” option and click OK. Back in the Clean Up dialog, click Next and the refactoring will occur. You’ll be presented with a review page showing the changes made.

2. Joining two if statements into one

Let’s say you have code like this:

if(isLoggedIn){
    if(isAdmin){
        doSecretStuff();
    }
}

It is safe to combine the two if statements into one, unless you have some other code in between them. Of course, you could manually edit the code to remove the second if and move its condition up. But wait, when Eclipse can do this for us, why should we do it ourselves?

Place your cursor on the if keyword of the inner if statement. Press Ctrl + 1, which will open a context menu. Select the option “Join ‘if’ statement with outer ‘if’ statement”. Voila! The two if statements are now combined into one. You’ll get:

if(isLoggedIn && isAdmin){
    doSecretStuff();
}

3. Renaming a field and its getter / setter methods

According to this, renaming an element is the most used refactoring in Eclipse. When you rename a field which has setter/getter methods, you would normally rename those methods manually as well. But Eclipse can simplify this too.

Place your cursor on the field name that you want to rename. Press Alt + Shift + R twice in quick succession, which will open the “Rename Field” dialog box. Check the options “Rename getter” and “Rename setter” while providing a new name for your field. On clicking OK, this will rename the field as well as its getter/setter methods.
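For example, with a hypothetical field (not from the article), checking both options turns this:

private String name;
public String getName() { return name; }
public void setName(final String name) { this.name = name; }

into this, in a single step:

private String title;
public String getTitle() { return title; }
public void setTitle(final String title) { this.title = title; }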
4. Inverting an if statement

Suppose you have code like this:

if(!isLoggedIn){
    // ask to login
}else{
    // allow access
}

The above code is 100% valid, but code quality tools such as Checkstyle might complain, because we are using a negated check in the first condition (i.e. !isLoggedIn). When you have only one case (just the if block), we can’t do much about it. But when you have both if and else, you can simply invert the conditions to avoid this scenario.

Place your cursor on the first if keyword. Press Ctrl + 1 and then select “Invert ‘if’ statement”. Eclipse will invert the conditions and the corresponding blocks. Finally you’ll get:

if(isLoggedIn){
    // allow access
}else{
    // ask to login
}

This helps to improve the readability of the code.

Conclusion: Of course, the above are just the tip of the iceberg. Eclipse is capable of doing much more advanced refactorings. So, what are your secret refactoring techniques?

Reference: Eclipse refactoring on steroids from our JCG partner Veera Sundar at the Veera Sundar blog....

Apache OpenOffice just graduated from the Incubator

Apache OpenOffice has just made it out of the Incubator and is now an official Apache Software Foundation project. “What?”, some people might ask now, “wasn’t it official a year or so ago?”. No, it wasn’t! When Oracle decided to donate OpenOffice.org to the Apache Software Foundation, it entered the so-called Incubator first. That was back in June 2011, and as an incubating project it was not yet official. Actually, it was hard work to make it an official ASF project. Let me explain what happened.

What happens when a project incubates?

When a project wants to join the Apache Software Foundation, there are many open questions. Who wrote the code? Does the project really own all of its intellectual property? Which license does the code use? Is there a working community? Usually a couple of long-term Apache activists join the project as mentors. In the case of OpenOffice, there were several well known and respected community members involved, for example Jim Jagielski (ASF President), Sam Ruby (who has so many roles at the ASF that it is said Sam Ruby does not refer to a person but to a whole team), Ross Gardler (actually on the ASF board too), Shane Curcuru (ASF trademark expert), Joe Schaefer (one of the ASF infra gurus), Danese Cooper (better read her Wikipedia entry) and Noirin Plunkett, who is also an officer of the ASF. Oh, and me – the only one without a Wikipedia entry. You can imagine how excited I was to see so many experienced people joining as mentors. Of course you can learn much from them, and this is what I did. As a mentor you have not only the chance to look at the gory details of an incubation – you have the duty to do so. Only when the project is “running” like an Apache project – often referred to as the Apache Way, which describes core values like “being open” – will it graduate out of the Incubator and become an official top level project. You can then be assured that licensing problems are no longer there and the project has clean IP.

OpenOffice and some of its issues

The mentors look at all of these questions and advise the project on solving them. Mentors usually say things like: “you cannot use dependency $a, because it uses license $x. These are not compatible.” They say this because the Apache Software Foundation only releases code licensed under the Apache License. Oracle’s OpenOffice.org had a lot of dependencies, and some were GPL’ed. The GPL follows a different philosophy, and unfortunately the two licenses are not fully compatible. One of the first hurdles was to make sure everything which would be published by the OpenOffice project was compatible with the Apache License. If you have ever coded on a huge project, you know how painful it can be to look at every single dependency you might use.

Mentors also look at the community. In the case of OpenOffice, there was a totally different style of “project management”. It was – more or less – leadership based. But at the ASF there are no “real” leaders, or at least there is no formal role of a leader. There are people who do stuff, and when they do stuff, they somehow lead it. Finally the project agrees or disagrees with votes. We call that a do-ocracy (or so). But there is never ever one person who can decide what will happen and when. The Apache style is not for everybody, but I am glad to say that many, many people on this project changed their way of working without much pain. The community of OpenOffice is huge. It was overwhelmingly huge.
There were parts of OpenOffice which required some special thought, like the official OpenOffice forums. These forums had once been running more or less independently, but now they were about to become part of the project. In other terms: the people who were moderating/administrating the forums needed to become Apache committers, even though they would not write a single line of code. It is often misunderstood that you would need to write code to join a project as a committer, but this is not true. Apache projects usually are glad about every contribution and will respect you for it. If you write docs, you are able to join. If you are active as a supporter on the mailing lists, you are also able to join. We had to do much work to integrate the forum people into the OpenOffice community, and this community into the Apache community. There were language barriers and concerns. I mean, some folks just wanted to post in the forums as always – why did they need to sign a CLA? Well, because we are concerned about the IP, and because we want them to join our community – fully. Besides, we had not had forums at the ASF before. How to operate them? But there were some great volunteers who succeeded at this job. This is the thing with Apache: we are one community. Community over code, it is often said. With this incubation we had to bring a fully fledged community into ours. We needed to mentor without being arrogant. I hope it worked out that way (I doubt everybody will agree). But it was difficult. The folks of OpenOffice needed to bend more than we did. We more or less changed some infrastructure things, like running the forums on our servers, but the OpenOffice community needed to change the way they operate. For that I can only give all the people involved my deepest respect. When two communities grow together and one community cannot move as far as the other, there are often misunderstandings and of course hurt feelings. But in just this little time (since June 2011!) it worked out. Here is a great quote from the official announcement:

“The OpenOffice graduation is the official recognition that the project is now able to self-manage not only in technical matters, but also in community issues,” said Andrea Pescetti, Vice President of Apache OpenOffice. “The ‘Apache Way’ and its methods, such as taking every decision in public with total transparency, have allowed the project to attract and successfully engage new volunteers, and to elect an active and diverse Project Management Committee that will be able to guarantee a stable future to Apache OpenOffice.”

Yup, that’s it.

The first release

It was not only impressive to see the community grow. One of the most impressive things I have ever seen was that the OpenOffice people – surrounded by naysayers and other destructive elements – simply made what they liked. They made a new release. With a completely new infrastructure. With brand new requirements. With mentors at their backs. And with a growing and successful LibreOffice community on the other side. But they kept going and finally they made it. For a project of this size and with these restrictions, I can just say: “wow guys, that was incredible.” Check out their releases here: openoffice.apache.org. 20 million other people have done so since the first release came out in May 2012!

And what next?

Incubation is over. My role in this project is done. OpenOffice is now self-governing, and they totally deserve it.
Now they can say they are an official project, and users can use software which is guaranteed to ship under the permissive Apache License 2.0, which makes it possible to use it in your own products. There will be some tasks to be done post graduation, but these are just small steps. Graduation is important from a psychological point of view. From a technical point of view it means some redirections, and then heading on to the next release. In any case, I was glad to get such great insight, even though it took a huge amount of my energy. Somehow I am glad to unsubscribe, but somehow I will miss this exciting project. Anyway, thanks guys for allowing me to learn so much. And I wish you all the best for the future. I now think it is a bright one.

At our conference

Did you know there are a couple of great OpenOffice talks at ApacheCon EU?

Reference: Apache OpenOffice just graduated from the Incubator from our JCG partner Christian Grobmeier at the PHP und Java Entwickler blog....

Spring MVC for Atom Feeds

How do you add Atom feeds to your web application with just two classes? How about with Spring MVC? Here are my assumptions:

- you are using the Spring framework
- you have some entity, say “News”, that you want to publish in your feeds
- your News entity has creationDate, title, and shortDescription
- you have some repository/DAO, say NewsRepository, that will return the news from your database
- you want to write as little as possible
- you don’t want to format Atom (XML) by hand

You actually do NOT need to be using Spring MVC in your application already. If you do, skip to step 3.

Step 1: add the Spring MVC dependency to your application

With Maven that will be:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-webmvc</artifactId>
    <version>3.1.0.RELEASE</version>
</dependency>

Step 2: add the Spring MVC DispatcherServlet

With web.xml that would be:

<servlet>
    <servlet-name>dispatcher</servlet-name>
    <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
    <init-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>classpath:spring-mvc.xml</param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
    <servlet-name>dispatcher</servlet-name>
    <url-pattern>/feed</url-pattern>
</servlet-mapping>

Notice that I set the url-pattern to “/feed”, which means I don’t want Spring MVC to handle any other URLs in my app (I’m using a different web framework for the rest of the app). I also give it a brand new contextConfigLocation, where only the MVC configuration is kept. Remember that when you add a DispatcherServlet to an app that already has Spring (from a ContextLoaderListener, for example), your context is inherited from the global one, so you should not create beans that exist there again, or include XML that defines them. Watch out for the Spring context coming up twice, and refer to the Spring or Servlet documentation to understand what’s happening.

Step 3: add ROME – a library to handle the Atom format

With Maven that is:

<dependency>
    <groupId>net.java.dev.rome</groupId>
    <artifactId>rome</artifactId>
    <version>1.0.0</version>
</dependency>
Step 4: write your very simple controller

// imports added for completeness; notNull and hasText come from Spring's Assert
import static org.springframework.util.Assert.hasText;
import static org.springframework.util.Assert.notNull;

import java.util.Date;
import java.util.List;

import org.springframework.stereotype.Controller;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.servlet.ModelAndView;

@Controller
public class FeedController {
    static final String LAST_UPDATE_VIEW_KEY = "lastUpdate";
    static final String NEWS_VIEW_KEY = "news";
    private NewsRepository newsRepository;
    private String viewName;

    protected FeedController() {} // required by cglib

    public FeedController(NewsRepository newsRepository, String viewName) {
        notNull(newsRepository);
        hasText(viewName);
        this.newsRepository = newsRepository;
        this.viewName = viewName;
    }

    @RequestMapping(value = "/feed", method = RequestMethod.GET)
    @Transactional
    public ModelAndView feed() {
        ModelAndView modelAndView = new ModelAndView();
        modelAndView.setViewName(viewName);
        List<News> news = newsRepository.fetchPublished();
        modelAndView.addObject(NEWS_VIEW_KEY, news);
        modelAndView.addObject(LAST_UPDATE_VIEW_KEY, getCreationDateOfTheLast(news));
        return modelAndView;
    }

    private Date getCreationDateOfTheLast(List<News> news) {
        if (news.size() > 0) {
            return news.get(0).getCreationDate();
        }
        return new Date(0);
    }
}

And here’s a test for it, in case you want to copy & paste (who doesn’t?):

// static imports assumed: org.mockito.BDDMockito.given,
// org.fest.assertions.Assertions.assertThat, org.fest.assertions.MapAssert.entry,
// com.google.common.collect.Lists.newArrayList

@RunWith(MockitoJUnitRunner.class)
public class FeedControllerShould {
    @Mock
    private NewsRepository newsRepository;
    private Date FORMER_ENTRY_CREATION_DATE = new Date(1);
    private Date LATTER_ENTRY_CREATION_DATE = new Date(2);
    private ArrayList<News> newsList;
    private FeedController feedController;

    @Before
    public void prepareNewsList() {
        News news1 = new News().title("title1").creationDate(FORMER_ENTRY_CREATION_DATE);
        News news2 = new News().title("title2").creationDate(LATTER_ENTRY_CREATION_DATE);
        newsList = newArrayList(news2, news1);
    }

    @Before
    public void prepareFeedController() {
        feedController = new FeedController(newsRepository, "viewName");
    }

    @Test
    public void returnViewWithNews() {
        //given
        given(newsRepository.fetchPublished()).willReturn(newsList);

        //when
        ModelAndView modelAndView = feedController.feed();

        //then
        assertThat(modelAndView.getModel())
            .includes(entry(FeedController.NEWS_VIEW_KEY, newsList));
    }

    @Test
    public void returnViewWithLastUpdateTime() {
        //given
        given(newsRepository.fetchPublished()).willReturn(newsList);

        //when
        ModelAndView modelAndView = feedController.feed();

        //then
        assertThat(modelAndView.getModel())
            .includes(entry(FeedController.LAST_UPDATE_VIEW_KEY, LATTER_ENTRY_CREATION_DATE));
    }

    @Test
    public void returnTheBeginningOfTimeAsLastUpdateInViewWhenListIsEmpty() {
        //given
        given(newsRepository.fetchPublished()).willReturn(new ArrayList<News>());

        //when
        ModelAndView modelAndView = feedController.feed();

        //then
        assertThat(modelAndView.getModel())
            .includes(entry(FeedController.LAST_UPDATE_VIEW_KEY, new Date(0)));
    }
}

Notice: here I’m using fest-assert and mockito. The dependencies are:

<dependency>
    <groupId>org.easytesting</groupId>
    <artifactId>fest-assert</artifactId>
    <version>1.4</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.mockito</groupId>
    <artifactId>mockito-all</artifactId>
    <version>1.8.5</version>
    <scope>test</scope>
</dependency>
Step 5: write your very simple view

Here’s where all the magic formatting happens. Be sure to take a look at all the methods of the Entry class, as there is quite a lot you may want to use/fill.

import org.springframework.web.servlet.view.feed.AbstractAtomFeedView;
[...]

public class AtomFeedView extends AbstractAtomFeedView {
    private String feedId = "tag:yourFantastiSiteName";
    private String title = "yourFantastiSiteName: news";
    private String newsAbsoluteUrl = "http://yourfanstasticsiteUrl.com/news/";

    @Override
    protected void buildFeedMetadata(Map<String, Object> model, Feed feed, HttpServletRequest request) {
        feed.setId(feedId);
        feed.setTitle(title);
        setUpdatedIfNeeded(model, feed);
    }

    private void setUpdatedIfNeeded(Map<String, Object> model, Feed feed) {
        Date lastUpdate = (Date)model.get(FeedController.LAST_UPDATE_VIEW_KEY);
        // only move the feed timestamp forward, guarding against a null lastUpdate
        if (lastUpdate != null && (feed.getUpdated() == null || lastUpdate.compareTo(feed.getUpdated()) > 0)) {
            feed.setUpdated(lastUpdate);
        }
    }

    @Override
    protected List<Entry> buildFeedEntries(Map<String, Object> model, HttpServletRequest request,
            HttpServletResponse response) throws Exception {
        @SuppressWarnings("unchecked")
        List<News> newsList = (List<News>)model.get(FeedController.NEWS_VIEW_KEY);
        List<Entry> entries = new ArrayList<Entry>();
        for (News news : newsList) {
            addEntry(entries, news);
        }
        return entries;
    }

    private void addEntry(List<Entry> entries, News news) {
        Entry entry = new Entry();
        entry.setId(feedId + ", " + news.getId());
        entry.setTitle(news.getTitle());
        entry.setUpdated(news.getCreationDate());
        entry = setSummary(news, entry);
        entry = setLink(news, entry);
        entries.add(entry);
    }

    private Entry setSummary(News news, Entry entry) {
        Content summary = new Content();
        summary.setValue(news.getShortDescription());
        entry.setSummary(summary);
        return entry;
    }

    private Entry setLink(News news, Entry entry) {
        Link link = new Link();
        link.setType("text/html");
        // because I have a different controller to show news at http://yourfanstasticsiteUrl.com/news/ID
        link.setHref(newsAbsoluteUrl + news.getId());
        entry.setAlternateLinks(newArrayList(link));
        return entry;
    }
}

Step 6: add your classes to your Spring context

I’m using the XML approach, because I’m old and I love XML. No, seriously, I use XML because I may want to declare the FeedController a few times with different views (RSS 1.0, RSS 2.0, etc.). So this is the aforementioned spring-mvc.xml:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean class="org.springframework.web.servlet.view.ContentNegotiatingViewResolver">
        <property name="mediaTypes">
            <map>
                <entry key="atom" value="application/atom+xml"/>
                <entry key="html" value="text/html"/>
            </map>
        </property>
        <property name="viewResolvers">
            <list>
                <bean class="org.springframework.web.servlet.view.BeanNameViewResolver"/>
            </list>
        </property>
    </bean>

    <bean class="eu.margiel.pages.confitura.feed.FeedController">
        <constructor-arg index="0" ref="newsRepository"/>
        <constructor-arg index="1" value="atomFeedView"/>
    </bean>

    <bean id="atomFeedView" class="eu.margiel.pages.confitura.feed.AtomFeedView"/>
</beans>

And you are done. I’ve been asked a few times before to put all the working code in some public repo, so this time it’s the other way around: I’ve described things that I had already published, and you can grab the commit from Bitbucket.

Reference: Atom Feeds with Spring MVC from our JCG partner Jakub Nabrdalik at the Solid Craft blog....

Generate QR Code image from Java Program

If you are tech and gadget savvy, then you must be aware of QR codes. You will find them everywhere these days – in blogs, websites and even in some public places. They are very popular in mobile apps, where you scan the QR code using a QR code scanner app and it will show you the text, or redirect you to the web page if it’s a URL. I came across these recently and found them very interesting. If you want to know more about QR codes, you can find a lot of useful information on the Wikipedia QR Code page.

When I found these kinds of images on so many websites, I started looking at how to generate them using Java code. I looked into some APIs available as open source in the market and found zxing to be the simplest and best to use.
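If you build with Maven, the ZXing artifacts can be pulled in with something like the following (the coordinates and version here are an assumption on my part – check for the current release):

<dependency>
    <groupId>com.google.zxing</groupId>
    <artifactId>core</artifactId>
    <version>2.0</version>
</dependency>
<dependency>
    <groupId>com.google.zxing</groupId>
    <artifactId>javase</artifactId>
    <version>2.0</version>
</dependency>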
Here is the program you can use to create a QR code image with the zxing API:

package com.adly.generator;

import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.Hashtable;

import javax.imageio.ImageIO;

import com.google.zxing.BarcodeFormat;
import com.google.zxing.EncodeHintType;
import com.google.zxing.WriterException;
import com.google.zxing.common.BitMatrix;
import com.google.zxing.qrcode.QRCodeWriter;
import com.google.zxing.qrcode.decoder.ErrorCorrectionLevel;

public class GenerateQRCode {

    /**
     * @param args
     * @throws WriterException
     * @throws IOException
     */
    public static void main(String[] args) throws WriterException, IOException {
        String qrCodeText = "http://www.journaldev.com";
        String filePath = "D:\\Pankaj\\JD.png";
        int size = 125;
        String fileType = "png";
        File qrFile = new File(filePath);
        createQRImage(qrFile, qrCodeText, size, fileType);
        System.out.println("DONE");
    }

    private static void createQRImage(File qrFile, String qrCodeText, int size, String fileType)
            throws WriterException, IOException {
        // Create the BitMatrix for the QR-Code that encodes the given String
        Hashtable<EncodeHintType, ErrorCorrectionLevel> hintMap =
                new Hashtable<EncodeHintType, ErrorCorrectionLevel>();
        hintMap.put(EncodeHintType.ERROR_CORRECTION, ErrorCorrectionLevel.L);
        QRCodeWriter qrCodeWriter = new QRCodeWriter();
        BitMatrix byteMatrix = qrCodeWriter.encode(qrCodeText, BarcodeFormat.QR_CODE, size, size, hintMap);

        // Make the BufferedImage that is to hold the QRCode
        int matrixWidth = byteMatrix.getWidth();
        BufferedImage image = new BufferedImage(matrixWidth, matrixWidth, BufferedImage.TYPE_INT_RGB);
        image.createGraphics();

        Graphics2D graphics = (Graphics2D) image.getGraphics();
        graphics.setColor(Color.WHITE);
        graphics.fillRect(0, 0, matrixWidth, matrixWidth);

        // Paint and save the image using the BitMatrix
        graphics.setColor(Color.BLACK);
        for (int i = 0; i < matrixWidth; i++) {
            for (int j = 0; j < matrixWidth; j++) {
                if (byteMatrix.get(i, j)) {
                    graphics.fillRect(i, j, 1, 1);
                }
            }
        }
        ImageIO.write(image, fileType, qrFile);
    }
}

Here is the QR code image file created by this program. You can use your mobile QR code scanner app to test it. It should point to the JournalDev home URL.

If you don’t have a mobile app to test it, don’t worry: you can test it with the zxing API through the command line too. I am on Windows, and here is the command to test it. If you are on Unix/Linux/Mac OS then change it accordingly.

D:\Pankaj\zxing>java -cp javase\javase.jar;core\core.jar com.google.zxing.client.j2se.CommandLineRunner D:\Pankaj\JD.png
file:/D:/Pankaj/JD.png (format: QR_CODE, type: URI):
Raw result:
http://www.journaldev.com
Parsed result:
http://www.journaldev.com
Found 4 result points.
Point 0: (35.5,89.5)
Point 1: (35.5,35.5)
Point 2: (89.5,35.5)
Point 3: (80.5,80.5)

Tip for Dynamic QR Code Generation

If you want to generate QR codes dynamically, you can do it using Google Chart Tools. For the above scenario, the URL would be https://chart.googleapis.com/chart?chs=125x125&cht=qr&chl=http://www.journaldev.com

Happy coding and don’t forget to share!

Reference: Generate QR Code image from Java Program from our JCG partner Pankaj Kumar at the Developer Recipes blog....

Top 7 tips for succeeding in a technical interview for software engineers

In this post I would like to write about how to succeed in a technical interview, based on my experience as an interviewer. Most interviews follow certain patterns. If you understand them and frame your responses accordingly, you can clear any interview. If you don’t know the material this might not help you, but if you are prepared, this article will help you show off your full potential.

If you are skillful, the only reason you can lose an interview is lack of preparation. You may know all the material, but you still need to prepare by reading books, articles, etc. These may not teach you anything new, but they will help in organizing the things that you already know. Once you have organized information, it is really easy to access it. You should read not only for interviews; make it a practice and get better at your job.

Most of the time the interviewer is looking for a candidate who can work with him. The vacancy may be in another team, but they use this parameter to judge. This article mostly contains general tips, targeted at candidates with 2 to 6 years of experience.

1. Be honest and don’t bluff

Answer what you know, confidently. If you have been asked a question that you don’t know, start by saying “I am not sure, but I think it is ...”. Never give a wrong answer confidently. That will make them doubt your correct answers too, or feel that they were guesses. You can’t use this technique for every question, but I would think 25% is a reasonable amount. Most importantly, this shows your ability to think and a never-say-die attitude. No one wants to work with people who say “I can’t do this”. Try to do something with every question.

2. Be ready to write code

If you are asked to write some code, be careful and follow some basic standards. I have heard people tell me “I forgot the syntax...” – and this for the syntax of a for loop. No one expects you to remember everything, but basics like loops, if conditions, the main method, and exceptions are never to be forgotten. If you have forgotten them, brush up on them. Always write code with good indentation, using lots of white space. That might make up for your bad handwriting!

3. Get ready to explain your project

As engineers, you have to understand the business before you start coding for it, so you should be able to explain what is being done in your project. Write down 3-4 lines that explain the project at a high level. On hearing them, someone outside your team should get an idea of what it is about. Because we always work inside on features, it is often difficult to frame these lines. Check your client’s internal communications to see how they market it, and get some clues from that. Practice what you are going to say with friends and make sure you are on point.

Once you have explained the business need, you will be asked about the technical architecture of the project. You have to be prepared with an architecture diagram that shows the interaction of components in your project. It doesn’t have to be in any specific UML format, but make sure you can explain things using the diagram you have drawn. For example, if you are working on a web application, show how the data flows from the UI to the DB. You can show the different layers involved, technologies used, etc. The most important part is that you should be clear in your mind about what you are currently working on.

4. Convert arguments into conversation

Even if you know the other person is wrong, do not argue; instead, continue the conversation by saying something like “OK, but I am not so sure if that is correct, I will check it out”.
This keeps the discussion on good terms. Be an active listener during the interview, and refer to your experience when you are answering.

5. Be prepared for the WHY question

Good interviewers focus on the question “Why?”. It might start with “What”, but it will end in “Why?”. For example, in Java a typical question would be “What is the difference between String and StringBuffer?”. A follow-up why question will be something like “Why does String behave so-and-so?” or “How is it done?”. Be ready to give inside information by answering the “How?” and “Why?” parts of the question.

6. Tell them about your best achievement

During your work there might be something that you consider your best achievement. It is important to describe it in such a way that the interviewer feels you did something extraordinary there. So prepare a believable story about how your abilities helped you complete that task. It is important to prepare this in advance, because it takes time to dig through your memory and find such situations.

7. “Do you have any questions for me?”

This question gets repeated in every single interview. Here you don’t actually care about the answers; but you should make yourself look good by asking “smart” questions. This article will help you with that.

Reference: Top 7 tips for succeeding in a technical interview for software engineers from our JCG partner Manu PK at The Object Oriented Life blog....

Clean code with aspects

In my previous post I described the alphabet conversion and mentioned that we used AspectJ to solve that task, but I did not explain how AspectJ works or what aspects are in general. So in the next few lines I will explain: what Aspect Oriented Programming is and why we need it; what AspectJ is; how to use AspectJ with Spring (configuring AspectJ and Spring to work together); and I will explain aspects using the example from the previous post.

What Is Aspect Oriented Programming and why we need it

During software development we can use different programming paradigms, such as OOP (object oriented programming) or POP (procedural oriented programming). Today most of us use object oriented methodologies to solve real-life problems during the software development process. But in our work we constantly meet code that cuts across our code base, breaking its modularity and making it dirty. This kind of code usually has no business value of its own, but we need it to solve our problems. Take database transactions as an example: transactions are very important for our software because they take care of data consistency. The code which starts and handles a transaction is very important for our application, but it is purely technical (starting, committing and rolling back transactions). Such things make it difficult to see the real meaning of the code (its real business value). Of course, I will not give an example of handling transactions with aspects, because there are plenty of frameworks which take care of transactions for us; I mention transactions only because you probably know what it takes to insert data into a database using the plain JDBC API. To make our code cleaner we use design patterns, which is a good approach to problem solving. But sometimes the use of design patterns does not lead to an easy solution, and most of us will resort to the easier option, which produces 'dirty' code. In such situations we should give the Aspect Oriented approach a chance. When we think about AOP we should not think of something totally new; we should think of AOP as a complement to OOP. AOP is there to make code modularisation easier, to keep code cleaner, and to give us a faster and easier understanding of what a part of the application should do. AOP introduces a few new concepts which allow easier code modularisation. If we want to use aspects efficiently we need to know the basic principles and terminology. When we start using AOP we will meet these new terms:

Crosscutting concern: code which should be moved into a separate module (e.g. code for handling transactions).
Aspect: a module which contains such concerns.
Pointcut: think of it as a pointer which determines when the corresponding code should be run.
Advice: the code which runs when a join point is reached.
Inter-type declaration: allows modification of a class's structure.
Aspect weaving: the mechanism which coordinates the integration of aspects with the rest of the system.

I will show at the end what they are and how to use them within an example.

What is AspectJ

AspectJ is an extension of the Java programming language which allows the use of AOP concepts within Java. When you use AspectJ you do not need to make any changes to your existing code. AspectJ extends Java with a new construct called an aspect, and since AspectJ 5 you can use an annotation-based development style.
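Since the examples below use the annotation style, here is a minimal sketch of how the terms above map onto AspectJ 5 annotations. The class, package and pointcut are hypothetical, not part of this article's project:

import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.annotation.Pointcut;

// The aspect: a module grouping related crosscutting code
@Aspect
public class LoggingAspect {

    // The pointcut: selects join points (here, any method in a service package)
    @Pointcut("execution(* com.example.service..*.*(..))")
    public void serviceLayer() { }

    // The advice: code that runs at each selected join point
    @Before("serviceLayer()")
    public void logEntry() {
        System.out.println("entering a service method");
    }
}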
AspectJ and Spring

The Spring framework already provides its own AOP implementation. Spring AOP is a simpler solution than AspectJ, but it is not as robust. So if you want to use aspects in your Spring application, you should be familiar with the possibilities of Spring AOP before choosing AspectJ to do the work. Before we see the example of using aspects, I will show you how to integrate AspectJ with Spring and how to configure Tomcat to run an AspectJ application with Spring. In this example I used LTW (load-time weaving) of aspects, so I will start by explaining how to set it up from Spring. It is easy; just add the next line to your application configuration file:

<context:load-time-weaver aspectj-weaving="autodetect"/>

That is all that needs to be done in the Spring configuration. The next step is the configuration of Tomcat. We need to define a new class loader for the application. This class loader needs to be able to do load-time weaving, so we use:

<Loader loaderClass="org.springframework.instrument.classloading.tomcat.TomcatInstrumentableClassLoader" />

The loader needs to be on the Tomcat classpath before you can use it. Of course, in order to make this work, we also need to create an aop.xml file. This file contains the instructions used by the class loader during the class transformation process. Here is the aop.xml file which I used for the alphabet conversion:

<aspectj>
    <weaver options="-Xset:weaveJavaxPackages=true">
        <!-- only weave classes in our application-specific packages -->
        <include within="ba.codecentric.medica.model.*" />
        <include within="ba.codecentric.medica..*.service.*" />
        <include within="ba.codecentric.medica.controller..*" />
        <include within="ba.codecentric.medica.utils.ModelMapper" />
        <include within="ba.codecentric.medica.utils.RedirectHelper" />
        <include within="ba.codecentric.medica.aop.aspect.CharacterConvertionAspect" />
        <include within="ba.codecentric.medica.security.UserAuthenticationProvider" />
        <include within="ba.codecentric.medica.wraper.MedicaRequestWrapper"/>
    </weaver>
    <aspects>
        <!-- weave in just this aspect -->
        <aspect name="ba.codecentric.medica.aop.aspect.CharacterConvertionAspect" />
    </aspects>
</aspectj>

This last XML file is the most interesting one for those of you willing to try AspectJ: it drives the weaving process. The weaver section contains the information about what should be woven, so this file will include all classes inside:

ba.codecentric.medica.model.*
ba.codecentric.medica..*.service.*
ba.codecentric.medica.controller..*
ba.codecentric.medica.utils.ModelMapper
ba.codecentric.medica.utils.RedirectHelper
ba.codecentric.medica.aop.aspect.CharacterConvertionAspect
ba.codecentric.medica.security.UserAuthenticationProvider
ba.codecentric.medica.wraper.MedicaRequestWrapper

The first line includes all classes inside the model package. The second one includes all classes in service sub-packages of the ba.codecentric.medica package (e.g. ba.codecentric.medica.hospitalisation.service). The third one includes everything below the controller package, and the remaining lines include the specific classes listed. The options attribute defines additional options to be used during the weaving process; here, -Xset:weaveJavaxPackages=true instructs AspectJ to weave javax packages as well. The aspects section contains the list of aspects to be used during the weaving process. For more information about XML configuration, see the AspectJ documentation.
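As a side note: on newer Spring versions (3.1 and later) the same switch can be flipped from Java configuration instead of XML. A minimal sketch, assuming a Spring 3.1+ application; the class name is illustrative:

import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.EnableLoadTimeWeaving;
import org.springframework.context.annotation.EnableLoadTimeWeaving.AspectJWeaving;

// Equivalent of <context:load-time-weaver aspectj-weaving="autodetect"/>
@Configuration
@EnableLoadTimeWeaving(aspectjWeaving = AspectJWeaving.AUTODETECT)
public class AppConfig {
}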
Example of AspectJ usage

I prefer using annotations, so the next example shows how to use AspectJ with annotations. Annotation-driven development with AspectJ is possible from AspectJ 5 onwards. Here is the code of the complete aspect containing the concerns used for the alphabet conversion:

package ba.codecentric.medica.aop.aspect;

import java.util.List;
import java.util.Map;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.Signature;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

import ba.codecentric.medica.utils.CharacterConverter;
import ba.codecentric.medica.utils.ContextHelper;
import ba.codecentric.medica.utils.LanguageHelper;

/**
 * Aspect used for transformation of characters from one alphabet to another.
 *
 * @author igor
 */
@Aspect
public class CharacterConvertionAspect {

    private static Log LOG = LogFactory.getLog(CharacterConvertionAspect.class);

    public int getConvertTo() {
        return getLanguageHelper().getConvertTo();
    }

    protected LanguageHelper getLanguageHelper() {
        return ContextHelper.getBean("languageHelper");
    }

    public CharacterConvertionAspect() {
        LOG.info("Character converter aspect created");
    }

    @SuppressWarnings("rawtypes")
    @Around("execution(public java.lang.String ba.codecentric.medica.model..*.get*(..))"
            + " && !cflow(execution(* ba.codecentric.medica.controller..*.*(..)))"
            + " && !cflow(execution(public void ba.codecentric.medica..*.service..*.*(..)))"
            + " && !cflow(execution(* ba.codecentric.medica.security.UserAuthenticationProvider.*(..)))")
    public Object convertCharacters(ProceedingJoinPoint pjp) throws Throwable {
        LOG.info("Character conversion triggered");
        Object value = pjp.proceed();
        if (value instanceof String) {
            LOG.info("Convert:" + value);
            Signature signature = pjp.getSignature();
            Class type = signature.getDeclaringType();
            String methodName = signature.getName();
            Map<Class, List<String>> skipConvertionMap = getBlackList();
            if (skipConvertionMap.containsKey(type)) {
                List<String> list = skipConvertionMap.get(type);
                if (list == null || list.contains(methodName)) {
                    LOG.info("Value will not be converted because it is on the blacklist");
                    return value;
                }
            }
            return getConverter().convertCharacters((String) value, getConvertTo());
        }
        LOG.info("Conversion will not be performed (" + value + ")");
        return value;
    }

    @Around("execution(public void ba.codecentric.medica.model..*.set*(java.lang.String))")
    public Object convertCharactersToLat(ProceedingJoinPoint pjp) throws Throwable {
        Object value = pjp.getArgs()[0];
        LOG.info("Converting value:" + value + ", before persisting");
        if (value instanceof String) {
            value = getConverter().convertCharacters((String) value, CharacterConverter.TO_LAT);
        }
        return pjp.proceed(new Object[] { value });
    }

    /**
     * Convert parameter to Latin alphabet.
     *
     * @param pjp
     * @return
     * @throws Throwable
     */
    @Around("execution(public * ba.codecentric.medica.wraper.MedicaRequestWrapper.getParameter*(..))")
    public Object convertParametersToLat(ProceedingJoinPoint pjp) throws Throwable {
        Object value = pjp.proceed();
        return getConverter().convert(value, CharacterConverter.TO_LAT);
    }

    /**
     * If the result of the invocation is a String, it should be converted to the chosen alphabet.
     *
     * @param jp
     * @return converted value
     * @throws Throwable
     */
    @Around("execution(* ba.codecentric.medica.controller..*.*(..))")
    public Object procedWithControllerInvocation(ProceedingJoinPoint jp) throws Throwable {
        Object value = jp.proceed();
        return getConverter().convert(value, getConvertTo());
    }

    public CharacterConverter getConverter() {
        return ContextHelper.getBean("characterConverter");
    }

    @SuppressWarnings("rawtypes")
    public Map<Class, List<String>> getBlackList() {
        return ContextHelper.getBean("blackList");
    }
}

First of all, we can see that the class is annotated with the @Aspect annotation. This indicates that the class is actually an aspect. An aspect is a construct which contains related cross-cutting concerns; we can look at it as a module which contains the cross-cutting code and defines when and how that code will be used.

@Around("execution(public void ba.codecentric.medica.model..*.set*(java.lang.String))")
public Object convertCharactersToLat(ProceedingJoinPoint pjp) throws Throwable {
    Object value = pjp.getArgs()[0];
    LOG.debug("Converting value:" + value + ", before persisting");
    if (value instanceof String) {
        value = getConverter().convertCharacters((String) value, CharacterConverter.TO_LAT);
    }
    return pjp.proceed(new Object[] { value });
}

This method is annotated with the @Around annotation, which represents an around advice. I have already mentioned that advice is the place which contains the cross-cutting code. In this example I only used 'around' advice, but there are also before, after, after returning and after throwing advice; all advice types except around do not return a value. The content inside the @Around annotation defines when the advice code will be woven in. This can also be done by defining named pointcuts; in this example I did not use them because it is a simple aspect, but with pointcut annotations you can define really robust join points.
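For illustration, here is a hedged sketch of how the setter advice above could be rewritten with a named @Pointcut; the class and pointcut names are my own, not from the original code:

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;

@Aspect
public class NamedPointcutVariant {

    // Named pointcut: String setters on the model's entity beans
    @Pointcut("execution(public void ba.codecentric.medica.model..*.set*(java.lang.String))")
    public void entityStringSetters() { }

    // The advice now references the pointcut by name instead of inlining the expression
    @Around("entityStringSetters()")
    public Object convertBeforePersisting(ProceedingJoinPoint pjp) throws Throwable {
        Object value = pjp.getArgs()[0];
        // conversion to the Latin alphabet would happen here, as in the aspect above
        return pjp.proceed(new Object[] { value });
    }
}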
Returning to the advice above: it will be executed when setting values on entity beans through setters that take a single parameter of type String. The ProceedingJoinPoint pjp in the example represents the join point, which in this case is the setter method of an entity bean. The value sent to the entity setter method is first converted, and then the setter method is called with the converted value. If I had not used aspects, my code could look like this:

public void setJmbg(String jmbg) {
    this.jmbg = getConverter().convertCharacters(jmbg, CharacterConverter.TO_LAT);
}

I have already said that for this example I use LTW, so in the next few lines I will try to explain the weaving process briefly. Weaving is the process in which a class is transformed with the defined aspects. For a better understanding of weaving, you can think of it, in this case, as code injection around the called method.

Conclusion

In this example I covered some basic principles of aspect oriented programming with AspectJ. This aspect helped me keep the code clean. The result of using the aspect is a clean separation between the cross-cutting code and the code with real business value. The controllers, services and entity beans stayed clean, and the technical code is extracted into a separate module, which lets you understand and maintain your code more easily. For more detailed information about defining pointcuts, and about the AspectJ project in general, see the project page. Happy coding and don't forget to share!
Reference: Clean code with aspects from our JCG partner Igor Madjeric at the Igor Madjeric blog....

Factory Design Pattern Case Study

I had a job checking our project's code quality, and I had to report back to my team leader on any obstacles I found in the project. I found a lot of leaks, and I think it would be good to discuss them on the blog; not to mock the author, but to learn and improve ourselves together. Take this code, a part of what I found in our project:

public ContactInfoBean(final Reseller resellerInfo) {
    switch (resellerInfo.getType()) {
        case PROGRAM_CONTACT:
            readExecutiveInfo(resellerInfo);
            break;
        case FILE_CONTACT:
            readOperationalInfo(resellerInfo);
            break;
        default:
            break;
    }
}

The code works fine and does its job pretty well, but this code style brings some problems with it. The class will keep growing as the business changes and, as usual, the bigger a class gets, the 'merrier' it is to maintain. Most likely this class will end up serving more than one purpose, which we can call low cohesion.

Better OOP Approach

The better approach for the case above would be to use the Factory design pattern. We can let a factory of readers create each instance according to its type. It becomes easier to add new reader types, since we just need to create a new class and make a small modification in the factory class; the caller class won't grow and keeps its current shape.

public interface InfoReader {
    public void readInfo();
}

public class ExecutiveReader implements InfoReader {
    public void readInfo() {
        // override
    }
}

public class OperationalReader implements InfoReader {
    public void readInfo() {
        // override
    }
}

And the factory:

public class InfoReaderFactory {

    private static final int PROGRAM_CONTACT = 1;
    private static final int FILE_CONTACT = 2;

    public static InfoReader getInstance(Reseller resellerInfo) {
        InfoReader instance = null;
        switch (resellerInfo.getType()) {
            case PROGRAM_CONTACT:
                instance = new ExecutiveReader();
                break;
            case FILE_CONTACT:
                instance = new OperationalReader();
                break;
            default:
                throw new IllegalArgumentException("Unknown Reseller");
        }
        return instance;
    }
}

And now the caller:

InfoReader reader = InfoReaderFactory.getInstance(resellerInfo);
reader.readInfo();

The Benefits

By handling this case with the Factory design pattern, we gain some benefits (a hedged variant sketch follows this list):

Specifying a class for one task means easier maintenance, since each class has one purpose only (modularity/high cohesion). E.g., OperationalReader only reads operational data, nothing else.
If one day we need another reader (say, NonOperationalReader), we just create a new class that extends (or implements) InfoReader and override our own readInfo() function. The caller class is not affected; we only make a small modification in the factory code:

public class InfoReaderFactory {

    private static final int PROGRAM_CONTACT = 1;
    private static final int FILE_CONTACT = 2;
    private static final int NEW_READER = 3;

    public static InfoReader getInstance(Reseller resellerInfo) {
        InfoReader instance = null;
        switch (resellerInfo.getType()) {
            case PROGRAM_CONTACT:
                instance = new ExecutiveReader();
                break;
            case FILE_CONTACT:
                instance = new OperationalReader();
                break;
            case NEW_READER:
                instance = new NonOperationalReader();
                break;
            default:
                throw new IllegalArgumentException("Unknown Reseller");
        }
        return instance;
    }
}

Higher reusability of the parent's components (inheritance): since we have a parent type (InfoReader), we can put common functions and helpers inside InfoReader, and all of the derived classes (ExecutiveReader and OperationalReader) can reuse those common components.
Avoiding code redundancy can also minimize coding time, although this depends on how you write the code and cannot be guaranteed.
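As a side note, if the reseller type can be modelled as an enum, the int constants disappear and the compiler checks the cases for us. A minimal sketch; the enum and factory here are my illustration, not part of the original project:

public enum ResellerType { PROGRAM_CONTACT, FILE_CONTACT }

public final class EnumInfoReaderFactory {

    private EnumInfoReaderFactory() { }

    public static InfoReader getInstance(ResellerType type) {
        switch (type) {
            case PROGRAM_CONTACT:
                return new ExecutiveReader();
            case FILE_CONTACT:
                return new OperationalReader();
            default:
                throw new IllegalArgumentException("Unknown Reseller: " + type);
        }
    }
}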
But, It's Running Perfectly, Should We Change It?

Obviously the answer is a big NO. This is only a case study, for your further experience and knowledge. OOP is good; apply it wherever it is applicable. But the most important thing is: if it's running, don't change it. It would be ridiculous to ruin entirely working code just to pursue some OOP approach. Don't be naive either; no one can achieve perfect code. What matters most is that we know what the better approach is. Reference: Case Study: Factory Design Pattern from our JCG partner Ronald Djunaedi at the Naming Exception blog....

Setting up and playing with Apache Solr on Tomcat

A while back I had a little time to play with Solr, and was instantly blown away by the performance we could achieve on some of our bigger datasets. Here are some of my initial setup and configuration learnings, which may help someone get it up and running a little faster. We start by setting both up on Windows: download and extract Apache Tomcat and Solr and copy them into your working folders.

Tomcat Setup

If you want Tomcat as a service, install it using: bin\service.bat install

Edit the Tomcat users file under conf:

<role rolename="admin"/>
<role rolename="manager-gui"/>
<user username="tomcat" password="tomcat" roles="admin,manager-gui"/>

If you are going to query Solr using international characters (>127) via HTTP GET, you must configure Tomcat to conform to the URI standard by accepting percent-encoded UTF-8. Add URIEncoding="UTF-8" to the connector in conf/server.xml:

<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"
           URIEncoding="UTF-8" />

Copy the contents of example\solr to your Solr home directory, e.g. D:\Java\apache-solr-3.6.0\home, and create the following context fragment at $CATALINA_HOME/conf/Catalina/localhost/solr.xml, pointing to your Solr home:

<?xml version="1.0" encoding="UTF-8"?>
<Context docBase="D:\Java\apache-tomcat-7.0.27\webapps\solr.war" debug="0" crossContext="true">
    <Environment name="solr/home" type="java.lang.String" value="D:\Java\apache-solr-3.6.0\home" override="true" />
</Context>

Start up Tomcat, log in, and deploy the solr.war.

Solr Setup

It should then be available at http://localhost:8080/solr/admin/. To create a quick test using SolrJ that creates and reads data, grab the following Maven libs:

<dependency>
    <groupId>org.apache.solr</groupId>
    <artifactId>apache-solr-solrj</artifactId>
    <version>3.6.0</version>
    <type>jar</type>
    <scope>compile</scope>
</dependency>
<dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpclient</artifactId>
    <version>4.1</version>
    <scope>compile</scope>
</dependency>
<dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpcore</artifactId>
    <version>4.1</version>
    <scope>compile</scope>
</dependency>
<dependency>
    <groupId>org.apache.james</groupId>
    <artifactId>apache-mime4j</artifactId>
    <version>0.6.1</version>
    <scope>compile</scope>
</dependency>
<dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpmime</artifactId>
    <version>4.1</version>
    <scope>compile</scope>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.6.1</version>
    <scope>compile</scope>
</dependency>
<dependency>
    <groupId>commons-logging</groupId>
    <artifactId>commons-logging</artifactId>
    <version>1.1.1</version>
    <scope>compile</scope>
</dependency>
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.9</version>
    <scope>test</scope>
</dependency>

JUnit test:

package za.co.discovery.ecs.solr.test;

import java.io.IOException;
import java.net.MalformedURLException;
import java.util.ArrayList;
import java.util.Collection;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.SolrDocumentList;
import org.apache.solr.common.SolrInputDocument;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.JUnit4;

@RunWith(JUnit4.class)
public class TestSolr {

    private SolrServer server;

    /**
     * Set up: point SolrJ at the local server and clear the index.
     */
    @Before
    public void setup() {
        server = new HttpSolrServer("http://localhost:8080/solr/");
        try {
            server.deleteByQuery("*:*");
        } catch (SolrServerException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    /**
     * Test adding documents and reading them back, sorted by price.
     *
     * @throws MalformedURLException error
     */
    @Test
    public void testAdding() throws MalformedURLException {
        try {
            final SolrInputDocument doc1 = new SolrInputDocument();
            doc1.addField("id", "id1", 1.0f);
            doc1.addField("name", "doc1", 1.0f);
            doc1.addField("price", 10);

            final SolrInputDocument doc2 = new SolrInputDocument();
            doc2.addField("id", "id2", 1.0f);
            doc2.addField("name", "doc2", 1.0f);
            doc2.addField("price", 20);

            final Collection<SolrInputDocument> docs = new ArrayList<SolrInputDocument>();
            docs.add(doc1);
            docs.add(doc2);

            server.add(docs);
            server.commit();

            final SolrQuery query = new SolrQuery();
            query.setQuery("*:*");
            query.addSortField("price", SolrQuery.ORDER.asc);

            final QueryResponse rsp = server.query(query);
            final SolrDocumentList solrDocumentList = rsp.getResults();
            for (final SolrDocument doc : solrDocumentList) {
                final String name = (String) doc.getFieldValue("name");
                final String id = (String) doc.getFieldValue("id"); // id is the uniqueKey field
                System.out.println("Name:" + name + " id:" + id);
            }
        } catch (SolrServerException e) {
            e.printStackTrace();
            Assert.fail(e.getMessage());
        } catch (IOException e) {
            e.printStackTrace();
            Assert.fail(e.getMessage());
        }
    }
}

Adding data directly from the DB

First you need to add the relevant DB libraries to the classpath. Then create a data-config.xml as below; if you require custom fields, those can be specified under the fields tag in schema.xml, shown after the data-config.xml.

<dataConfig>
    <dataSource name="jdbc" driver="oracle.jdbc.driver.OracleDriver"
                url="jdbc:oracle:thin:@localhost:1525:DB" user="user" password="pass"/>
    <document name="products">
        <entity name="item" query="select * from demo">
            <field column="ID" name="id" />
            <field column="DEMO" name="demo" />
            <entity name="feature" query="select description from feature where item_id='${item.ID}'">
                <field name="features" column="description" />
            </entity>
            <entity name="item_category" query="select CATEGORY_ID from item_category where item_id='${item.ID}'">
                <entity name="category" query="select description from category where id = '${item_category.CATEGORY_ID}'">
                    <field column="description" name="cat" />
                </entity>
            </entity>
        </entity>
    </document>
</dataConfig>

A custom field in the schema.xml:

<fields>
    <field name="DEMO" type="string" indexed="true" stored="true" required="true" />
</fields>

The handler has to be registered in solrconfig.xml, pointing at the data-config.xml, as follows:

<requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
    <lst name="defaults">
        <str name="config">data-config.xml</str>
    </lst>
</requestHandler>

Once that is all set up, a full import can be done with the following: http://localhost:8080/solr/admin/dataimport?command=full-import
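If you want to kick off that import from code rather than a browser, a plain-JDK sketch like the following would do it; the URL simply mirrors the one above and is assumed to match your local setup:

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class TriggerFullImport {
    public static void main(String[] args) throws IOException {
        // Same endpoint as above; adjust host/port/path to your installation
        URL url = new URL("http://localhost:8080/solr/admin/dataimport?command=full-import");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");
        System.out.println("DataImportHandler HTTP status: " + connection.getResponseCode());
        connection.disconnect();
    }
}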
Then you should be good to go with some lightning-fast data retrieval. Reference: Setting up and playing with Apache Solr on Tomcat from our JCG partner Brian Du Preez at the Zen in the art of IT blog....