

Java Exception: java.lang.NoSuchMethodError

If you look at the error message java.lang.NoSuchMethodError, you can see that the Java Virtual Machine is trying to indicate to us that the method you invoked is not available in the class or in an interface. You may also have seen this error thrown when executing a class that has no public static void main() method. To learn the reason behind this, read this post.

When you try to invoke a method that is no longer available in a class, the compiler itself shows the error message “cannot find symbol”. So you may wonder how this error can be thrown when launching a program or an application. I have explained the cause of this issue using the following programs. Let’s have a class Nomethod and a class Pro1 as follows.

Nomethod class:

import java.util.*;

class Nomethod
{
    public static void main(String args[])
    {
        Pro1 s = new Pro1();
        s.display();
    }
}

Pro1 class:

class Pro1
{
    public void display()
    {
        System.out.println("I am inside display");
    }
}

When you execute this program it will work fine without showing any errors. Now look at what happens when I change the class Pro1 as follows and compile this class alone.

Example 1:

class Pro1
{
}

Example 2:

class Pro1
{
    public int display()
    {
        System.out.println("I am inside display");
        return 1; // for example, I have included a statement like this
    }
}

Now, if you execute the class Nomethod without recompiling it, you will be surprised by a java.lang.NoSuchMethodError at run-time.

1. If you change the class Pro1 as shown in Example 1, this error is thrown because there is no method display() available in that class.
2. If you consider Example 2, this error is thrown because the signature of the method display() has been changed.

If you understand these examples, then you will have understood the reason why this error is thrown when executing a class that has no main() method.
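As an aside, the difference between the compile-time and run-time views can also be probed programmatically. The sketch below is my own illustration (not from the original post): it uses reflection to ask a class at run-time whether a public no-arg method exists, which is one way to fail fast with a clear message instead of hitting a NoSuchMethodError deep inside a call.

```java
// Hypothetical diagnostic: check at startup that a method we depend on
// actually exists in the class loaded from the class path. This surfaces
// a binary-incompatibility as a clear report instead of a surprise
// NoSuchMethodError at the point of invocation.
public class BinaryCompatibilityCheck {

    /** Returns true if clazz declares a public no-arg method with the given name. */
    public static boolean hasMethod(final Class<?> clazz, final String methodName) {
        try {
            clazz.getMethod(methodName);
            return true;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // String.trim() exists; String has no display() method.
        System.out.println(hasMethod(String.class, "trim"));    // prints true
        System.out.println(hasMethod(String.class, "display")); // prints false
    }
}
```

Running such a check against Pro1 before calling display() would report the missing or changed method up front.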
The real cause is that “binary compatibility with the pre-existing binaries (classes) has been compromised by the new binaries (modified classes)”. When you change the signature of a method, or delete a method, in a particular class and compile that class alone, the other classes that invoke the method have no idea about its new state, so this error is thrown at run-time. The same applies to interfaces: if you change the signature of a method or delete a method from an interface, this error will also be thrown.

What could be the solution for this? If you recompile the class that invokes the modified or deleted method, the problem is shown at compile-time itself and you can take the necessary steps to resolve it.

Note: Things may get even worse. Consider a situation where even recompiling does not reveal the error. What will you do then? Say, for example, you include an older version of a package in your project and place it in the extension library, while you also have the newer package (in which the signature of the method has been changed) on the class path. When compiling the classes, the compiler searches the extension libraries and bootstrap libraries to resolve the references, but the Java virtual machine searches only the class path (for third-party libraries) that has been specified. So when using a new package in your application, ensure that the settings relevant to the older version have been updated, and read the documentation of the newer package to learn what changes have been made in it.

Reference: java.lang.NoSuchMethodError from our JCG partner Ganesh Bhuddhan at the java errors and exceptions blog....

Memory Access Patterns Are Important

In high-performance computing it is often said that the cost of a cache-miss is the largest performance penalty for an algorithm. For many years the increase in speed of our processors has greatly outstripped latency gains to main-memory. Bandwidth to main-memory has greatly increased via wider, and multi-channel, buses; however, the latency has not significantly reduced. To hide this latency our processors employ ever more complex cache sub-systems that have many layers.

The 1994 paper “Hitting the memory wall: implications of the obvious” describes the problem and goes on to argue that caches do not ultimately help because of compulsory cache-misses. I aim to show that by using access patterns which display consideration for the cache hierarchy, this conclusion is not inevitable.

Let’s start putting the problem in context with some examples. Our hardware tries to hide the main-memory latency via a number of techniques. Basically, three major bets are taken on memory access patterns:

Temporal: Memory accessed recently will likely be required again soon.
Spatial: Adjacent memory is likely to be required soon.
Striding: Memory access is likely to follow a predictable pattern.

To illustrate these three bets in action, let’s write some code and measure the results:

1. Walk through memory in a linear fashion, being completely predictable.
2. Pseudo-randomly walk round memory within a restricted area, then move on. This restricted area is what is commonly known as an operating system page of memory.
3. Pseudo-randomly walk around a large area of the heap.

Code

The following code should be run with the -Xmx4g JVM option.
public class TestMemoryAccessPatterns
{
    private static final int LONG_SIZE = 8;
    private static final int PAGE_SIZE = 2 * 1024 * 1024;
    private static final int ONE_GIG = 1024 * 1024 * 1024;
    private static final long TWO_GIG = 2L * ONE_GIG;
    private static final int ARRAY_SIZE = (int)(TWO_GIG / LONG_SIZE);
    private static final int WORDS_PER_PAGE = PAGE_SIZE / LONG_SIZE;
    private static final int ARRAY_MASK = ARRAY_SIZE - 1;
    private static final int PAGE_MASK = WORDS_PER_PAGE - 1;
    private static final int PRIME_INC = 514229;
    private static final long[] memory = new long[ARRAY_SIZE];

    static
    {
        for (int i = 0; i < ARRAY_SIZE; i++)
        {
            memory[i] = 777;
        }
    }

    public enum StrideType
    {
        LINEAR_WALK
        {
            public int next(final int pageOffset, final int wordOffset, final int pos)
            {
                return (pos + 1) & ARRAY_MASK;
            }
        },

        RANDOM_PAGE_WALK
        {
            public int next(final int pageOffset, final int wordOffset, final int pos)
            {
                return pageOffset + ((pos + PRIME_INC) & PAGE_MASK);
            }
        },

        RANDOM_HEAP_WALK
        {
            public int next(final int pageOffset, final int wordOffset, final int pos)
            {
                return (pos + PRIME_INC) & ARRAY_MASK;
            }
        };

        public abstract int next(int pageOffset, int wordOffset, int pos);
    }

    public static void main(final String[] args)
    {
        final StrideType strideType;
        switch (Integer.parseInt(args[0]))
        {
            case 1:
                strideType = StrideType.LINEAR_WALK;
                break;
            case 2:
                strideType = StrideType.RANDOM_PAGE_WALK;
                break;
            case 3:
                strideType = StrideType.RANDOM_HEAP_WALK;
                break;
            default:
                throw new IllegalArgumentException("Unknown StrideType");
        }

        for (int i = 0; i < 5; i++)
        {
            perfTest(i, strideType);
        }
    }

    private static void perfTest(final int runNumber, final StrideType strideType)
    {
        final long start = System.nanoTime();

        int pos = -1;
        long result = 0;
        for (int pageOffset = 0; pageOffset < ARRAY_SIZE; pageOffset += WORDS_PER_PAGE)
        {
            for (int wordOffset = pageOffset, limit = pageOffset + WORDS_PER_PAGE;
                 wordOffset < limit;
                 wordOffset++)
            {
                pos = strideType.next(pageOffset, wordOffset, pos);
                result += memory[pos];
            }
        }

        final long duration = System.nanoTime() - start;
        final double nsOp = duration / (double)ARRAY_SIZE;

        if (208574349312L != result)
        {
            throw new IllegalStateException();
        }

        System.out.format("%d - %.2fns %s\n",
                          Integer.valueOf(runNumber),
                          Double.valueOf(nsOp),
                          strideType);
    }
}

Results

Intel U4100 @ 1.3GHz, 4GB RAM DDR2 800MHz, Windows 7 64-bit, Java 1.7.0_05
===========================================
0 - 2.38ns LINEAR_WALK
1 - 2.41ns LINEAR_WALK
2 - 2.35ns LINEAR_WALK
3 - 2.36ns LINEAR_WALK
4 - 2.39ns LINEAR_WALK

0 - 12.45ns RANDOM_PAGE_WALK
1 - 12.27ns RANDOM_PAGE_WALK
2 - 12.17ns RANDOM_PAGE_WALK
3 - 12.22ns RANDOM_PAGE_WALK
4 - 12.18ns RANDOM_PAGE_WALK

0 - 152.86ns RANDOM_HEAP_WALK
1 - 151.80ns RANDOM_HEAP_WALK
2 - 151.72ns RANDOM_HEAP_WALK
3 - 151.91ns RANDOM_HEAP_WALK
4 - 151.36ns RANDOM_HEAP_WALK

Intel i7-860 @ 2.8GHz, 8GB RAM DDR3 1333MHz, Windows 7 64-bit, Java 1.7.0_05
=============================================
0 - 1.06ns LINEAR_WALK
1 - 1.05ns LINEAR_WALK
2 - 0.98ns LINEAR_WALK
3 - 1.00ns LINEAR_WALK
4 - 1.00ns LINEAR_WALK

0 - 3.80ns RANDOM_PAGE_WALK
1 - 3.85ns RANDOM_PAGE_WALK
2 - 3.79ns RANDOM_PAGE_WALK
3 - 3.65ns RANDOM_PAGE_WALK
4 - 3.64ns RANDOM_PAGE_WALK

0 - 30.04ns RANDOM_HEAP_WALK
1 - 29.05ns RANDOM_HEAP_WALK
2 - 29.14ns RANDOM_HEAP_WALK
3 - 28.88ns RANDOM_HEAP_WALK
4 - 29.57ns RANDOM_HEAP_WALK

Intel i7-2760QM @ 2.40GHz, 8GB RAM DDR3 1600MHz, Linux 3.4.6 kernel 64-bit, Java 1.7.0_05
=================================================
0 - 0.91ns LINEAR_WALK
1 - 0.92ns LINEAR_WALK
2 - 0.88ns LINEAR_WALK
3 - 0.89ns LINEAR_WALK
4 - 0.89ns LINEAR_WALK

0 - 3.29ns RANDOM_PAGE_WALK
1 - 3.35ns RANDOM_PAGE_WALK
2 - 3.33ns RANDOM_PAGE_WALK
3 - 3.31ns RANDOM_PAGE_WALK
4 - 3.30ns RANDOM_PAGE_WALK

0 - 9.58ns RANDOM_HEAP_WALK
1 - 9.20ns RANDOM_HEAP_WALK
2 - 9.44ns RANDOM_HEAP_WALK
3 - 9.46ns RANDOM_HEAP_WALK
4 - 9.47ns RANDOM_HEAP_WALK

Analysis

I ran the code on 3 different CPU architectures, illustrating generational steps forward for Intel.
It is clear from the results that each generation has become progressively better at hiding the latency to main-memory, based on the 3 bets described above, for a relatively small heap. This is because the size and sophistication of the various caches keep improving. However, as memory size increases they become less effective. For example, if the array is doubled to 4GB in size, then the average latency increases from ~30ns to ~55ns for the i7-860 doing the random heap walk.

It seems that for the linear walk case, memory latency does not exist. However, as we walk around memory in an ever more random pattern, the latency starts to become very apparent.

The random heap walk produced an interesting result. This is our worst-case scenario, and given the hardware specifications of these systems, we could be looking at 150ns, 65ns, and 75ns for the above tests respectively, based on memory controller and memory module latencies. For the Nehalem (i7-860) I can further subvert the cache sub-system by using a 4GB array, resulting in ~55ns on average per iteration. The i7-2760QM has larger load buffers and TLB caches, and Linux is running with transparent huge pages, which are all working to further hide the latency. By playing with different prime numbers for the stride, results can vary wildly depending on processor type, e.g. try PRIME_INC = 39916801 for Nehalem. I’d like to test this on a much larger heap with Sandy Bridge.

The main takeaway is that the more predictable the pattern of access to memory, the better the cache sub-systems are at hiding main-memory latency. Let’s look at these cache sub-systems in a little detail to try and understand the observed results.

Hardware Components

We have many layers of cache, plus the pre-fetchers, to consider for how latency gets hidden. In this section I’ll try and cover the major components used to hide latency that our hardware and systems-software friends have put in place.
We will investigate these latency-hiding components and use the Linux perf and Google Lightweight Performance Counters utilities to retrieve the performance counters from our CPUs, which tell us how effective these components are when we execute our programs. Performance counters are CPU-specific, and what I’ve used here are specific to Sandy Bridge.

Data Caches

Processors typically have 2 or 3 layers of data cache. Each layer, as we move out, is progressively larger with increasing latency. The latest Intel processors have 3 layers (L1D, L2, and L3); with sizes 32KB, 256KB, and 4-30MB; and ~1ns, ~4ns, and ~15ns latency respectively for a 3.0GHz CPU.

Data caches are effectively hardware hash tables with a fixed number of slots for each hash value. These slots are known as “ways”. An 8-way associative cache will have 8 slots to hold values for addresses that hash to the same cache location. Within these slots the data caches do not store words, they store cache-lines of multiple words. For an Intel processor these cache-lines are typically 64 bytes, that is 8 words on a 64-bit machine. This plays to the spatial bet that adjacent memory is likely to be required soon, which is typically the case if we think of arrays or fields of an object.

Data caches are typically evicted in an LRU manner. Caches work by using a write-back algorithm where stores need only be propagated to main-memory when a modified cache-line is evicted. This gives rise to the interesting phenomenon that a load can cause a write-back to the outer cache layers and eventually to main-memory.
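To see the spatial bet pay off in your own code, consider the following sketch of mine (sizes are arbitrary, and timings will vary by machine): summing a 2D array row by row touches adjacent words within each cache-line, while summing it column by column strides a whole row ahead on every access and wastes most of each fetched line.

```java
// Sketch: the spatial bet in action. Row-major traversal reads adjacent
// words (one new cache-line per 8 longs); column-major traversal strides
// N * 8 bytes per access, defeating the cache-line fill.
public class CacheLineDemo {
    static final int N = 2048;
    static final long[][] grid = new long[N][N];

    static {
        for (long[] row : grid) {
            java.util.Arrays.fill(row, 1L);
        }
    }

    static long sumRowMajor() {
        long sum = 0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                sum += grid[i][j];  // adjacent words: cache friendly
        return sum;
    }

    static long sumColumnMajor() {
        long sum = 0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                sum += grid[i][j];  // large stride: cache hostile
        return sum;
    }

    public static void main(String[] args) {
        long t0 = System.nanoTime();
        long a = sumRowMajor();
        long t1 = System.nanoTime();
        long b = sumColumnMajor();
        long t2 = System.nanoTime();

        System.out.printf("row-major:    sum=%d in %dms%n", a, (t1 - t0) / 1_000_000);
        System.out.printf("column-major: sum=%d in %dms%n", b, (t2 - t1) / 1_000_000);
    }
}
```

Both loops compute the same sum; only the order of memory accesses differs, which is exactly the variable the cache sub-system cares about.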
perf stat -e L1-dcache-loads,L1-dcache-load-misses java -Xmx4g TestMemoryAccessPatterns $

Performance counter stats for 'java -Xmx4g TestMemoryAccessPatterns 1':
  1,496,626,053 L1-dcache-loads
    274,255,164 L1-dcache-misses   #  18.32% of all L1-dcache hits

Performance counter stats for 'java -Xmx4g TestMemoryAccessPatterns 2':
  1,537,057,965 L1-dcache-loads
  1,570,105,933 L1-dcache-misses   # 102.15% of all L1-dcache hits

Performance counter stats for 'java -Xmx4g TestMemoryAccessPatterns 3':
  4,321,888,497 L1-dcache-loads
  1,780,223,433 L1-dcache-misses   #  41.19% of all L1-dcache hits

likwid-perfctr -C 2 -g L2CACHE java -Xmx4g TestMemoryAccessPatterns $

java -Xmx4g TestMemoryAccessPatterns 1
+-----------------------+-------------+
|         Event         |   core 2    |
+-----------------------+-------------+
| INSTR_RETIRED_ANY     | 5.94918e+09 |
| CPU_CLK_UNHALTED_CORE | 5.15969e+09 |
| L2_TRANS_ALL_REQUESTS | 1.07252e+09 |
| L2_RQSTS_MISS         | 3.25413e+08 |
+-----------------------+-------------+
+-----------------+-----------+
|     Metric      |  core 2   |
+-----------------+-----------+
| Runtime [s]     | 2.15481   |
| CPI             | 0.867293  |
| L2 request rate | 0.18028   |
| L2 miss rate    | 0.0546988 |
| L2 miss ratio   | 0.303409  |
+-----------------+-----------+

java -Xmx4g TestMemoryAccessPatterns 2
+-----------------------+-------------+
|         Event         |   core 2    |
+-----------------------+-------------+
| INSTR_RETIRED_ANY     | 1.48772e+10 |
| CPU_CLK_UNHALTED_CORE | 1.64712e+10 |
| L2_TRANS_ALL_REQUESTS | 3.41061e+09 |
| L2_RQSTS_MISS         | 1.5547e+09  |
+-----------------------+-------------+
+-----------------+----------+
|     Metric      |  core 2  |
+-----------------+----------+
| Runtime [s]     | 6.87876  |
| CPI             | 1.10714  |
| L2 request rate | 0.22925  |
| L2 miss rate    | 0.104502 |
| L2 miss ratio   | 0.455843 |
+-----------------+----------+

java -Xmx4g TestMemoryAccessPatterns 3
+-----------------------+-------------+
|         Event         |   core 2    |
+-----------------------+-------------+
| INSTR_RETIRED_ANY     | 6.49533e+09 |
| CPU_CLK_UNHALTED_CORE | 4.18416e+10 |
| L2_TRANS_ALL_REQUESTS | 4.67488e+09 |
| L2_RQSTS_MISS         | 1.43442e+09 |
+-----------------------+-------------+
+-----------------+----------+
|     Metric      |  core 2  |
+-----------------+----------+
| Runtime [s]     | 17.474   |
| CPI             | 6.4418   |
| L2 request rate | 0.71973  |
| L2 miss rate    | 0.220838 |
| L2 miss ratio   | 0.306835 |
+-----------------+----------+

Note: The cache-miss rate of the combined L1D and L2 increases significantly as the pattern of access becomes more random.

Translation Lookaside Buffers (TLBs)

Our programs deal with virtual memory addresses that need to be translated to physical memory addresses. Virtual memory systems do this by mapping pages. We need to know the offset for a given page and its size for any memory operation. Typically page sizes are 4KB, and we are gradually moving to 2MB and greater. Linux introduced Transparent Huge Pages in the 2.6.38 kernel, giving us 2MB pages. The translation of virtual memory pages to physical pages is maintained by the page table. This translation can take multiple accesses to the page table, which is a huge performance penalty. To accelerate this lookup, processors have a small hardware cache at each cache level called the TLB cache. A miss on the TLB cache can be hugely expensive because the page table may not be in a nearby data cache. By moving to larger pages, a TLB cache can cover a larger address range for the same number of entries.
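A back-of-envelope calculation shows why larger pages help: TLB reach is simply the number of entries times the page size. The entry count below is an assumption for illustration, not a figure from any particular CPU's data sheet.

```java
// Back-of-envelope TLB reach: entries * page size. With the same number
// of entries, 2MB pages cover 512x more address space than 4KB pages.
public class TlbReach {
    static long reachBytes(final int entries, final long pageSizeBytes) {
        return entries * pageSizeBytes;
    }

    public static void main(String[] args) {
        final int entries = 64;  // assumed dTLB entry count, for illustration
        System.out.println(reachBytes(entries, 4L * 1024));         // prints 262144 (256KB)
        System.out.println(reachBytes(entries, 2L * 1024 * 1024));  // prints 134217728 (128MB)
    }
}
```

With 4KB pages the assumed TLB covers only 256KB of our 2GB array, so a random heap walk misses constantly; with 2MB pages the same TLB covers 128MB, which matches the dTLB results above.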
perf stat -e dTLB-loads,dTLB-load-misses java -Xmx4g TestMemoryAccessPatterns $

Performance counter stats for 'java -Xmx4g TestMemoryAccessPatterns 1':
  1,496,128,634 dTLB-loads
        310,901 dTLB-misses   #  0.02% of all dTLB cache hits

Performance counter stats for 'java -Xmx4g TestMemoryAccessPatterns 2':
  1,551,585,263 dTLB-loads
        340,230 dTLB-misses   #  0.02% of all dTLB cache hits

Performance counter stats for 'java -Xmx4g TestMemoryAccessPatterns 3':
  4,031,344,537 dTLB-loads
  1,345,807,418 dTLB-misses   # 33.38% of all dTLB cache hits

Note: Even with huge pages employed, we only incur significant TLB misses when randomly walking the whole heap.

Hardware Pre-Fetchers

Hardware will try and predict the next memory access our programs will make and speculatively load that memory into fill buffers. This is done at its simplest level by pre-loading adjacent cache-lines for the spatial bet, or by recognising regular stride-based access patterns, typically less than 2KB in stride length. In the tests below we measure the number of loads that hit a fill buffer from a hardware pre-fetch.

likwid-perfctr -C 2 -t intel -g LOAD_HIT_PRE_HW_PF:PMC0 java -Xmx4g TestMemoryAccessPatterns $

java -Xmx4g TestMemoryAccessPatterns 1
+--------------------+-------------+
|       Event        |   core 2    |
+--------------------+-------------+
| LOAD_HIT_PRE_HW_PF | 1.31613e+09 |
+--------------------+-------------+

java -Xmx4g TestMemoryAccessPatterns 2
+--------------------+--------+
|       Event        | core 2 |
+--------------------+--------+
| LOAD_HIT_PRE_HW_PF | 368930 |
+--------------------+--------+

java -Xmx4g TestMemoryAccessPatterns 3
+--------------------+--------+
|       Event        | core 2 |
+--------------------+--------+
| LOAD_HIT_PRE_HW_PF | 324373 |
+--------------------+--------+

Note: We have a significant success rate for load hits with the pre-fetcher on the linear walk.

Memory Controllers and Row Buffers

Beyond our last-level cache (LLC) sit the memory controllers that manage access to the SDRAM banks.
Memory is organised into rows and columns. To access an address, first the row address must be selected (RAS), then the column address is selected (CAS) within that row to get the word. The row is typically a page in size and is loaded into a row buffer. Even at this stage the hardware is still helping hide the latency. A queue of memory access requests is maintained and re-ordered so that multiple words can be fetched from the same row if possible.

Non-Uniform Memory Access (NUMA)

Systems now have memory controllers on the CPU socket. This move to on-socket memory controllers gave an ~50ns latency reduction over the existing front side bus (FSB) and external Northbridge memory controllers. Systems with multiple sockets employ memory interconnects, QPI from Intel, which are used when one CPU wants to access memory managed by another CPU socket. The presence of these interconnects gives rise to the non-uniform nature of server memory access. In a 2-socket system memory may be local or 1 hop away. On an 8-socket system memory can be up to 3 hops away, where each hop adds 20ns latency in each direction.

What does this mean for algorithms?

The difference between an L1D cache-hit and a full miss resulting in main-memory access is 2 orders of magnitude; i.e. <1ns vs. 65-100ns. If algorithms randomly walk around our ever-increasing address spaces, then we are less likely to benefit from the hardware support that hides this latency.

Is there anything we can do about this when designing algorithms and data structures? Yes, there is a lot we can do. If we perform chunks of work on data that is co-located, and we stride around memory in a predictable fashion, then our algorithms can be many times faster. For example, rather than using bucket-and-chain hash tables, like in the JDK, we can employ hash tables using open addressing with linear probing. Rather than using linked lists or trees with single items in each node, we can store an array of many items in each node.
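As a sketch of the open-addressing idea (my own minimal illustration, not the JDK's or any production implementation; it deliberately omits resizing and removal), the map below keeps long keys and values in flat arrays, so a linear probe sequence walks adjacent memory and plays to the spatial bet:

```java
// Minimal open-addressing, linear-probing map from long keys to long
// values. Keys and values live in flat arrays, so probing on collision
// touches adjacent slots rather than chasing pointers through the heap.
// Illustration only: fixed capacity, no resizing, no removal.
public class LongLongOpenHashMap {
    private static final int CAPACITY = 1 << 16;   // power of two, assumed large enough
    private static final int MASK = CAPACITY - 1;

    private final long[] keys = new long[CAPACITY];
    private final boolean[] used = new boolean[CAPACITY];
    private final long[] values = new long[CAPACITY];

    public void put(final long key, final long value) {
        int i = hash(key);
        while (used[i] && keys[i] != key) {
            i = (i + 1) & MASK;                    // linear probe: next adjacent slot
        }
        used[i] = true;
        keys[i] = key;
        values[i] = value;
    }

    public long get(final long key, final long missingValue) {
        int i = hash(key);
        while (used[i]) {
            if (keys[i] == key) {
                return values[i];
            }
            i = (i + 1) & MASK;
        }
        return missingValue;
    }

    private static int hash(final long key) {
        final long h = key * 0x9E3779B97F4A7C15L;  // Fibonacci-style mixing constant
        return (int)(h >>> 48) & MASK;
    }

    public static void main(String[] args) {
        final LongLongOpenHashMap map = new LongLongOpenHashMap();
        map.put(42L, 7L);
        System.out.println(map.get(42L, -1L));     // prints 7
    }
}
```

Compared with a bucket-and-chain table, a miss here costs a short walk over contiguous slots that likely share a cache-line, instead of a pointer chase to wherever the next chain node happens to live.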
Research is advancing on algorithmic approaches that work in harmony with cache sub-systems. One area I find fascinating is Cache Oblivious Algorithms. The name is a bit misleading, but there are some great concepts here for how to improve software performance and better execute in parallel. This article is a great illustration of the performance benefits that can be gained.

Conclusion

To achieve great performance it is important to have sympathy for the cache sub-systems. We have seen in this article what can be achieved by accessing memory in patterns which work with, rather than against, these caches. When designing algorithms and data structures, it is now much more important to consider cache-misses, probably even more so than counting steps in the algorithm. This is not what we were taught in algorithm theory when studying computer science. The last decade has seen some fundamental changes in technology. For me the two most significant are the rise of multi-core, and now big-memory systems with 64-bit address spaces.

One thing is certain: if we want software to execute faster and scale better, we need to make better use of the many cores in our CPUs, and pay attention to memory access patterns.

Reference: Memory Access Patterns Are Important from our JCG partner Martin Thompson at the Mechanical Sympathy blog....

Top 5 SOA gotchas and how to avoid them

After 5 years of designing and building award-winning service-oriented architectures, I thought I’d share my top 5 SOA gotchas and some general hints on how you can avoid them in your SOA programme.

1. Failure to recognise that service-orientation is about design (and not about technology).

Service-orientation is to web services what object orientation is to Java, C# and C++. Service-orientation is a design paradigm, not a specific technology. It is achieved by applying the principles of service-oriented design during the service design process. A service-oriented architecture (SOA) is a suite of well-designed and reusable services that follow these design principles. A service-oriented architecture can’t be built by simply using the technologies associated with web services (such as SOAP or REST).

Still confused? Consider this real-world example of design vs technology taken from the construction industry. Even if concrete (the technology) is used to construct a new office building, this doesn’t automatically mean that the building exhibits the design features of Modernist buildings (the architectural style). Concrete can be used equally well to realise any architectural style, from Classical and Gothic to Modernist and International. Modernism is an architectural style, whereas concrete is simply one of a number of technologies that can be used to realise it.

So just because you have SOAP (or REST) web services within your technical architecture, this doesn’t mean that your architecture is automatically service-oriented. It’s possible to create web services that are not service-oriented, just like it’s possible to write Java or C# code that isn’t object-oriented.

2. Failing to align SOA with the business.

SOA is much more powerful if the services that you deliver have recognisable business alignment and reuse potential.
By delivering services that mirror business activities, it becomes easier to evolve and re-configure those activities when the business changes. When talking to clients, I often describe SOA services as an ‘Organisation API’. Services should reflect the capabilities of the organisation, not the technologies within it. The simplest way I know to enable business alignment is to bring architects and analysts together into one group with a shared vision and shared working practices (such as utilising BPMN for both business and service analysis).

3. Failure to share SOA ownership with the business.

There is little point creating flexible, malleable and evolvable technical services if the business is not committed to leveraging this capability on an ongoing basis. Equally, services designed to be readily reusable are pointless if the business can’t discover, interpret and reuse this API to exploit new opportunities. The business should therefore share the responsibility for designing and managing its technical services. In addition, the process of finding and reusing services should be straightforward and accessible, not shrouded in mystery and technical complexity. Basic SOA governance processes and simple service repositories can help to overcome these issues, and can be as simple or as complex as required to fit your organisation’s culture.

4. Investing in the wrong tools and technologies.

There are a great many expensive tools and technologies available for SOA, so it’s easy to make big mistakes from a very early stage in most SOA programmes. To keep it simple, here are my top 2 technologies to take extra care with.

Business Process Management (BPM). BPM systems are often sold as a mechanism to enable service reuse via continuous business re-engineering, but beware of getting sucked in by salesmen’s waffle. BPM systems can be complex and are not always the answer.
Chucking a BPM system into your shopping cart is unlikely to make a difference unless you have the required ‘culture of change’ within your organisation. I’ve seen people waste millions on BPM systems that only get used once because of poor cultural fit and ingrained application silos. When evaluating BPM, always ask yourself two simple questions: Do we need it? Will we use it? If the answer to both is a strong ‘yes’, then by all means go ahead.

Enterprise Service Bus (ESB). ESB systems are often sold as ‘instant service enablers’ or ‘SOA out of the box’, but the smart IT manager should be asking “what kind of services would be exposed to service consumers if I did this?” Would these instant ESB services be well designed and business aligned, or would they just expose existing legacy or proprietary application APIs using new protocols? Would these new ESB services be inherently interoperable and reusable, or would I be exposing proprietary data models to service clients? Take great care with technology selection and make sure you fully understand the pros and cons before you sign on the dotted line.

5. Failing to create a cohesive architectural strategy.

Mixing and matching architectural strategies is rarely a winning formula. Different technical strategies have different technical outcomes, and these outcomes will often conflict with each other. For example, stating that the corporate strategy is service-orientation whilst also stating that you’re standardising on one vendor’s integrated applications suite will certainly bring technical conflict. How can you create a vendor-neutral SOA in a vendor-mandated environment? Which takes precedence in terms of allocating budget? Which best reflects the way you do business? Which provides the best flexibility and best differentiates you from your competitors? In my book it’s better to choose one strategy and one set of goals and benefits and then stick with them.
Keep it simple and make it clear.

Avoiding SOA mistakes is easy: use (or create) capable service technologists. SOA is powerful stuff, but it’s a big and highly specialised topic. That’s not to say it can’t be simple; it’s just that there’s a lot of conflicting advice out there, and it’s usually a mistake to think that you can simply move from EAI or OO straight into SOA in one step without having specialists who can help you to correctly design and build your SOA. I always advise that IT managers seek the advice of an independent SOA consultant from a very early stage in any new SOA programme, ideally someone accredited with a relevant qualification from a professional body that delivers vendor-neutral SOA training and certification. Taking this kind of proactive approach can save you millions in avoidable expenses and protect your whole change programme from many of the inherent pitfalls. Professional advice can help guarantee a decent return on the investment you’re making, and will also help ensure that you deliver the strategic benefits that you’re after.

That’s my top 5, but what about yours?

Reference: Top 5: SOA gotcha’s and how to avoid them. from our JCG partner Ben Wilcock at the SOA, BPM, Agile & Java blog....

Oracle Public Cloud Java Service with NetBeans in Early Access

Who expected that to happen: Oracle is working on a public cloud offering, and the signs of the approaching official launch are there. Nearly a year after the official announcement I was invited to join the so-called “Early Access” program to test-drive the new service and give feedback. Thanks to Reza Shafii, the product manager in charge, I have permission to dish the dirt a bit. Even if I am not allowed to show you screenshots from the UI, there is plenty to talk about. Today I am going to give you a first test-drive of the developer experience with NetBeans.

Preparations

As usual there are some preparations to do. Get yourself a copy of the latest NetBeans 7.2 RC1 Java EE edition. This is the publicly available IDE which has Oracle Cloud support. It was dropped from the 7.2 final because … yeah … the OPC isn’t public and nobody wanted to see unusable features in a final release. So the first secret seems to be lifted here: when OPC launches we will see a 7.3 release popping up (concluded from this test specification). Another useful preparation is to download and install the corresponding WebLogic 10.3.6 for local development. And that is the second surprise so far: Oracle Public Cloud Java Service will be a Java EE 5 service, at least for the GA. It absolutely doesn’t make any sense to stay at this version, so it is safe to say that WebLogic 12c, which has Java EE 6 support, will follow sometime after. All set. Fire up NetBeans.

Create your Java EE Application

All you have to do now is to create a new Java EE web application with NetBeans. Give it a name (I’m calling it MyCloud) and add a new local WebLogic 10 server in the “Add…” server dialogue. Don’t forget to choose Java EE 5 as the EE version. Let’s add JSF 2.0 and PrimeFaces 3.2 on the Frameworks tab. Click “Finish”. If NetBeans complains about missing server libraries, let it deploy them. That’s it for now. Right-click your app and run it.
This will fire up your local WebLogic domain and point your browser to http://localhost:7001/MyCloud/ or whatever your app is called. As you can see, the PrimeFaces components are also working. Not spectacular.

Add Cloud…

Next you have to add some cloud. Switch to the Services tab, right-click on the Cloud node and select “Add Cloud…”. Choose “Oracle Cloud” and click Next. You will have to fill in a few pieces of information here:

Identity Domain. The individual or group identity of your Oracle Cloud account.
Java Service Name. The name of the Java Service.
Database Service Name. The name of the Database Service.
Administrator. Your identity as Oracle Cloud administrator.
Password. Your Oracle Cloud administrator password.
SDK. Path to your local copy of the Oracle Cloud SDK. Click Configure to browse for this file.

Lucky you, you don’t have to care about the details here. You get hold of this information after the successful account creation, and it is pretty straightforward to figure out what is meant once you finally get access to the cloud. Some more words about the identity domain: when setting up Oracle Cloud services, a service name and an identity domain must be given to each service. An identity domain is a collection of users and roles that have been given specific privileges to use certain services or manage certain services in the domain. So it basically is a kind of secure storage.

Click “Finish” if everything is filled out correctly. NetBeans verifies the information you provided against the OPC, and you now have the Oracle Cloud in it. Additionally you find a new server, “Oracle Cloud Remote”, which is the server hook you have to specify in your project’s run configuration. Go there, switch it from the local “Oracle WebLogic Server” to “Oracle Cloud Remote” and hit OK. Now you are all set for the cloud deployment.

Run in the Cloud…

Right-click and “Run” your project. You will see a lot of stuff happen.
First NetBeans does the normal build, and afterwards it starts the distribution. This uploads the bundle (MyCloud.war) to the cloud, where it gets scanned for viruses and needs to pass the whitelist scan (more on this later). If both succeed, the deployment happens and your application is opened in your system's default browser. That was a typical development round-trip with the Oracle Public Cloud Java Service: develop and test locally, deploy and run in the cloud.

Some more NetBeans goodies

But what is the "Oracle Cloud" entry in the Cloud services good for? For now this is very simple: you can use it to access your deployment jobs and the corresponding log files. Every deployment gets a unique number and you see the deployment's status. Together with the log excerpts you are able to track things down further. Let's try some more. Add a servlet named "Test" and try to use some malicious code ;) System.exit(0); The first indication that something is wrong here is the dashed code hint. Completing it pops up a little yellow exclamation mark. Let's verify the project: right-click on it and select "Verify". That runs the Whitelist Tool, which outputs a detailed error report about the whitelist validations.

ERROR - Path:D:\MyCloud\dist\MyCloud.war (1 Error)
ERROR - Class:net.eisele.opc.servlet.Test (1 Error)
ERROR - 1:Method exit not allowed from java.lang.System.(Line No:41 Method Name:java.lang.System->exit(int))
ERROR - D:\MyCloud\dist\MyCloud.war Failed with 1 error(s)

It is disappointing, but there are limitations (aka the whitelist) in place which prevent you from using every single piece of Java functionality you know. For the moment I am not going to drill this down further. All early access members had something to say about the restrictions, and Oracle listened carefully. A lot of things are moving here, and it simply is too early to make any statements about the final whitelist. A lot of third-party libraries (e.g. PrimeFaces) have been tested and run smoothly.
Those aren't affected by the whitelist at all.

Bottom Line

That is all for today. I am not going to show you anything else of the OPC, and I know that you can't test-drive the service on your own: you need to have the Java Cloud SDK in place, which isn't publicly available today. But it will be, and there will be a chance to test-drive the cloud for free. A trial. And I am looking forward to showing you some more of the stuff that is possible, as soon as it becomes available. As of today you can register for access and get notified when the service is ready to sign you up! Reference: Oracle Public Cloud Java Service with NetBeans in Early Access from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog....

Java 8: Testing the Lambda Water

Java 8 is about a year away and comes with a language feature I really look forward to: lambda expressions. Sadly the other big feature, modules for the Java platform, has been delayed to Java 9. But nevertheless, bringing lambda expressions (or closures, if you like) into the language will make programming in Java much better. So nearly one year to go – but as Java is developed open source now, we can have a look and try to use it right now. So let's go! Download and Install Lambda-enabled Java 8 First, I expected that I would have to compile Java 8 myself, as it has not yet been released. But I was pleasantly surprised to see that there are binary builds available for all platforms at http://jdk8.java.net/lambda/. So I just downloaded the latest developer preview build and installed it on my computer. To make sure it works, I created a LambdaIntro class containing a "Hello, World!", compiled and executed it:

~ $ export JAVA_HOME=~/Devtools/Java/jdk1.8.0/
~ $ cd spikes/lambda-water
~ $ $JAVA_HOME/bin/javac src/net/jthoenes/blog/spike/lambda/LambdaIntro.java
~ $ $JAVA_HOME/bin/java -cp src net.jthoenes.blog.spike.lambda.LambdaIntro
Hello from Java 8!

Note: I use the command line to compile and execute here, because IDEs do not support Java 8 as of now. The Non-Lambda Way As an example, let's assume I want to loop through a list of objects, but for my business logic I need both the value and the index of the list item. If I want to do it with current Java, I have to handle the index together with the actual logic:

List<String> list = Arrays.asList("A", "B", "C");
for (int index = 0; index < list.size(); index++) {
    String value = list.get(index);
    String output = String.format("%d -> %s", index, value);
    System.out.println(output);
}

This will output

0 -> A
1 -> B
2 -> C

This is not too bad, but I did two things in the same few lines of code: controlling the iteration and providing some (very simple) business logic.
Lambda expressions can help me to separate those two. The eachWithIndex method signature So I want to have a method eachWithIndex which can be called like this:

List<String> list = Arrays.asList("A", "B", "C");
eachWithIndex(list, (value, index) -> {
    String output = String.format("%d -> %s", index, value);
    System.out.println(output);
});

The method receives two parameters. The first one is the list and the second one is a lambda expression or closure which instructs the method what to do with each list item. As you can see, the lambda expression receives two arguments: the current value and the current index. These arguments do not have a type declaration; the type information will be inferred by the Java 8 compiler. After the arguments, there is a -> and a block of code which should be executed for every list item. Note: You will have to write this method in a normal text editor or ignore the error messages inside your IDE. Implement the eachWithIndex method To use a lambda in Java 8, you need to declare a functional interface. A functional interface is an interface which has exactly one method – the method which will be implemented by the lambda expression. In this case, I need to declare a method which receives an item and an index and returns nothing. So I define the following interface:

public static interface ItemWithIndexVisitor<E> {
    public void visit(E item, int index);
}

With this interface I can now implement the eachWithIndex method.

public static <E> void eachWithIndex(List<E> list, ItemWithIndexVisitor<E> visitor) {
    for (int i = 0; i < list.size(); i++) {
        visitor.visit(list.get(i), i);
    }
}

The method makes use of the generic parameter <E>, so the item passed to the visit method will be inferred to be of the same type as the list. The nice thing about using functional interfaces is that there are a lot of them already out there in Java. Think for example of the java.util.concurrent.Callable interface.
It can be used as a lambda without having to change the code which consumes the Callable. This makes a lot of the JDK and frameworks lambda-enabled by default. Using a method reference One handy little thing coming from Project Lambda is method references. They are a way to re-use existing methods and package them into a functional interface object. So let's say I have the following method

public static <E> void printItem(E value, int index) {
    String output = String.format("%d -> %s", index, value);
    System.out.println(output);
}

and I want to use this method in my eachWithIndex method; then I can use the :: notation inside my method call:

eachWithIndex(list, LambdaIntro::printItem);

Looks nice and concise, doesn't it? Summary This makes my first lambda example run. I couldn't avoid a smile on my face seeing closures running in one of my Java programs after longing for them for so long. Lambda expressions are currently only available as a developer preview build. If you want to find out more, read the current Early Draft Review or go to the Project Lambda project page. I uploaded the full example code to gist. Reference: Java 8: Testing The Lambda Water from our JCG partner Johannes Thoenes at the Johannes Thoenes blog....
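To make the Callable point above concrete, here is a minimal sketch (the class name and the trivial computation are mine, for illustration only): Callable<Integer> has a single call() method, so a lambda can stand in for the old anonymous class while the consuming code, here ExecutorService.submit, stays completely unchanged.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CallableLambda {
    // Callable is a functional interface (one method: call()),
    // so a lambda can implement it directly.
    static final Callable<Integer> answer = () -> 6 * 7;

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        // The consumer is unchanged -- submit still just takes a Callable.
        Future<Integer> result = pool.submit(answer);
        System.out.println(result.get()); // prints 42
        pool.shutdown();
    }
}
```

The same trick works for Runnable, Comparator and most other single-method interfaces already in the JDK.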

Enterprise Architect Program for organization

From kick start to acceptance

First seek to understand. Don't do EA for EA's sake. Really seek to understand what clients want. Do they need an EA program to solve something critical to the business? That might mean much more than having meetings and workshops. Use your experience as a seasoned professional to understand the enterprise, the business strategy, the actual stakeholders and the pain points of the business. Form a very clear idea of what needs to be addressed, both in the long term (no more than 3 years) and in the short term (no less than 6 months). Put down the "problem statement" in as simple a way as possible, and no simpler. Get the key stakeholders to write it down and/or be a part of the process where it is articulated and circulated. It will make it much easier for you to get them to endorse the problem – and lend their support to the EA program, should they choose to have one to solve the problem. Limit the scope of the "problem statement" by clearly calling out what will be delivered and what will not be delivered. Once you have got the "problem" nailed down – please take that statement with a pinch of salt, for the problem statement is never really nailed down, and any EA would be naive not to seek periodic reaffirmation of it from the key stakeholders – start by working backwards. What information do you need to solve the problem? When do you need it by? Who will provide it? Who controls the information? Are the providers and controllers of information bought in to the success of the EA program? Does the EA program work against the vested interest of any of these key enablers? How do all these pieces of information link with each other? Which information is input to which decision, which in turn leads to which further information? This interrelation of inputs leading to the answer to a specific question is a framework. Pick a framework. Generalized frameworks include Zachman, TOGAF etc.
The generalized framework – if you choose to pick one – is only the starting point. Don't succumb to the pressure of having to start with a solution. A solution – enterprise-specific and unique to each enterprise – will emerge in due course if you start with a framework and let it evolve. EA solutions will be expected to fit the enterprise, and it is not enough for them to be good for a single project. It is better to have a solution that is a good fit for the enterprise and might not fit every project like a glove. The trick is to come up with a framework that works for the enterprise and has extension points where the individual projects can put in their specific extensions, should they need to. Fetch data to feed into the framework – and hope it will lead to an answer. If you have worked hands-on with technology until now, this is perhaps the most difficult mental blocker / challenge that you will have to face. For the first time in your career, perhaps, the success of your job does not depend on being able to find the right technology / API / tool / software. There is no right answer; at least there is no way to prove immediately that the answer you arrive at is right. Once you have defined the "problem", scoped it and closed on a "framework", it is all about getting correct, sufficient and relevant information into the framework. The solution might come from the management, from the production line, from the domain etc. So plan that out. List down what you need (information, statistics, reports …), from where you need it (stakeholders, production line, chief executives, …) and how you want to collect it (interviews, workshops, online surveys …). Determine the resources required to collect this – human, tools, archiving etc. Identify dependencies and key milestones. Put it all down in a plan. Go. Execute the plan. In an ideal world the EA's output might have been 100% correct.
In the practical world – with moving business scenarios, transient business priorities and inherent issues around the correctness of data – expect errors, and educate key stakeholders to expect errors as well. This is another mental blocker / challenge that folks from a software technology background will face. Those who are used to seeing their solutions compile, execute and solve the problem – and to being able to prove that the problem is solved – will find themselves grappling with the concept that what they are striving for is a "roughly right", non-demonstrable solution. In fact, striving for a demonstrable solution is a sure-fire recipe for failure in all but the smallest of enterprises. The output is intended to set a broad direction; it can never be proven correct until there are implementations which actually take that broad direction forward. Know where to stop. Whatever you do as an EA has to be reusable. Whatever you deliver as an EA has to be applicable to multiple projects (at least), if not the whole of the organization. Whenever you are assigned / pick up something that is not reusable / not applicable to multiple projects, stop. Don't do it. Hand it over to the project teams. Let them have some fun as well. Know what to avoid. Avoid taking decisions. I don't mean to be funny; I seriously mean it. The matters that EA practices "solve" are matters of business, projects, finance etc. There are people whose jobs are on the line to execute a solution, e.g. the CEO, Project Manager, CFO etc. Your role is advisory. You advise people on "enterprise problems" with multiple suggestions, one recommendation and the raw data, should they choose to delve into it. But that's where you should draw the line. The technology wars – Java vs. .NET, Agile vs. Waterfall – are simply not for you to fight. Put a framework in place. Get an agreement on the framework from the key stakeholders. Get data – push it through the framework – arrive at a suggestion and hand it over.
Just to prove the point made above, let's take an imaginary scenario: you might have arrived at a suggestion in favor of .NET (in a Java vs. .NET decision-making process) and handed it over to the Delivery Manager – and he might ultimately go and choose Java because resources have suddenly become easily available after some Java-based software house has gone bust. It is not a technology-based decision; it is an incident in the market that nobody foresaw. Let the Delivery Manager – whose job is on the line to deliver the software product on time and within budget – take the decision that is best for his deliverable. Let him prove to the CEO that it is best for the business in the long run. Don't be caught standing in the way of business success because it violates some EA finding. Know when and how to say "No". Any scope, by definition, has a tendency to explode if unchecked. However, while the scope of the work of a developer can increase only so much, the scope of an EA program can take on colossal proportions unless very actively and routinely controlled. Most of the activities assigned to EA will be multi-year efforts, if not more. However, the business appetite to fund a program can start straining as soon as the business hits a bad quarter. It is vitally important to adopt an iterative approach (half-yearly cycles) to achieve a continuous, measurable velocity on any task. "Out of scope for the current iteration", "Not approved for the current iteration" and "Will add that to the to-discuss list for a future iteration" are ways of saying no that should not ruffle too many feathers in the corporate world. Practice these phrases as you would practice any other tool in your kit. Get down from the "ivory tower" and make friends. Given that the output of EA activities tends to be restrictive and prescriptive to the organization, it is likely for EAs to be seen as working in an "ivory tower". This is bad on two accounts.
Firstly, it means that it is more difficult to evangelize EA activities and sell the outputs. Secondly, it means that the EAs have not got enough information specific to the organization, and hence the output is less likely to be tailored for that specific organization. Get down to the working floor and welcome / invite / allow / reward the bright chaps to contribute to the EA activities. This will make your job easier and win you that many more friends – both factors crucial for the success of an EA program. Work in progress … Reference: Enterprise Architect Program for organization from our JCG partner Partho at the Tech for Enterprise blog....

Fixing Bugs – there’s no substitute for experience

We've all heard that the only way to get good at fixing bugs is through experience – the school of hard knocks. Experienced programmers aren't afraid, because they've worked on hard problems before, and they know what to try when they run into another one – what's worked for them in the past, what hasn't, what they've seen other programmers try, what they learned from them. They've built up their own list of bug patterns and debugging patterns, and checklists and tools and techniques to follow. They know when to try a quick-and-dirty approach and use their gut, and when to be methodical, patient and scientific. They understand how to do binary slicing to reduce the size of the problem set. They know how to read traces and dump files. And they know the language and tools that they are working with. It takes time and experience to know where to start looking and how to narrow in on a problem; what information is useful, what isn't, and how to tell the difference. And how to do all of this fast. We're back to knowing where to tap the hammer again. But how much of a difference does experience really make? Steve McConnell's Code Complete is about programmer productivity: what makes some programmers better than others, and what all programmers can do to get better. His research shows that there can be as much as a 10x difference in the quality, amount and speed of work that top programmers can do compared to programmers who don't know what they are doing. Debugging is one of the areas that really shows this difference, that separates the men from the boys and the women from the girls. Studies have found a 20-to-1 or even 25-to-1 difference in the time it takes experienced programmers to find the same set of defects found by inexperienced programmers. That's not all: the best programmers also find significantly more defects and make far fewer mistakes when putting in fixes. What's more important: experience or good tools?
In Applied Software Measurement, Capers Jones looks at 4 different factors that affect the productivity of programmers finding and fixing bugs:

Experience in debugging and maintenance
How good – or bad – the code structure is
The language and platform
Whether the programmers have good code management and debugging tools – and know how to use them.

Jones measures the debugging and bug-fixing ability of a programmer by measuring assignment scope – the average amount of code that one programmer can maintain in a year. He says that the average programmer can maintain somewhere around 1,000 function points per year – about 50,000 lines of Java code. Let's look at some of this data to understand how much of a difference experience makes in fixing bugs (figures are function points per year).

Inexperienced staff, poor structure, high-level language, no maintenance tools:
Worst: 150 | Average: 300 | Best: 500

Experienced staff, poor structure, high-level language, no maintenance tools:
Worst: 1150 | Average: 1850 | Best: 2800

This data shows a roughly 20:1 difference between experienced programmers and inexperienced programmers on teams working with badly structured code and without good maintenance tools. Now let's look at the difference good tools can make:

Inexperienced staff, poor structure, high-level language, good tools:
Worst: 900 | Average: 1400 | Best: 2100

Experienced staff, poor structure, high-level language, good tools:
Worst: 2100 | Average: 2800 | Best: 4500

Using good tools for code navigation and refactoring, reverse engineering, profiling and debugging can help to level the playing field between novice programmers and experts. You'd have to be an idiot to ignore your tools (debuggers are for losers? Seriously?). But even with today's good tools, an experienced programmer will still win out – 2x more efficient on average, 5x from best to worst case. The difference can be effectively infinite in some cases. There are some bugs that an inexperienced programmer can't solve at all – they have no idea where to look or what to do.
They just don’t understand the language or the platform or the code or the problem well enough to be of any use. And they are more likely to make things worse by introducing new bugs trying to fix something than they are to fix the bug in the first place. There’s no point in even asking them to try. You can learn a lot about debugging from a good book like Debug It! or Code Complete. But when it comes to fixing bugs, there’s no substitute for experience. Reference: Fixing Bugs – there’s no substitute for experience from our JCG partner Jim Bird at the Building Real Software blog....
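To make Jones's assignment-scope numbers concrete, here is a rough back-of-the-envelope sketch. The conversion of roughly 50 lines of Java per function point follows from the article's own "1,000 function points ≈ 50,000 lines" figure; the 500 KLOC system size and the helper method are made up for illustration:

```java
public class AssignmentScope {
    // Rough conversion implied above: ~50 lines of Java per function point.
    static final int LOC_PER_FUNCTION_POINT = 50;

    // Programmers needed to maintain a codebase, given one programmer's
    // assignment scope in function points per year (rounded up).
    static int maintainersNeeded(int codebaseLoc, int scopeFpPerYear) {
        int locPerProgrammer = scopeFpPerYear * LOC_PER_FUNCTION_POINT;
        return (codebaseLoc + locPerProgrammer - 1) / locPerProgrammer;
    }

    public static void main(String[] args) {
        int systemLoc = 500_000; // hypothetical 500 KLOC system
        // Average inexperienced maintainer, no tools: ~300 FP/year
        System.out.println(maintainersNeeded(systemLoc, 300));  // 34 people
        // Average experienced maintainer, good tools: ~2800 FP/year
        System.out.println(maintainersNeeded(systemLoc, 2800)); // 4 people
    }
}
```

The same codebase needs roughly eight times the maintenance staff at the inexperienced-without-tools rate, which is what makes the productivity spread an economic argument, not just a craft one.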

25 things you’ve said in your career as a software engineer. Admit it!

This article is inspired by an older blog post. I've updated it to reflect modern languages and technologies.

"It works fine on MY computer. Come and see it in action if you don't believe me."
"Who did you login as? Are you an administrator?"
"It's not a bug, it's a feature."
"That's weird…"
"It's never done that before."
"It worked yesterday."
"How is that possible?"
"Have you checked your network connection / settings?" (especially when the application is too sloooow)
"You must have entered wrong data and crashed it."
"There is something funky in your data."
"I haven't touched that part of the code for weeks!"
"You must have the wrong library version."
"It's just some unlucky coincidence, so don't bother."
"I can't unit test everything!"
"It's not my fault. It must be that open source library I've used."
"It works, but I didn't write any unit tests."
"Somebody must have changed my code."
"Did you check for a virus on your system?"
"Even though it doesn't work, how does it feel?"
"You can't use that version on your operating system."
"Why do you want to do it that way?"
"Where were you when the program blew up?"
"I'm pretty sure I've already fixed that."
"Have you restarted your Application Server / DB Server / Machine after upgrading?"
"Which version of the JRE / JDK / JVM have you installed?"

Feel free to add your own. I'm sure there are plenty!! Reference: 25 things you've said in your career as a software engineer. Admit it! from our JCG partner Papapetrou P. Patroklos at the Only Software matters blog....

Build Flow Jenkins Plugin

Most of us are using Jenkins/Hudson to implement Continuous Integration/Delivery, and we manage job orchestration by combining Jenkins plugins like build-pipeline, parameterized-build, join or downstream-ext. We have to configure all of them, which pollutes the configuration across multiple jobs and makes the system configuration very complex to maintain. Build Flow enables us to define an upper-level flow item to manage job orchestration and link-up rules, using a dedicated DSL. Let's see a very simple example. The first step is installing the plugin: go to Jenkins -> Manage Jenkins -> Plugin Manager -> Available and search for the CloudBees Build Flow plugin. Then you can go to Jenkins -> New Job and you will see a new kind of job called Build Flow. In this example we are going to name it build-all-yy. And now you only have to program, using the flow DSL, how this job should orchestrate the other jobs. In the 'Define build flow using flow DSL' text input you can specify the sequence of commands to execute. In the current example I have already created two jobs, one executing the clean compile goal (job name yy-compile) and the other one executing the javadoc goal (job name yy-javadoc). I know that this deployment pipeline is not realistic for a true environment, but for now it is enough. We want the javadoc job to run after the project is compiled. To configure this we don't have to create any upstream or downstream actions; simply add the next lines to the DSL text area: build('yy-compile'); build('yy-javadoc'); Save and execute the build-all-yy job, and both projects will be built sequentially. Now suppose that we add a third job called yy-sonar which runs the sonar goal to generate a code quality report. In this case it seems obvious that after the project is compiled, the javadoc generation and code quality jobs can run in parallel.
So the script is changed to:

build('yy-compile')
parallel (
  {build('yy-javadoc')},
  {build('yy-sonar')}
)

This plugin also supports more operations like retry (similar behaviour to the retry-failed-job plugin) or guard-rescue, which works mostly like a try+finally block. You can also create parameterized builds, access the build execution or print to the Jenkins console. The next example will print the build number of the yy-compile job execution:

b = build('yy-compile')
out.println b.build.number

And finally you can also get a quick graphical overview of the execution in the Status section. It is true that it could be improved, but for now it is acceptable and can be used without any problem. The Build Flow plugin is in its early stages; in fact it is only at version 0.4. But it will be a plugin to consider in the future, and I think it is good to know that it exists. Moreover, it is being developed by the CloudBees folks, so it should be fully supported by Jenkins. Reference: Build Flow Jenkins Plugin from our JCG partner Alex Soto at the One Jar To Rule Them All blog....

Java Executor Service Types

The ExecutorService feature arrived with Java 5. It extends the Executor interface and provides a thread pool for executing short asynchronous tasks. Below are three common ways to obtain an ExecutorService via the Executors factory class.

ExecutorService execService = Executors.newCachedThreadPool();

This approach creates a thread pool that creates new threads as needed, but will reuse previously constructed threads when they are available. These pools will typically improve the performance of programs that execute many short-lived asynchronous tasks. If no existing thread is available, a new thread will be created and added to the pool. Threads that have not been used for 60 seconds are terminated and removed from the cache.

ExecutorService execService = Executors.newFixedThreadPool(10);

This approach creates a thread pool that reuses a fixed number of threads. The given number of threads will be active at runtime. If additional tasks are submitted when all threads are active, they will wait in the queue until a thread is available.

ExecutorService execService = Executors.newSingleThreadExecutor();

This approach creates an Executor that uses a single worker thread operating off an unbounded queue. Tasks are guaranteed to execute sequentially, and no more than one task will be active at any given time.

Methods of the ExecutorService:
execute(Runnable) : Executes the given command at some time in the future.
submit(Runnable) : Returns a Future object which represents the submitted task. The Future's get() method returns null once the task has finished correctly.
shutdown() : Initiates an orderly shutdown in which previously submitted tasks are executed, but no new tasks will be accepted. Invocation has no additional effect if already shut down.
shutdownNow() : Attempts to stop all actively executing tasks, halts the processing of waiting tasks, and returns a list of the tasks that were awaiting execution.
There are no guarantees beyond best-effort attempts to stop the processing of actively executing tasks. For example, typical implementations will cancel via Thread.interrupt, so any task that fails to respond to interrupts may never terminate. A sample application is below:

STEP 1 : CREATE MAVEN PROJECT

A maven project is created as below. (It can be created by using Maven or an IDE plug-in.)

STEP 2 : CREATE A NEW TASK

A new task is created by implementing the Runnable interface as below. The TestTask class specifies the business logic which will be executed.

package com.otv.task;

import org.apache.log4j.Logger;

/**
 * @author onlinetechvision.com
 * @since 24 Sept 2011
 * @version 1.0.0
 */
public class TestTask implements Runnable {

    private static Logger log = Logger.getLogger(TestTask.class);
    private String taskName;

    public TestTask(String taskName) {
        this.taskName = taskName;
    }

    public void run() {
        try {
            log.debug(this.taskName + " is sleeping...");
            Thread.sleep(3000);
            log.debug(this.taskName + " is running...");
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    public String getTaskName() {
        return taskName;
    }

    public void setTaskName(String taskName) {
        this.taskName = taskName;
    }
}

STEP 3 : CREATE TestExecutorService by using newCachedThreadPool

TestExecutorService is created by using the method newCachedThreadPool. In this case, threads are created on demand, so the thread count is determined at runtime.
package com.otv;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import com.otv.task.TestTask;

/**
 * @author onlinetechvision.com
 * @since 24 Sept 2011
 * @version 1.0.0
 */
public class TestExecutorService {

    public static void main(String[] args) {
        ExecutorService execService = Executors.newCachedThreadPool();
        execService.execute(new TestTask("FirstTestTask"));
        execService.execute(new TestTask("SecondTestTask"));
        execService.execute(new TestTask("ThirdTestTask"));

        execService.shutdown();
    }
}

When TestExecutorService is run, the output will be seen as below:

24.09.2011 17:30:47 DEBUG (TestTask.java:21) - SecondTestTask is sleeping...
24.09.2011 17:30:47 DEBUG (TestTask.java:21) - ThirdTestTask is sleeping...
24.09.2011 17:30:47 DEBUG (TestTask.java:21) - FirstTestTask is sleeping...
24.09.2011 17:30:50 DEBUG (TestTask.java:23) - ThirdTestTask is running...
24.09.2011 17:30:50 DEBUG (TestTask.java:23) - FirstTestTask is running...
24.09.2011 17:30:50 DEBUG (TestTask.java:23) - SecondTestTask is running...

STEP 4 : CREATE TestExecutorService by using newFixedThreadPool

TestExecutorService is created by using the method newFixedThreadPool. In this case, the thread count is fixed (two in this example).

package com.otv;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import com.otv.task.TestTask;

/**
 * @author onlinetechvision.com
 * @since 24 Sept 2011
 * @version 1.0.0
 */
public class TestExecutorService {

    public static void main(String[] args) {
        ExecutorService execService = Executors.newFixedThreadPool(2);
        execService.execute(new TestTask("FirstTestTask"));
        execService.execute(new TestTask("SecondTestTask"));
        execService.execute(new TestTask("ThirdTestTask"));

        execService.shutdown();
    }
}

When TestExecutorService is run, ThirdTestTask is executed after FirstTestTask's and SecondTestTask's executions are completed.
The output will be seen as below:

24.09.2011 17:33:38 DEBUG (TestTask.java:21) - FirstTestTask is sleeping...
24.09.2011 17:33:38 DEBUG (TestTask.java:21) - SecondTestTask is sleeping...
24.09.2011 17:33:41 DEBUG (TestTask.java:23) - FirstTestTask is running...
24.09.2011 17:33:41 DEBUG (TestTask.java:23) - SecondTestTask is running...
24.09.2011 17:33:41 DEBUG (TestTask.java:21) - ThirdTestTask is sleeping...
24.09.2011 17:33:44 DEBUG (TestTask.java:23) - ThirdTestTask is running...

STEP 5 : CREATE TestExecutorService by using newSingleThreadExecutor

TestExecutorService is created by using the method newSingleThreadExecutor. In this case, only one thread is created and tasks are executed sequentially.

package com.otv;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import com.otv.task.TestTask;

/**
 * @author onlinetechvision.com
 * @since 24 Sept 2011
 * @version 1.0.0
 */
public class TestExecutorService {

    public static void main(String[] args) {
        ExecutorService execService = Executors.newSingleThreadExecutor();
        execService.execute(new TestTask("FirstTestTask"));
        execService.execute(new TestTask("SecondTestTask"));
        execService.execute(new TestTask("ThirdTestTask"));

        execService.shutdown();
    }
}

When TestExecutorService is run, SecondTestTask and ThirdTestTask are executed after FirstTestTask's execution is completed. The output will be seen as below:

24.09.2011 17:38:21 DEBUG (TestTask.java:21) - FirstTestTask is sleeping...
24.09.2011 17:38:24 DEBUG (TestTask.java:23) - FirstTestTask is running...
24.09.2011 17:38:24 DEBUG (TestTask.java:21) - SecondTestTask is sleeping...
24.09.2011 17:38:27 DEBUG (TestTask.java:23) - SecondTestTask is running...
24.09.2011 17:38:27 DEBUG (TestTask.java:21) - ThirdTestTask is sleeping...
24.09.2011 17:38:30 DEBUG (TestTask.java:23) - ThirdTestTask is running...STEP 6 : REFERENCES http://download.oracle.com/javase/6/docs/api/java/util/concurrent/ExecutorService.html http://tutorials.jenkov.com/java-util-concurrent/executorservice.html  Reference: Java Executor Service Types from our JCG partner Eren Avsarogullari at the Online Technology Vision blog....
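The examples above all use execute(Runnable). As a complement, here is a minimal sketch of the submit(...) variants described earlier (the class name and the task bodies are made up for illustration; anonymous classes keep it compilable on the Java 5/6 the article targets):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SubmitDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService execService = Executors.newFixedThreadPool(2);

        // submit(Runnable) returns a Future whose get() yields null on success.
        Future<?> done = execService.submit(new Runnable() {
            public void run() {
                System.out.println("task finished");
            }
        });
        System.out.println(done.get()); // null, once the Runnable completes

        // submit(Callable) returns the Callable's result instead.
        Future<Integer> sum = execService.submit(new Callable<Integer>() {
            public Integer call() {
                return 1 + 2 + 3;
            }
        });
        System.out.println(sum.get()); // 6

        execService.shutdown();
    }
}
```

Unlike execute, submit lets the caller block on get() for completion or a result, and any exception thrown inside the task is rethrown (wrapped in an ExecutionException) from get() rather than being lost.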
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.