

Batching (collapsing) requests in Hystrix

Hystrix has an advanced feature of collapsing (or batching) requests. If two or more commands run a similar request at the same time, Hystrix can combine them, run one batched request and dispatch the split results back to all commands. Let's first see how Hystrix works without collapsing. Imagine we have a service that looks up the StockPrice of a given Ticker:

    import lombok.Value;

    import java.math.BigDecimal;
    import java.time.Instant;

    @Value
    class Ticker {
        String symbol;
    }

    @Value
    class StockPrice {
        BigDecimal price;
        Instant effectiveTime;
    }

    interface StockPriceGateway {

        default StockPrice load(Ticker stock) {
            final Set<Ticker> oneTicker = Collections.singleton(stock);
            return loadAll(oneTicker).get(stock);
        }

        ImmutableMap<Ticker, StockPrice> loadAll(Set<Ticker> tickers);
    }

The core implementation of StockPriceGateway must provide the loadAll() batch method, while the load() method is implemented for our convenience. So our gateway is capable of loading multiple prices in one batch (e.g. to reduce latency or network protocol overhead), but at the moment we are not using this feature, always loading the price of one stock at a time:

    class StockPriceCommand extends HystrixCommand<StockPrice> {

        private final StockPriceGateway gateway;
        private final Ticker stock;

        StockPriceCommand(StockPriceGateway gateway, Ticker stock) {
            super(HystrixCommandGroupKey.Factory.asKey("Stock"));
            this.gateway = gateway;
            this.stock = stock;
        }

        @Override
        protected StockPrice run() throws Exception {
            return gateway.load(stock);
        }
    }

Such a command will always call StockPriceGateway.load() for each and every Ticker, as illustrated by the following tests:

    class StockPriceCommandTest extends Specification {

        def gateway = Mock(StockPriceGateway)

        def 'should fetch price from external service'() {
            given:
            gateway.load(TickerExamples.any()) >> StockPriceExamples.any()
            def command = new StockPriceCommand(gateway, TickerExamples.any())

            when:
            def price = command.execute()

            then:
            price == StockPriceExamples.any()
        }

        def 'should call gateway exactly once when running Hystrix command'() {
            given:
            def command = new StockPriceCommand(gateway, TickerExamples.any())

            when:
            command.execute()

            then:
            1 * gateway.load(TickerExamples.any())
        }

        def 'should call gateway twice when command executed two times'() {
            given:
            def commandOne = new StockPriceCommand(gateway, TickerExamples.any())
            def commandTwo = new StockPriceCommand(gateway, TickerExamples.any())

            when:
            commandOne.execute()
            commandTwo.execute()

            then:
            2 * gateway.load(TickerExamples.any())
        }

        def 'should call gateway twice even when executed in parallel'() {
            given:
            def commandOne = new StockPriceCommand(gateway, TickerExamples.any())
            def commandTwo = new StockPriceCommand(gateway, TickerExamples.any())

            when:
            Future<StockPrice> futureOne = commandOne.queue()
            Future<StockPrice> futureTwo = commandTwo.queue()

            and:
            futureOne.get()
            futureTwo.get()

            then:
            2 * gateway.load(TickerExamples.any())
        }
    }

If you don't know Hystrix: by wrapping an external call in a command you gain a lot of features like timeouts, circuit breakers, etc., but this is not the focus of this article. Look at the last two tests: when asking for the price of an arbitrary ticker twice, sequentially or in parallel (queue()), our external gateway is also called twice. The last test is especially interesting – we ask for the same ticker at almost the same time, but Hystrix can't figure that out. These two commands are fully independent, will be executed in different threads and don't know anything about each other – even though they run at almost the same time. Collapsing is all about finding such similar requests and combining them. Batching (I will use this term interchangeably with collapsing) doesn't happen automatically and requires a bit of coding.
But first let's see how it behaves:

    def 'should collapse two commands executed concurrently for the same stock ticker'() {
        given:
        def anyTicker = TickerExamples.any()
        def tickers = [anyTicker] as Set

        and:
        def commandOne = new StockTickerPriceCollapsedCommand(gateway, anyTicker)
        def commandTwo = new StockTickerPriceCollapsedCommand(gateway, anyTicker)

        when:
        Future<StockPrice> futureOne = commandOne.queue()
        Future<StockPrice> futureTwo = commandTwo.queue()

        and:
        futureOne.get()
        futureTwo.get()

        then:
        0 * gateway.load(_)
        1 * gateway.loadAll(tickers) >> ImmutableMap.of(anyTicker, StockPriceExamples.any())
    }

    def 'should collapse two commands executed concurrently for the different stock tickers'() {
        given:
        def anyTicker = TickerExamples.any()
        def otherTicker = TickerExamples.other()
        def tickers = [anyTicker, otherTicker] as Set

        and:
        def commandOne = new StockTickerPriceCollapsedCommand(gateway, anyTicker)
        def commandTwo = new StockTickerPriceCollapsedCommand(gateway, otherTicker)

        when:
        Future<StockPrice> futureOne = commandOne.queue()
        Future<StockPrice> futureTwo = commandTwo.queue()

        and:
        futureOne.get()
        futureTwo.get()

        then:
        1 * gateway.loadAll(tickers) >> ImmutableMap.of(
                anyTicker, StockPriceExamples.any(),
                otherTicker, StockPriceExamples.other())
    }

    def 'should correctly map collapsed response into individual requests'() {
        given:
        def anyTicker = TickerExamples.any()
        def otherTicker = TickerExamples.other()
        def tickers = [anyTicker, otherTicker] as Set
        gateway.loadAll(tickers) >> ImmutableMap.of(
                anyTicker, StockPriceExamples.any(),
                otherTicker, StockPriceExamples.other())

        and:
        def commandOne = new StockTickerPriceCollapsedCommand(gateway, anyTicker)
        def commandTwo = new StockTickerPriceCollapsedCommand(gateway, otherTicker)

        when:
        Future<StockPrice> futureOne = commandOne.queue()
        Future<StockPrice> futureTwo = commandTwo.queue()

        and:
        def anyPrice = futureOne.get()
        def otherPrice = futureTwo.get()

        then:
        anyPrice == StockPriceExamples.any()
        otherPrice == StockPriceExamples.other()
    }

The first test proves that instead of calling load() twice, we only called loadAll() once. Also notice that since we asked for the same Ticker (from two different threads), loadAll() asks for only one ticker. The second test shows two concurrent requests for two different tickers being collapsed into one batch call. The third test makes sure we still get proper responses to each individual request. Instead of extending HystrixCommand we must extend the more complex HystrixCollapser. Now it's time to see the StockTickerPriceCollapsedCommand implementation, which seamlessly replaced StockPriceCommand:

    class StockTickerPriceCollapsedCommand extends HystrixCollapser<ImmutableMap<Ticker, StockPrice>, StockPrice, Ticker> {

        private final StockPriceGateway gateway;
        private final Ticker stock;

        StockTickerPriceCollapsedCommand(StockPriceGateway gateway, Ticker stock) {
            super(HystrixCollapser.Setter.withCollapserKey(HystrixCollapserKey.Factory.asKey("Stock"))
                    .andCollapserPropertiesDefaults(HystrixCollapserProperties.Setter().withTimerDelayInMilliseconds(100)));
            this.gateway = gateway;
            this.stock = stock;
        }

        @Override
        public Ticker getRequestArgument() {
            return stock;
        }

        @Override
        protected HystrixCommand<ImmutableMap<Ticker, StockPrice>> createCommand(Collection<CollapsedRequest<StockPrice, Ticker>> collapsedRequests) {
            final Set<Ticker> stocks = collapsedRequests.stream()
                    .map(CollapsedRequest::getArgument)
                    .collect(toSet());
            return new StockPricesBatchCommand(gateway, stocks);
        }

        @Override
        protected void mapResponseToRequests(ImmutableMap<Ticker, StockPrice> batchResponse, Collection<CollapsedRequest<StockPrice, Ticker>> collapsedRequests) {
            collapsedRequests.forEach(request -> {
                final Ticker ticker = request.getArgument();
                final StockPrice price = batchResponse.get(ticker);
                request.setResponse(price);
            });
        }
    }

A lot is going on here, so let's review StockTickerPriceCollapsedCommand step by step.
First, the three generic types:

- BatchReturnType (ImmutableMap<Ticker, StockPrice> in our example) is the type of the batched command response. As you will see later, the collapser turns multiple small commands into one batch command. This is the type of that batch command's response. Notice that it's the same as the StockPriceGateway.loadAll() return type.
- ResponseType (StockPrice) is the type of each individual command being collapsed. In our case we are collapsing HystrixCommand<StockPrice>. Later we will split a value of BatchReturnType into multiple StockPrice instances.
- RequestArgumentType (Ticker) is the input of each individual command we are about to collapse (batch). When multiple commands are batched together, we eventually replace all of them with one batched command. This command should receive all individual requests in order to perform one batch request.

withTimerDelayInMilliseconds(100) will be explained soon. createCommand() creates a batch command. This command should replace all individual commands and perform the batched logic. In our case, instead of multiple individual load() calls we just make one:

    class StockPricesBatchCommand extends HystrixCommand<ImmutableMap<Ticker, StockPrice>> {

        private final StockPriceGateway gateway;
        private final Set<Ticker> stocks;

        StockPricesBatchCommand(StockPriceGateway gateway, Set<Ticker> stocks) {
            super(HystrixCommandGroupKey.Factory.asKey("Stock"));
            this.gateway = gateway;
            this.stocks = stocks;
        }

        @Override
        protected ImmutableMap<Ticker, StockPrice> run() throws Exception {
            return gateway.loadAll(stocks);
        }
    }

The only difference between this class and StockPriceCommand is that it takes a bunch of Tickers and returns prices for all of them. Hystrix will collect a few instances of StockTickerPriceCollapsedCommand and, once it has enough (more on that later), it will create a single StockPricesBatchCommand. Hope this is clear, because mapResponseToRequests() is slightly more involved.
Once our collapsed StockPricesBatchCommand finishes, we must somehow split the batch response and communicate the replies back to the individual commands, which are unaware of collapsing. From that perspective the mapResponseToRequests() implementation is fairly straightforward: we receive the batch response and a collection of wrapped CollapsedRequest<StockPrice, Ticker>. We must now iterate over all awaiting individual requests and complete them (setResponse()). If we don't complete some of the requests, they will hang infinitely and eventually time out.

How it works

This is the right moment to describe how collapsing is implemented. I said before that collapsing happens when two requests occur at the same time. There is no such thing as the same time. In reality, when the first collapsible request comes in, Hystrix starts a timer. In our examples we set it to 100 milliseconds. During that period our command is suspended, waiting for other commands to join. After this configurable period Hystrix will call createCommand(), gathering all request keys (by calling getRequestArgument()) and run it. When the batched command finishes, it will let us dispatch results to all awaiting individual commands. It is also possible to limit the number of collapsed requests if we are afraid of creating a humongous batch – on the other hand, how many concurrent requests can fit within this short time slot?

Use cases and drawbacks

Request collapsing should be used in systems with extreme load – a high frequency of requests. If you get just one request per collapsing time window (100 milliseconds in our examples), collapsing will just add overhead. That's because every time you call a collapsible command, it must wait just in case some other command wants to join and form a batch. This makes sense only when at least a couple of commands are collapsed.
Time wasted on waiting is balanced by savings in network latency and/or better utilization of resources in our collaborator (very often batch requests are much faster compared to individual calls). But keep in mind that collapsing is a double-edged sword, useful in specific cases. One last thing to remember – in order to use request collapsing you need HystrixRequestContext.initializeContext() and shutdown() in a try-finally block:

    HystrixRequestContext context = HystrixRequestContext.initializeContext();
    try {
        //...
    } finally {
        context.shutdown();
    }

Collapsing vs. caching

You might think that collapsing can be replaced with proper caching. This is not true. You use a cache when:

1. the resource is likely to be accessed multiple times
2. we can safely use the previous value; it will remain valid for some period of time or we know precisely how to invalidate it
3. we can afford concurrent requests for the same resource to compute it multiple times

On the other hand, collapsing does not enforce locality of data (1), it always hits the real service and never returns stale data (2). And finally, if we ask for the same resource from multiple threads, we will only call the backing service once (3). In the case of caching, unless your cache is really smart, two threads will independently discover the absence of a given resource in the cache and ask the backing service twice. However, collapsing can work together with caching – by consulting the cache before running a collapsible command.

Summary

Request collapsing is a useful tool, but with very limited use cases. It can significantly improve throughput in our system as well as limit the load on an external service. Collapsing can magically flatten peaks in traffic, rather than spreading it all over. Just make sure you are using it for commands running with extreme frequency.

Reference: Batching (collapsing) requests in Hystrix from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog....
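The timer-based window described under "How it works" can be sketched in plain Java. This is a toy illustration, not Hystrix itself – the ToyCollapser name and its shape are made up – but it shows the core idea: the first request schedules a flush, later requests within the window join the pending batch, and one batch call completes all waiting futures.

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.function.Function;

// Toy illustration of timer-based request collapsing (NOT Hystrix itself).
// The first submitted request starts a timer; every request arriving within
// the window joins the pending batch; when the timer fires, a single batch
// call is made and its results are dispatched back to each waiting future.
class ToyCollapser<K, V> {

    private final long windowMillis;
    private final Function<Set<K>, Map<K, V>> batchLoader;
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r);
                t.setDaemon(true);
                return t;
            });
    private Map<K, CompletableFuture<V>> pending;  // guarded by 'this'

    ToyCollapser(long windowMillis, Function<Set<K>, Map<K, V>> batchLoader) {
        this.windowMillis = windowMillis;
        this.batchLoader = batchLoader;
    }

    synchronized CompletableFuture<V> submit(K key) {
        if (pending == null) {
            pending = new HashMap<>();
            // first request in this window: start the collapsing timer
            scheduler.schedule(this::flush, windowMillis, TimeUnit.MILLISECONDS);
        }
        // concurrent requests for the same key share one future and one batch entry
        return pending.computeIfAbsent(key, k -> new CompletableFuture<>());
    }

    private void flush() {
        Map<K, CompletableFuture<V>> batch;
        synchronized (this) {
            batch = pending;
            pending = null;  // the next submit() opens a fresh window
        }
        Map<K, V> results = batchLoader.apply(batch.keySet());
        // the equivalent of mapResponseToRequests(): complete each waiting request
        batch.forEach((k, f) -> f.complete(results.get(k)));
    }
}
```

Hystrix's real collapser additionally integrates with command execution, circuit breakers and request scoping; the sketch only demonstrates the windowing mechanics.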

STOMP over WebSocket

STOMP is the Simple Text Oriented Messaging Protocol. It defines an interoperable wire format that allows a STOMP client to communicate with any STOMP message broker. This provides easy and widespread messaging interoperability among different languages, platforms and brokers. The specification explains what makes it different from other messaging protocols: it is an alternative to other open messaging protocols such as AMQP and to implementation-specific wire protocols used in JMS brokers, such as OpenWire. It distinguishes itself by covering a small subset of commonly used messaging operations rather than providing a comprehensive messaging API. STOMP is a frame-based protocol. A frame consists of a command, a set of optional headers and an optional body. Commonly used commands are:

- CONNECT
- SEND
- SUBSCRIBE
- UNSUBSCRIBE
- ACK
- NACK
- DISCONNECT

WebSocket messages are also transmitted as frames. STOMP over WebSocket maps STOMP frames to WebSocket frames. Different messaging servers like HornetQ, ActiveMQ, RabbitMQ, and others provide native support for STOMP over WebSocket. Let's take a look at a simple sample of how to use STOMP over WebSocket using ActiveMQ. The source code for the sample is available at github.com/arun-gupta/wildfly-samples/tree/master/websocket-stomp. Let's get started!

Download ActiveMQ 5.10 or provision an ActiveMQ instance in OpenShift as explained at github.com/arun-gupta/activemq-openshift-cartridge:

    workspaces> rhc app-create activemq diy --from-code=git://github.com/arun-gupta/activemq-openshift-cartridge.git
    Using diy-0.1 (Do-It-Yourself 0.1) for 'diy'

    Application Options
    -------------------
    Domain:      milestogo
    Cartridges:  diy-0.1
    Source Code: git://github.com/arun-gupta/activemq-openshift-cartridge.git
    Gear Size:   default
    Scaling:     no

    Creating application 'activemq' ... done

    Disclaimer: This is an experimental cartridge that provides a way to try unsupported languages, frameworks, and middleware on OpenShift.

    Waiting for your DNS name to be available ... done

    Cloning into 'activemq'...
    Warning: Permanently added the RSA host key for IP address '' to the list of known hosts.

    Your application 'activemq' is now available.

    URL:        http://activemq-milestogo.rhcloud.com/
    SSH to:     545b096a500446e6710004ae@activemq-milestogo.rhcloud.com
    Git remote: ssh://545b096a500446e6710004ae@activemq-milestogo.rhcloud.com/~/git/activemq.git/
    Cloned to:  /Users/arungupta/workspaces/activemq

    Run 'rhc show-app activemq' for more details about your app.

    workspaces> rhc port-forward activemq
    Checking available ports ... done
    Forwarding ports ...

    To connect to a service running on OpenShift, use the Local address

    Service Local OpenShift
    ------- --------------- ---- -----------------
    java => java => java => java => java => java =>

    CTRL-C to terminate port forwarding

Download the WildFly 8.1 zip, unzip it, and start it with bin/standalone.sh. Then clone the repo and deploy the sample on WildFly:

    git clone https://github.com/arun-gupta/wildfly-samples.git
    cd wildfly-samples
    mvn wildfly:deploy

Access the application at localhost:8080/websocket-stomp-1.0-SNAPSHOT/ to see the page. Specify the text payload "foobar" and use ActiveMQ conventions for topics and queues to specify a queue name as "/queue/myQ1". Click on the Connect, Send Message, Subscribe, and Disconnect buttons one after the other. This will display messages in your browser window as the WebSocket connection is established, a STOMP message is sent to the queue, the queue is subscribed to in order to receive the message, and finally the connection is disconnected. The STOMP frames can be seen using Chrome Developer Tools. As you can see, each STOMP frame is mapped to a WebSocket frame. In short, ActiveMQ on OpenShift is running a STOMP broker on port 61614, which is accessible on localhost:61614 by port-forwarding. Clicking on the Connect button uses the Stomp library bundled with the application to establish a WebSocket connection with ws://localhost:61614/.
Subsequent buttons send STOMP frames over WebSocket, as shown in the Frames tab of Developer Tools. Read more details about how all the pieces work together at jmesnil.net/stomp-websocket/doc/. Jeff has also written an excellent book explaining STOMP over WebSocket and a lot of other interesting things that can be done over WebSocket in his Mobile and Web Messaging book.

Reference: STOMP over WebSocket from our JCG partner Arun Gupta at the Miles to go 2.0 … blog....
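To make the frame format concrete, here is a small Java sketch that builds the SEND frame a client would transmit for the "/queue/myQ1" example above. The StompFrames helper name is made up for illustration; the wire format itself (command line, header lines, blank line, body, NUL terminator) follows the STOMP specification.

```java
import java.nio.charset.StandardCharsets;

// Builds a STOMP SEND frame following the wire format described above:
// a command line, header lines, a blank line, the body, and a terminating
// NUL octet, as required by the STOMP specification.
class StompFrames {

    static String sendFrame(String destination, String body) {
        int length = body.getBytes(StandardCharsets.UTF_8).length;
        return "SEND\n"
             + "destination:" + destination + "\n"
             + "content-type:text/plain\n"
             + "content-length:" + length + "\n"
             + "\n"          // blank line separates headers from body
             + body
             + "\u0000";     // NUL octet terminates the frame
    }
}
```

A real client library (like the Stomp library bundled with the sample) produces frames of exactly this shape and ships each one inside a WebSocket frame.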

Apache Camel please explain me what these endpoint options mean

In the upcoming Apache Camel 2.15, we have made Camel smarter. It is now able to act as a teacher and explain to you how it is configured and what those options mean. The first lesson Camel can teach is how all the endpoints have been configured and what these options mean. The next lesson we are working on is to let Camel explain the options of the EIPs. Okay, a picture is worth a thousand words, so let me show a screenshot from Apache Karaf, where you can use the new endpoint-explain command to explain how the endpoints have been configured. The screenshot is from the SQL example which I have installed in Karaf. This example uses a number of endpoints, among them a timer that triggers every 5 seconds. As you can see from above, the command lists the endpoint uri: timer://foo?period=5s and then explains the option(s) below. As the uri has only one option, only one is listed. We can see that the option is named period. Its Java type is a long. The JSON schema type is integer. We can see the value is 5s, and below it the description which explains what the value does. So why are there two types listed? The idea is that there is a type that is suitable for tooling etc., as it is a simpler category of type according to the JSON Schema specification. The actual type in Java is listed as well. The timer endpoint has many more options, so we can use the --verbose option to list all of them. The explain endpoint functionality is also available via JMX or as a Java API on the CamelContext. For JMX, each endpoint mbean has an explain operation that returns tabular data with the information as above; this can be seen in jconsole. In addition there is a generic explainEndpointJson operation on the CamelContext MBean, which allows explaining any arbitrary uri that is provided – so you can explain endpoints that are not in use by Camel. So how does this work?
During the build of an Apache Camel release, for each component we generate an HTML and JSON schema where each endpoint option is documented with its name, type, and description. For enums we list the possible values. An example of such a JSON schema is generated for the camel-sql component. Now, for this to work, the component must support uri options, which requires annotating the endpoint with @UriEndpoint. The Camel team has not migrated all the 160+ components in the Camel release yet, but we plan to migrate the components over time. And certainly now that we have this new functionality, it encourages us to migrate all the components. So where do we get the documentation? Well, it's just Java code, so all you have to do is have a getter/setter for an endpoint option. Add the @UriParam annotation, and for the setter you just add javadoc. Yes, we grab the javadoc as the documentation. So it's documented in just one place, in the source code, as standard javadoc. I hope that in the future we can auto-generate the Camel website documentation for the components, so we do not have to maintain that separately in its wiki system. That would take hard work to implement, but eventually we should get there, so every component is documented in the source code. For example, we could have a readme.md for each component that holds all the component documentation, and then the endpoint options are injected from the Camel build system into that readme.md file automatically. Having readme.md files also allows github users to browse the Camel component documentation nicely using github style! So what is next? The hawtio web console will integrate this as well, so users of Camel 2.15 onwards have that information in the web console out of the box. And then it's onwards to include documentation about the EIPs in the XML schemas for Spring/Blueprint users, and to improve the javadoc for the EIPs, as that then becomes the single source of documentation as well.
This then allows tooling such as Eclipse / IDEA / NetBeans and whatnot to show the documentation when people develop their Camel routes in the XML editor, as the documentation is provided in the XSD as xsd:documentation tags. We have captured some thoughts on what else to do in the CAMEL-7999 ticket. If you have any ideas on what else to improve, we would love feedback from the community.

Reference: Apache Camel please explain me what these endpoint options mean from our JCG partner Claus Ibsen at the Claus Ibsen riding the Apache Camel blog....
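As a rough illustration of what such a generated per-option JSON schema conveys, consider the timer endpoint's period option discussed earlier. The exact field names and layout of Camel's generated schema may differ; this is only a hedged sketch of the idea (an option name with its JSON schema type, its Java type, and a javadoc-derived description):

```json
{
  "period": {
    "type": "integer",
    "javaType": "long",
    "description": "Time in millis between each run of the timer."
  }
}
```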

OptaPlanner – Open benchmarks for the win

Recently, there was some commotion on Twitter because a competitor heavily restricts publicizing benchmarks of their Solver as part of their license. That might seem harsh, but I can understand the sentiment: when a competitor publicizes a benchmark report comparing our product against their own, I know we're gonna get screwed. Unlike single product benchmarking, competitive benchmarking is inherently dishonest…

Competitive benchmarking for dummies

As a competitor, you can utilize several (obvious and not so obvious) means to prove your superiority over another Solver:

Publication bias
- Pick a use case which is known to work well in your Solver.
- Use datasets with a scale and granularity which are known to work well in your Solver.
- If you're really evil, benchmark multiple use cases and datasets in both Solvers and only retain those for which your Solver wins.

Expertise imbalance
- Let one of your experts develop the implementations for both Solvers.
- Motivation: like any other company, your company only employs experts in your own technology.
- If he has years of recent experience in your technology, it's unlikely he has had time for any recent experience with the competing technology.
- So you're effectively using your jockey on someone else's horse.

Tweaking imbalance
- Spend an equal amount of time on both implementations.
- The use case is probably already implemented in your Solver (or straightforward to implement), so you can spend most of the time budget tweaking it.
- You'll need to learn the competitor's Solver first, so you'll spend most of the time budget in that implementation learning the technology, which leaves no room for tweaking.

Funding
- There's no need to explicitly set a desired outcome: your developer will know better than to bite the hand that feeds him.

Notice how these approaches don't require any malice (except for the evil one): it's normal to conduct a competitive benchmark like this… Furthermore, you can make the competitive benchmark comparison look more objective by sponsoring an academic research group to do the benchmark for you. Just make sure it's a research group which has been happily using your technology for years and has little or no experience with the competition.

Marketing value

The marketing value of such a benchmark report should not be underestimated. These numbers, written in black and white, which clearly show the superiority of your Solver over another Solver, make a strong argument:

- To close sales deals, when in direct competition with the other Solver.
- To convince developers, researchers and students to learn and use your technology.
- To build a strong, long-term reputation. Benchmarks from the 90's can still affect the Google search results today, for example for "performance of Java vs C++". Such information spreads virally, and counter claims might not.

Empirical evidence

Are all competitive benchmark reports lying? Yes, they are probably misrepresenting the truth. Should we therefore restrict users from publicizing benchmarks of our Solver? No, of course not (even if our open source licence would allow such conditions, which it does not). Computer science – like any other science – is built on empirical evidence: the promise that any experiment I publish can be repeated by others independently. If we prevent people from publishing such repeated experiments, we undermine our science.
In fact, the more people who report their benchmarks, the more clearly our strengths and weaknesses show. Historically, this approach has already enabled us to diagnose and fix weaknesses, regardless of whether those were caused by our Solver or by the user's domain-specific implementation. Therefore, OptaPlanner welcomes external benchmark reports. I believe in Open Science, as strongly as I believe in Open Source. I do ask the courtesy of allowing public comments/feedback on a public report website, as well as publicizing the details (such as the Solver configuration). If you use the OptaPlanner Benchmarker toolkit (which you will find convenient), simply share the benchmarker HTML report. To run any of the benchmarks of the OptaPlanner Examples locally, simply run a *BenchmarkApp executable class, for example CloudBalancingBenchmarkApp. Notice how a small change in the *BenchmarkConfig.xml, such as switching score calculation from Easy Java to Drools or from Drools to Incremental Java, can have a serious effect on the results. In short: I like external benchmarks, but dislike competitive benchmarks, except for…

Independent research challenges

Can we compare fairly with our competition? Yes, through an independent research challenge. Regularly, the academic community launches such challenges. Each challenge:

- defines a real-world use case with real-world constraints
- provides multiple real-world datasets (half of which they keep hidden)
- expects reproducible results within a specific time limit on specific hardware
- gets worldwide participation from the academic and/or enterprise Operations Research community
- benchmarks each contestant's implementation on the same hardware with the same time limit to determine a winner
- benchmarks the hidden datasets to counter overfitting and dataset recognition

It's fair: each jockey rides his own horse. Most of the arguments against competitive benchmarking do not apply.
And as an added bonus, we get to learn from and compare with the academic research community. In the past, OptaPlanner has done well in these challenges, despite the limited weekend time we have to spend on them. In the last challenge, the ICON power scheduling challenge, we (Lukas, Matej and I) finished in 2nd place. A minority of the researchers still beat us (with their innovative algorithms in their experimental contraptions and massive time to tweak/build those), but it's been years since a competitive Solver has beaten us.

Long term vision

Sharing our benchmarks and enabling others to easily reproduce them is part of a bigger vision. Too many research papers (on metaheuristics and other optimization algorithms) are hard to reproduce. That's the paradox in computer science research: to reproduce the findings of a research paper, all we really need is a computer and the code. We don't need an expensive laboratory. Yet, in practice, the code is usually closed and the raw benchmark data is not accessible. It's like everyone is scared of sharing the dirty secrets of their code and their benchmarks. I believe that we – the worldwide optimization research community – need to create a benchmark repository: a centralized repository of benchmarks for every use case, for every dataset, for every algorithm, for every implementation version, for any amount of running time. That, together with a good statistical interface, will give us some real insight as to which optimization algorithms are good under which circumstances. We – in OptaPlanner – are well on our way to building exactly that:

- OptaPlanner Examples already implements 14 distinct use cases.
- For each use case, we are already benchmarking many different optimization algorithms.
- Our benchmarker HTML report already includes many useful statistics to analyse the raw benchmark data.

Reference: OptaPlanner – Open benchmarks for the win from our JCG partner Geoffrey De Smet at the OptaPlanner blog....

JUnit Tutorial for Unit Testing – The ULTIMATE Guide (PDF Download)

EDITORIAL NOTE: We have provided plenty of JUnit tutorials here at Java Code Geeks, like JUnit Getting Started Example, JUnit Using Assertions and Annotations Example, JUnit Annotations Example and so on. However, we preferred to gather all the JUnit features in one detailed guide for the convenience of the reader. We hope you like it!

Table Of Contents

1. Unit testing introduction
    1.1. What is unit testing?
    1.2. Test coverage
    1.3. Unit testing in Java
2. JUnit introduction
    2.1. JUnit Simple Example using Eclipse
    2.2. JUnit annotations
    2.3. JUnit assertions
3. JUnit complete example using Eclipse
    3.1. Initial steps
    3.2. Create a java class to be tested
    3.3. Create and run a JUnit test case
    3.4. Using @Ignore annotation
    3.5. Creating suite tests
    3.6. Creating parameterized tests
    3.7. Rules
    3.8. Categories
4. Run JUnit tests from command line
5. Conclusions

1. Unit testing introduction

1.1. What is unit testing?

A unit can be a function, a class, a package, or a subsystem. So, the term unit testing refers to the practice of testing such small units of your code, so as to ensure that they work as expected. For example, we can test whether an output is what we expected to see given some inputs, or whether a condition is true or false. This practice helps developers to discover failures in the logic behind their code and improve its quality.
Also, unit testing can be used to ensure that the code will work as expected in case of future changes.

1.2. Test coverage

In general, the development community has different opinions regarding the percentage of code that should be tested (test coverage). Some developers believe that the code should have 100% test coverage, while others are satisfied with a test coverage of 50% or less. In any case, you should write tests for complex or critical parts of your code.

1.3. Unit testing in Java

The most popular testing framework in Java is JUnit. As this guide is focused on JUnit, more details on this testing framework will be presented in the next sections. Another popular testing framework in Java is TestNG.

2. JUnit introduction

JUnit is an open source testing framework which is used to write and run repeatable automated tests, so that we can be sure that our code works as expected. JUnit is widely used in industry and can be used as a stand-alone Java program (from the command line) or within an IDE such as Eclipse. JUnit provides:

- Assertions for testing expected results.
- Test features for sharing common test data.
- Test suites for easily organizing and running tests.
- Graphical and textual test runners.

JUnit is used to test:

- an entire object
- part of an object – a method or some interacting methods
- interaction between several objects

2.1. JUnit Simple Example using Eclipse

In this section we will see a simple JUnit example. First we will present the class we would like to test:

Calculate.java

    package com.javacodegeeks.junit;

    public class Calculate {

        public int sum(int var1, int var2) {
            System.out.println("Adding values: " + var1 + " + " + var2);
            return var1 + var2;
        }
    }

In the above source code, we can notice that the class has one public method named sum(), which gets two integers as input, adds them and returns the result. So, we will test this method.
For this purpose, we will create another class containing methods that will test each of the methods of the previous class (in this case, we have only one method to be tested). This is the most common way of usage. Of course, if a method is very complex and extended, we can have more than one test method for it. The details of creating test cases will be presented in the next sections. Below is the code of the class named CalculateTest.java, which plays the role of our test class: CalculateTest.java package com.javacodegeeks.junit;import static org.junit.Assert.*;import org.junit.Test;public class CalculateTest {Calculate calculation = new Calculate(); int sum = calculation.sum(2, 5); int testSum = 7;@Test public void testSum() { System.out.println("@Test sum(): " + sum + " = " + testSum); assertEquals(sum, testSum); }}Let’s explain the above code. Firstly, we can see that there is a @Test annotation above the testSum() method. This annotation indicates that the public void method to which it is attached can be run as a test case. Hence, the testSum() method is the method that will test the sum() public method. We can also observe a method called assertEquals(sum, testSum). The method assertEquals([String message], object expected, object actual) takes as inputs two objects and asserts that they are equal. If we run the test class, by right-clicking the test class and selecting Run As -> JUnit Test, the program output will look like this: Adding values: 2 + 5 @Test sum(): 7 = 7To see the actual result of a JUnit test, the Eclipse IDE provides a JUnit window which shows the results of the tests. 
In this case, where the test succeeds, the JUnit window shows no errors or failures, as we can see in the image below:Now, if we change this line of code:int testSum = 10;so that the integers to be tested are not equal, the output will be: Adding values: 2 + 5 @Test sum(): 7 = 10And in the JUnit window, an error will appear and this message will be displayed: java.lang.AssertionError: expected:<7> but was:<10> at com.javacodegeeks.junit.CalculateTest.testSum(CalculateTest.java:16) 2.2. JUnit annotations In this section we will mention the basic annotations supported in JUnit 4. The table below presents a summary of those annotations:Annotation Description@Test public void method() The Test annotation indicates that the public void method to which it is attached can be run as a test case.@Before public void method() The Before annotation indicates that this method must be executed before each test in the class, so as to set up the preconditions necessary for the test.@BeforeClass public static void method() The BeforeClass annotation indicates that the static method to which it is attached must be executed once, before all tests in the class. That is useful when the test methods share computationally expensive setup (e.g. connecting to a database).@After public void method() The After annotation indicates that this method gets executed after each test (e.g. to reset some variables after every test, delete temporary variables, etc.).@AfterClass public static void method() The AfterClass annotation can be used when a method needs to be executed after all the tests in a JUnit test case class, so as to clean up the expensive set-up (e.g. disconnecting from a database). Attention: the method attached to this annotation (similar to BeforeClass) must be defined as static.@Ignore public void method() The Ignore annotation can be used when you want to temporarily disable the execution of a specific test. 
Every method that is annotated with @Ignore won’t be executed. Let’s see an example of a test class with some of the annotations mentioned above. AnnotationsTest.java package com.javacodegeeks.junit;import static org.junit.Assert.*; import java.util.*; import org.junit.*;public class AnnotationsTest {private ArrayList<String> testList;@BeforeClass public static void onceExecutedBeforeAll() { System.out.println("@BeforeClass: onceExecutedBeforeAll"); }@Before public void executedBeforeEach() { testList = new ArrayList<String>(); System.out.println("@Before: executedBeforeEach"); }@AfterClass public static void onceExecutedAfterAll() { System.out.println("@AfterClass: onceExecutedAfterAll"); }@After public void executedAfterEach() { testList.clear(); System.out.println("@After: executedAfterEach"); }@Test public void EmptyCollection() { assertTrue(testList.isEmpty()); System.out.println("@Test: EmptyArrayList");}@Test public void OneItemCollection() { testList.add("oneItem"); assertEquals(1, testList.size()); System.out.println("@Test: OneItemArrayList"); }@Ignore public void executionIgnored() {System.out.println("@Ignore: This execution is ignored"); } }If we run the above test, the console output would be the following: @BeforeClass: onceExecutedBeforeAll @Before: executedBeforeEach @Test: EmptyArrayList @After: executedAfterEach @Before: executedBeforeEach @Test: OneItemArrayList @After: executedAfterEach @AfterClass: onceExecutedAfterAll2.3. JUnit assertions In this section we will present a number of assertion methods. All those methods are provided by the Assert class and they are useful for writing tests so as to detect failures. In the table below there is a more detailed explanation of the most commonly used assertion methods.Assertion Descriptionvoid assertEquals([String message], expected value, actual value) Asserts that two values are equal. Values may be of type int, short, long, byte, char or java.lang.Object. 
The first argument is an optional String message.void assertTrue([String message], boolean condition) Asserts that a condition is true.void assertFalse([String message], boolean condition) Asserts that a condition is false.void assertNotNull([String message], java.lang.Object object) Asserts that an object is not null.void assertNull([String message], java.lang.Object object) Asserts that an object is null.void assertSame([String message], java.lang.Object expected, java.lang.Object actual) Asserts that the two objects refer to the same object.void assertNotSame([String message], java.lang.Object unexpected, java.lang.Object actual) Asserts that the two objects do not refer to the same object.void assertArrayEquals([String message], expectedArray, resultArray) Asserts that the expected array and the resulting array are equal. The type of the arrays may be int, long, short, char, byte or java.lang.Object.  Let’s see an example of some of the aforementioned assertions. AssertionsTest.java package com.javacodegeeks.junit;import static org.junit.Assert.*; import org.junit.Test;public class AssertionsTest {@Test public void test() { String obj1 = "junit"; String obj2 = "junit"; String obj3 = "test"; String obj4 = "test"; String obj5 = null; int var1 = 1; int var2 = 2; int[] arithmetic1 = { 1, 2, 3 }; int[] arithmetic2 = { 1, 2, 3 };assertEquals(obj1, obj2);assertSame(obj3, obj4);assertNotSame(obj2, obj4);assertNotNull(obj1);assertNull(obj5);assertTrue(var1 < var2);assertArrayEquals(arithmetic1, arithmetic2); }}In the class above we can see how these assert methods work.The assertEquals() method will return normally if the two compared objects are equal, otherwise a failure will be displayed in the JUnit window and the test will abort. The assertSame() and assertNotSame() methods test whether two object references point to exactly the same object. The assertNull() and assertNotNull() methods test whether a variable is null or not null. 
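The assertEquals()/assertSame() distinction boils down to equals() versus reference identity (==). A small framework-free sketch (the class name is ours) makes it concrete:

```java
// Framework-free illustration of the assertEquals vs assertSame distinction:
// assertEquals() compares with equals(), assertSame() compares with ==.
public class SameVsEquals {
    public static void main(String[] args) {
        String a = new String("junit"); // two distinct objects
        String b = new String("junit"); // with equal contents
        System.out.println(a.equals(b)); // true  -> assertEquals(a, b) would pass
        System.out.println(a == b);      // false -> assertSame(a, b) would fail
    }
}
```

Note that in the AssertionsTest example, assertSame(obj3, obj4) passes only because equal string literals are interned to the same object by the compiler.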
The assertTrue() and assertFalse() methods test whether a condition or a variable is true or false. The assertArrayEquals() method will compare the two arrays and, if they are equal, it will proceed without errors. Otherwise, a failure will be displayed in the JUnit window and the test will abort.3. JUnit complete example using Eclipse In this section we will show a complete example of using JUnit. We will see in detail how to create and run tests and we will show how to use specific annotations and assertions of JUnit. 3.1. Initial Steps Let’s create a java project named JUnitGuide. In the src folder, we right-click and select New -> Package, so as to create a new package named com.javacodegeeks.junit where we will locate the class to be tested. For the test classes, it is considered good practice to create a new source folder dedicated to tests, so that the classes to be tested and the test classes will be in different source folders. For this purpose, right-click your project, select New -> Source Folder, name the new source folder test and click Finish.TipAlternatively, you can create a new source folder by right-clicking your project and selecting Properties -> Java Build Path, selecting the tab Source, then Add Folder -> Create New Folder, writing the name test and pressing Finish. You can easily see that there are two source folders in your project:You can also create a new package in the newly created test folder, which will be called com.javacodegeeks.junit, so that your test classes won’t be located in the default package, and we are ready to start! 3.2. Create the java class to be tested Right-click the src folder and create a new java class called FirstDayAtSchool.java. This will be the class whose public methods will be tested. 
FirstDayAtSchool.java package com.javacodegeeks.junit;import java.util.Arrays;public class FirstDayAtSchool {public String[] prepareMyBag() { String[] schoolbag = { "Books", "Notebooks", "Pens" }; System.out.println("My school bag contains: " + Arrays.toString(schoolbag)); return schoolbag; }public String[] addPencils() { String[] schoolbag = { "Books", "Notebooks", "Pens", "Pencils" }; System.out.println("Now my school bag contains: " + Arrays.toString(schoolbag)); return schoolbag; } }3.3. Create and run a JUnit test case To create a JUnit test case for the existing class FirstDayAtSchool.java, right-click on it in the Package Explorer view and select New → JUnit Test Case. Change the source folder so that the class will be located in the test source folder and ensure that the flag New JUnit 4 test is selected.Then, click Finish. If your project does not contain the JUnit library in its classpath, the following message will be displayed so as to add the JUnit library to the classpath:Below is the code of the class named FirstDayAtSchoolTest.java, which is our test class: FirstDayAtSchoolTest.java package com.javacodegeeks.junit;import static org.junit.Assert.*;import org.junit.Test;public class FirstDayAtSchoolTest {FirstDayAtSchool school = new FirstDayAtSchool(); String[] bag1 = { "Books", "Notebooks", "Pens" }; String[] bag2 = { "Books", "Notebooks", "Pens", "Pencils" };@Test public void testPrepareMyBag() { System.out.println("Inside testPrepareMyBag()"); assertArrayEquals(bag1, school.prepareMyBag()); }@Test public void testAddPencils() { System.out.println("Inside testAddPencils()"); assertArrayEquals(bag2, school.addPencils()); }}Now we can run the test case by right-clicking on the test class and selecting Run As -> JUnit Test. 
The program output will look like this: Inside testPrepareMyBag() My school bag contains: [Books, Notebooks, Pens] Inside testAddPencils() Now my school bag contains: [Books, Notebooks, Pens, Pencils]and in the JUnit view there will be no failures or errors. If we change one of the arrays so that it contains more elements than expected:String[] bag2 = { "Books", "Notebooks", "Pens", "Pencils", "Rulers"};and run the test class again, the JUnit view will contain a failure:Similarly, if we change one of the arrays so that it contains a different element than expected:String[] bag1 = { "Books", "Notebooks", "Rulers" };and run the test class again, the JUnit view will once again contain a failure:3.4. Using @Ignore annotation Let’s see in the above example how we can use the @Ignore annotation. In the test class FirstDayAtSchoolTest we will add the @Ignore annotation to the testAddPencils() method. In that way, we expect that this test method will be ignored and won’t be executed. package com.javacodegeeks.junit;import static org.junit.Assert.*;import org.junit.Ignore; import org.junit.Test;public class FirstDayAtSchoolTest {FirstDayAtSchool school = new FirstDayAtSchool(); String[] bag1 = { "Books", "Notebooks", "Pens" }; String[] bag2 = { "Books", "Notebooks", "Pens", "Pencils" };@Test public void testPrepareMyBag() { System.out.println("Inside testPrepareMyBag()"); assertArrayEquals(bag1, school.prepareMyBag()); }@Ignore @Test public void testAddPencils() { System.out.println("Inside testAddPencils()"); assertArrayEquals(bag2, school.addPencils()); }}Indeed, this is what happens according to the output: Inside testPrepareMyBag() My school bag contains: [Books, Notebooks, Pens]Now, we will remove the @Ignore annotation from the testAddPencils() method and annotate the whole class instead. 
package com.javacodegeeks.junit;import static org.junit.Assert.*;import org.junit.Ignore; import org.junit.Test;@Ignore public class FirstDayAtSchoolTest {FirstDayAtSchool school = new FirstDayAtSchool(); String[] bag1 = { "Books", "Notebooks", "Pens" }; String[] bag2 = { "Books", "Notebooks", "Pens", "Pencils" };@Test public void testPrepareMyBag() { System.out.println("Inside testPrepareMyBag()"); assertArrayEquals(bag1, school.prepareMyBag()); } @Test public void testAddPencils() { System.out.println("Inside testAddPencils()"); assertArrayEquals(bag2, school.addPencils()); }}The whole test class won’t be executed, so no result will be displayed in the console output or in the JUnit view:3.5. Creating suite tests In this section, we will see how to create suite tests. A test suite is a collection of test cases from different classes that can be run all together using the @RunWith and @Suite annotations. This is very helpful if you have many test classes and you want to run them all together instead of running each test one at a time. When a class is annotated with @RunWith, JUnit will invoke the class it is annotated with to run the tests, instead of using the runner built into JUnit. Based on the classes of the previous sections, we can create two test classes. The one class will test the public method prepareMyBag() and the other test class will test the method addPencils(). 
Hence, we will eventually have the classes below: PrepareMyBagTest.java package com.javacodegeeks.junit;import org.junit.Test; import static org.junit.Assert.*;public class PrepareMyBagTest {FirstDayAtSchool school = new FirstDayAtSchool();String[] bag = { "Books", "Notebooks", "Pens" };@Test public void testPrepareMyBag() {System.out.println("Inside testPrepareMyBag()"); assertArrayEquals(bag, school.prepareMyBag());}}AddPencilsTest.java package com.javacodegeeks.junit;import org.junit.Test; import static org.junit.Assert.*;public class AddPencilsTest {FirstDayAtSchool school = new FirstDayAtSchool();String[] bag = { "Books", "Notebooks", "Pens", "Pencils" };@Test public void testAddPencils() {System.out.println("Inside testAddPencils()"); assertArrayEquals(bag, school.addPencils());}}Now we will create a test suite so as to run the above classes together. Right-click the test source folder and create a new java class named SuiteTest.java with the following code: SuiteTest.java package com.javacodegeeks.junit;import org.junit.runner.RunWith; import org.junit.runners.Suite;@RunWith(Suite.class) @Suite.SuiteClasses({ PrepareMyBagTest.class, AddPencilsTest.class }) public class SuiteTest {}With the @Suite.SuiteClasses annotation you can define which test classes will be included in the execution. So, if you right-click the test suite and select Run As -> JUnit Test, both test classes will be executed in the order defined in the @Suite.SuiteClasses annotation. 3.6. Creating parameterized tests In this section we will see how to create parameterized tests. For this purpose, we will use the class mentioned in section 2.1 which provides a public method for adding integers. So, this will be the class to be tested. But when can a test class be considered a parameterized test class? When it fulfills all the following requirements:The class is annotated with @RunWith(Parameterized.class). 
As explained in the previous section, the @RunWith annotation enables JUnit to invoke the class it is annotated with to run the tests, instead of using the runner built into JUnit. Parameterized is a runner inside JUnit that will run the same test case with different sets of inputs. The class has a single constructor that stores the test data. The class has a static method that generates and returns test data and is annotated with the @Parameters annotation. The class has a test, which obviously means that it needs a method annotated with the @Test annotation.Now, we will create a new test class named CalculateTest.java, which will follow the guidelines mentioned above. The source code of this class follows. CalculateTest.java package com.javacodegeeks.junit;import static org.junit.Assert.assertEquals; import java.util.Arrays; import java.util.Collection;import org.junit.Test; import org.junit.runner.RunWith; import org.junit.runners.Parameterized; import org.junit.runners.Parameterized.Parameters;@RunWith(Parameterized.class) public class CalculateTest {private int expected; private int first; private int second;public CalculateTest(int expectedResult, int firstNumber, int secondNumber) { this.expected = expectedResult; this.first = firstNumber; this.second = secondNumber; }@Parameters public static Collection addedNumbers() { return Arrays.asList(new Integer[][] { { 3, 1, 2 }, { 5, 2, 3 }, { 7, 3, 4 }, { 9, 4, 5 }, }); }@Test public void sum() { Calculate add = new Calculate(); System.out.println("Addition with parameters : " + first + " and " + second); assertEquals(expected, add.sum(first, second)); } }As we can observe in the class above, it fulfills all the above requirements. The method addedNumbers annotated with @Parameters returns a Collection of arrays. Each array includes the inputs/output of one test execution. The number of elements in each array must be the same as the number of parameters in the constructor. 
So, in this specific case, each array includes three elements: two elements that represent the numbers to be added and one element for the result. If we run the CalculateTest test case, the console output will be the following: Addition with parameters : 1 and 2 Adding values: 1 + 2 Addition with parameters : 2 and 3 Adding values: 2 + 3 Addition with parameters : 3 and 4 Adding values: 3 + 4 Addition with parameters : 4 and 5 Adding values: 4 + 5As we see in the output, the test case is executed four times, which is the number of inputs in the method annotated with the @Parameters annotation. 3.7. Rules In this section we present a new feature of JUnit called Rules, which allows very flexible addition or redefinition of the behavior of each test method in a test class. For this purpose, the @Rule annotation should be used to mark public fields of a test class. Those fields should be of type MethodRule, which is an alteration in how a test method is run and reported. Multiple MethodRules can be applied to a test method. The MethodRule interface has many implementations, such as ErrorCollector, which allows execution of a test to continue after the first problem is found, ExpectedException, which allows in-test specification of expected exception types and messages, TestName, which makes the current test name available inside test methods, and many others. Besides those predefined rules, developers can create their own custom rules and use them in their test cases as they wish. Below we present the way we can use one of the existing rules named TestName in our own tests. TestName is invoked when a test is about to start. 
NameRuleTest.java package com.javacodegeeks.junit;import static org.junit.Assert.*;import org.junit.*; import org.junit.rules.TestName;public class NameRuleTest { @Rule public TestName name = new TestName();@Test public void testA() { System.out.println(name.getMethodName()); assertEquals("testA", name.getMethodName());}@Test public void testB() { System.out.println(name.getMethodName()); assertEquals("testB", name.getMethodName()); } }We can see that the @Rule annotation marks the public field name, which is of the TestName rule type. Then, we can use this name field in our tests, in this specific case to find the name of the current test method. 3.8. Categories Another new feature of JUnit is called Categories; it allows you to group certain kinds of tests together and even include or exclude groups (categories). For example, you can separate slow tests from fast tests. To assign a test case or a method to one of those categories, the @Category annotation is provided. Below is an example of how we can use this nice feature of JUnit, based on the release notes of JUnit 4.8. public interface FastTests { /* category marker */ } public interface SlowTests { /* category marker */ } Firstly, we define two categories, FastTests and SlowTests. A category can be either a class or an interface.public class A { @Test public void a() { fail(); }@Category(SlowTests.class) @Test public void b() { } }In the above code, we mark the test method b() of class A with the @Category annotation so as to indicate that this specific method belongs to the category SlowTests. So, we are able to mark not only whole classes but also some of their test methods individually.@Category({ SlowTests.class, FastTests.class }) public class B { @Test public void c() { } }In the above sample of code, we can see that the whole class B is annotated with the @Category annotation. 
Annotating a test class with the @Category annotation automatically includes all its test methods in that category. We can also see that a test class or a test method can belong to more than one category.@RunWith(Categories.class) @IncludeCategory(SlowTests.class) @SuiteClasses({ A.class, B.class }) // Note that Categories is a kind of Suite public class SlowTestSuite { // Will run A.b and B.c, but not A.a } In this sample of code, we notice that there is a test suite named SlowTestSuite. Basically, categories are a kind of suite. In this suite, we observe a new annotation called @IncludeCategory, indicating which categories will be included in the execution. In this specific case, methods belonging to the SlowTests category will be executed. Hence, the test method b() of class A will be executed, as well as the test method c() of class B, since both belong to the SlowTests category.@RunWith(Categories.class) @IncludeCategory(SlowTests.class) @ExcludeCategory(FastTests.class) @SuiteClasses({ A.class, B.class }) // Note that Categories is a kind of Suite public class SlowTestSuite { // Will run A.b, but not A.a or B.c }Finally, we change the test suite a little and add one more annotation, called @ExcludeCategory, indicating which categories will be excluded from the execution. In this specific case, only the test method b() of class A will be executed, as this is the only test method that belongs explicitly to the SlowTests category. We notice that in both cases the test method a() of class A won’t be executed, as it doesn’t belong to any category. 4. Run JUnit tests from command line You can run your JUnit tests outside Eclipse, by using the org.junit.runner.JUnitCore class. This class provides the runClasses() method, which allows you to execute one or several test classes. The return type of the runClasses() method is an object of the type org.junit.runner.Result. This object can be used to collect information about the tests. 
Also, in case there is a failed test, you can use the object org.junit.runner.notification.Failure, which holds a description of each failed test. The procedure below shows how to run your tests outside Eclipse. Create a new Java class named JunitRunner.java with the following code: JunitRunner.java package com.javacodegeeks.junit;import org.junit.runner.JUnitCore; import org.junit.runner.Result; import org.junit.runner.notification.Failure;public class JunitRunner {public static void main(String[] args) {Result result = JUnitCore.runClasses(AssertionsTest.class); for (Failure fail : result.getFailures()) { System.out.println(fail.toString()); } if (result.wasSuccessful()) { System.out.println("All tests finished successfully..."); } } }As an example, we choose to run the AssertionsTest test class.Open a command prompt and navigate to the directory where the two classes are located. Compile the test class and the runner class. C:\Users\konstantina\eclipse_luna_workspace\JUnitGuide\test\com\javacodegeeks\junit>javac -classpath "C:\Users\konstantina\Downloads\junit-4.11.jar";"C:\Users\konstantina\Downloads\hamcrest-core-1.3.jar"; AssertionsTest.java JunitRunner.java As we did in Eclipse, we should also include the JUnit library jars in our classpath. Now run the JunitRunner. C:\Users\konstantina\eclipse_luna_workspace\JUnitGuide\test\com\javacodegeeks\junit>java -classpath "C:\Users\konstantina\Downloads\junit-4.11.jar";"C:\Users\konstantina\Downloads\hamcrest-core-1.3.jar"; JunitRunnerHere is the output: All tests finished successfully...5. Conclusions This was a detailed guide about the JUnit testing framework, the most popular testing framework in Java. If you enjoyed this, then subscribe to our newsletter to enjoy weekly updates and complimentary whitepapers! Also, check out JCG Academy for more advanced training! DownloadYou can download the full source code of this guide here : JUnitGuide.zip

10 Things You Didn’t Know About Java

So, you’ve been working with Java since the very beginning? Remember the days when it was called “Oak”, when OO was still a hot topic, when C++ folks thought that Java had no chance, when Applets were still a thing? I bet that you didn’t know at least half of the following things. Let’s start this week with some great surprises about the inner workings of Java. 1. There is no such thing as a checked exception That’s right! The JVM doesn’t know any such thing, only the Java language does. Today, everyone agrees that checked exceptions were a mistake. As Bruce Eckel said in his closing keynote at GeeCON, Prague, no other language after Java has engaged in using checked exceptions, and even Java 8 no longer embraces them in the new Streams API (which can actually be a bit of a pain, when your lambdas use IO or JDBC). Do you want proof that the JVM doesn’t know such a thing? Try the following code: import java.sql.SQLException;public class Test { // No throws clause here public static void main(String[] args) { doThrow(new SQLException()); } static void doThrow(Exception e) { Test.<RuntimeException> doThrow0(e); } @SuppressWarnings("unchecked") static <E extends Exception> void doThrow0(Exception e) throws E { throw (E) e; } } Not only does this compile, it also actually throws the SQLException at run time; you don’t even need Lombok’s @SneakyThrows for that.More details about the above can be found in this article here, or here, on Stack Overflow.2. You can have method overloads differing only in return types That doesn’t compile, right? class Test { Object x() { return "abc"; } String x() { return "123"; } } Right. The Java language doesn’t allow two methods to be “override-equivalent” within the same class, regardless of their potentially differing throws clauses or return types. But wait a second. Check out the Javadoc of Class.getMethod(String, Class...). 
It reads:Note that there may be more than one matching method in a class because while the Java language forbids a class to declare multiple methods with the same signature but different return types, the Java virtual machine does not. This increased flexibility in the virtual machine can be used to implement various language features. For example, covariant returns can be implemented with bridge methods; the bridge method and the method being overridden would have the same signature but different return types.Wow, yes that makes sense. In fact, that’s pretty much what happens when you write the following: abstract class Parent<T> { abstract T x(); }class Child extends Parent<String> { @Override String x() { return "abc"; } } Check out the generated byte code in Child: // Method descriptor #15 ()Ljava/lang/String; // Stack: 1, Locals: 1 java.lang.String x(); 0 ldc <String "abc"> [16] 2 areturn Line numbers: [pc: 0, line: 7] Local variable table: [pc: 0, pc: 3] local: this index: 0 type: Child // Method descriptor #18 ()Ljava/lang/Object; // Stack: 1, Locals: 1 bridge synthetic java.lang.Object x(); 0 aload_0 [this] 1 invokevirtual Child.x() : java.lang.String [19] 4 areturn Line numbers: [pc: 0, line: 1] So, T is really just Object in byte code. That’s well understood. The synthetic bridge method is actually generated by the compiler because the return type of the Parent.x() signature may be expected to be Object at certain call sites. Adding generics without such bridge methods would not have been possible in a binary compatible way. So, changing the JVM to allow for this feature was the lesser pain (which also allows covariant overriding as a side-effect…) Clever, huh? Are you into language specifics and internals? Then find some more very interesting details here. 3. All of these are two-dimensional arrays! class Test { int[][] a() { return new int[0][]; } int[] b() [] { return new int[0][]; } int c() [][] { return new int[0][]; } } Yes, it’s true. 
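All three declaration styles really do compile to the same `int[][]` return type; here is a runnable sketch (the method bodies and class name are ours, filled in for illustration):

```java
// All three method declarations below have the identical return type int[][];
// Java allows the trailing [] pairs to appear after the parameter list.
public class WeirdArrays {

    static int[][] a() { return new int[][] {{1, 2}}; }

    static int[] b() [] { return new int[][] {{1, 2}}; }

    static int c() [][] { return new int[][] {{1, 2}}; }

    public static void main(String[] args) {
        // Each call returns an equivalent two-dimensional array.
        System.out.println(a()[0][1] + b()[0][1] + c()[0][1]); // prints 6
    }
}
```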
Even if your mental parser might not immediately understand the return type of the above methods, they are all the same! Similar to the following piece of code: class Test { int[][] a = {{}}; int[] b[] = {{}}; int c[][] = {{}}; } You think that’s crazy? Imagine using JSR-308 / Java 8 type annotations on the above. The number of syntactic possibilities explodes! @Target(ElementType.TYPE_USE) @interface Crazy {}class Test { @Crazy int[][] a1 = {{}}; int @Crazy [][] a2 = {{}}; int[] @Crazy [] a3 = {{}};@Crazy int[] b1[] = {{}}; int @Crazy [] b2[] = {{}}; int[] b3 @Crazy [] = {{}};@Crazy int c1[][] = {{}}; int c2 @Crazy [][] = {{}}; int c3[] @Crazy [] = {{}}; }Type annotations. A device whose mystery is only exceeded by its powerOr in other words:When I do that one last commit just before my 4 week vacationI let the actual exercise of finding a use-case for any of the above to you. 4. You don’t get the conditional expression So, you thought you knew it all when it comes to using the conditional expression? Let me tell you, you didn’t. Most of you will think that the below two snippets are equivalent: Object o1 = true ? new Integer(1) : new Double(2.0); … the same as this? Object o2;if (true) o2 = new Integer(1); else o2 = new Double(2.0); Nope. Let’s run a quick test System.out.println(o1); System.out.println(o2); This programme will print: 1.0 1 Yep! The conditional operator will implement numeric type promotion, if “needed”, with a very very very strong set of quotation marks on that “needed”. Because, would you expect this programme to throw a NullPointerException? Integer i = new Integer(1); if (i.equals(1)) i = null; Double d = new Double(2.0); Object o = true ? i : d; // NullPointerException! System.out.println(o);More information about the above can be found here.5. You also don’t get the compound assignment operator Quirky enough? Let’s consider the following two pieces of code: i += j; i = i + j; Intuitively, they should be equivalent, right? But guess what. 
They aren’t! The JLS specifies:A compound assignment expression of the form E1 op= E2 is equivalent to E1 = (T)((E1) op (E2)), where T is the type of E1, except that E1 is evaluated only once.This is so beautiful, I would like to cite Peter Lawrey‘s answer to this Stack Overflow question:A good example of this casting is using *= or /= byte b = 10; b *= 5.7; System.out.println(b); // prints 57 or byte b = 100; b /= 2.5; System.out.println(b); // prints 40 or char ch = '0'; ch *= 1.1; System.out.println(ch); // prints '4' or char ch = 'A'; ch *= 1.5; System.out.println(ch); // prints 'a'Now, how incredibly useful is that? I’m going to cast/multiply chars right there in my application. Because, you know… 6. Random integers Now, this is more of a puzzler. Don’t read the solution yet. See if you can find this one out yourself. When I run the following programme: for (int i = 0; i < 10; i++) { System.out.println((Integer) i); } … then “sometimes”, I get the following output: 92 221 45 48 236 183 39 193 33 84 How is that even possible?? . . . . . . spoiler… solution ahead… . . . . . OK, the solution is here and has to do with overriding the JDK’s Integer cache via reflection, and then using auto-boxing and auto-unboxing. Don’t do this at home! Or in other words, let’s think about it this way, once moreWhen I do that one last commit just before my 4 week vacation7. GOTO This is one of my favourite. Java has GOTO! Type it… int goto = 1; This will result in: Test.java:44: error: <identifier> expected int goto = 1; ^ This is because goto is an unused keyword, just in case… But that’s not the exciting part. The exciting part is that you can actually implement goto with break, continue and labelled blocks: Jumping forward label: { // do stuff if (check) break label; // do more stuff } In bytecode: 2 iload_1 [check] 3 ifeq 6 // Jumping forward 6 .. 
Jumping backward

label: do {
    // do stuff
    if (check) continue label;
    // do more stuff
    break label;
} while (true);

In bytecode:

2 iload_1 [check]
3 ifeq 9
6 goto 2          // Jumping backward
9 ..

8. Java has type aliases

In other languages (e.g. Ceylon), we can define type aliases very easily:

interface People => Set<Person>;

A People type constructed in such a way can then be used interchangeably with Set<Person>:

People? p1 = null;
Set<Person>? p2 = p1;
People? p3 = p2;

In Java, we can't define type aliases at a top level. But we can do so for the scope of a class, or a method. Let's consider that we're unhappy with the naming of Integer, Long etc, and we want shorter names: I and L. Easy:

class Test<I extends Integer> {
    <L extends Long> void x(I i, L l) {
        System.out.println(
            i.intValue() + ", " +
            l.longValue()
        );
    }
}

In the above programme, Integer is "aliased" to I for the scope of the Test class, whereas Long is "aliased" to L for the scope of the x() method. We can then call the above method like this:

new Test().x(1, 2L);

This technique is of course not to be taken seriously. In this case, Integer and Long are both final types, which means that the types I and L are effectively aliases (almost: assignment-compatibility only goes one way). If we had used non-final types (e.g. Object), then we'd be really using ordinary generics. Enough of these silly tricks. Now for something truly remarkable!

9. Some type relationships are undecidable!

OK, this will now get really funky, so take a cup of coffee and concentrate. Consider the following two types:

// A helper type. You could also just use List
interface Type<T> {}

class C implements Type<Type<? super C>> {}
class D<P> implements Type<Type<? super D<D<P>>>> {}

Now, what do the types C and D even mean? They are somewhat recursive, in a similar (yet subtly different) way that java.lang.Enum is recursive. Consider:

public abstract class Enum<E extends Enum<E>> { ...
}

With the above specification, an actual enum implementation is just mere syntactic sugar:

// This
enum MyEnum {}

// Is really just sugar for this
class MyEnum extends Enum<MyEnum> { ... }

With this in mind, let's get back to our two types. Does the following compile?

class Test {
    Type<? super C> c = new C();
    Type<? super D<Byte>> d = new D<Byte>();
}

Hard question, and Ross Tate has an answer to it. The question is in fact undecidable: Is C a subtype of Type<? super C>?

Step 0) C <?: Type<? super C>
Step 1) Type<Type<? super C>> <?: Type<? super C> (inheritance)
Step 2) C <?: Type<? super C> (checking wildcard ? super C)
Step . . . (cycle forever)

And then: Is D a subtype of Type<? super D<Byte>>?

Step 0) D<Byte> <?: Type<? super D<Byte>>
Step 1) Type<Type<? super D<D<Byte>>>> <?: Type<? super D<Byte>> (inheritance)
Step 2) D<Byte> <?: Type<? super D<D<Byte>>> (checking wildcard)
Step 3) Type<Type<? super D<D<Byte>>>> <?: Type<? super D<D<Byte>>> (inheritance)
Step 4) D<D<Byte>> <?: Type<? super D<D<Byte>>> (checking wildcard)
Step . . . (expand forever)

Try compiling the above in your Eclipse, it'll crash! (don't worry. I've filed a bug) Let this sink in…

Some type relationships in Java are undecidable!

If you're interested in more details about this peculiar Java quirk, read Ross Tate's paper "Taming Wildcards in Java's Type System" (co-authored with Alan Leung and Sorin Lerner), or also our own musings on correlating subtype polymorphism with generic polymorphism.

10. Type intersections

Java has a very peculiar feature called type intersections. You can declare a (generic) type that is in fact the intersection of two types. For instance:

class Test<T extends Serializable & Cloneable> {
}

The generic type parameter T that you're binding to instances of the class Test must implement both Serializable and Cloneable. For instance, String is not a possible bound, but Date is:

// Doesn't compile
Test<String> s = null;

// Compiles
Test<Date> d = null;

This feature has seen reuse in Java 8, where you can now cast types to ad-hoc type intersections. How is this useful?
Almost not at all, but if you want to coerce a lambda expression into such a type, there's no other way. Let's assume you have this crazy type constraint on your method:

<T extends Runnable & Serializable> void execute(T t) {}

You want a Runnable that is also Serializable just in case you'd like to execute it somewhere else and send it over the wire. Lambdas and serialisation are a bit of a quirk. Lambdas can be serialised:

You can serialize a lambda expression if its target type and its captured arguments are serializable

But even if that's true, they do not automatically implement the Serializable marker interface. To coerce them to that type, you must cast. But when you cast only to Serializable…

execute((Serializable) (() -> {}));

… then the lambda will no longer be Runnable. Egh… So… Cast it to both types:

execute((Runnable & Serializable) (() -> {}));

Conclusion

I usually say this only about SQL, but it's about time to conclude an article with the following:

Java is a device whose mystery is only exceeded by its power.

Reference: 10 Things You Didn’t Know About Java from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....

Java EE 7 / JAX-RS 2.0 – CORS on REST

Java EE REST applications usually work well out of the box on a development machine, where all server side resources and client side UIs point to "localhost". But when it comes to cross domain deployment (when the REST client is no longer on the same domain as the server that hosts the REST APIs), some work-around is required. This article is about how to make Cross Domain, better known as Cross-Origin Resource Sharing a.k.a. CORS, work with Java EE 7 / JAX-RS 2.0 REST APIs. It is not the intention of this article to discuss browser and other security related mechanisms; you may find those on other websites. What we truly want to achieve here is, again, to get things working as soon as possible.

What is the Problem? Demo Java EE 7 (JAX-RS 2.0) REST Service

In this article, I'll just code a simple Java EE 7 JAX-RS 2.0 based REST web service and client for demo purposes. Here, I'll define an interface annotating it with the URL path of the REST service, along with the accepted HTTP methods and MIME type for the HTTP response.
Codes for RESTCorsDemoResourceProxy.java:

package com.developerscrappad.intf;

import java.io.Serializable;
import javax.ejb.Local;
import javax.ws.rs.DELETE;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Local
@Path( "rest-cors-demo" )
public interface RESTCorsDemoResourceProxy extends Serializable {

    @GET
    @Path( "get-method" )
    @Produces( MediaType.APPLICATION_JSON )
    public Response getMethod();

    @PUT
    @Path( "put-method" )
    @Produces( MediaType.APPLICATION_JSON )
    public Response putMethod();

    @POST
    @Path( "post-method" )
    @Produces( MediaType.APPLICATION_JSON )
    public Response postMethod();

    @DELETE
    @Path( "delete-method" )
    @Produces( MediaType.APPLICATION_JSON )
    public Response deleteMethod();
}

Codes for RESTCorsDemoResource.java:

package com.developerscrappad.business;

import com.developerscrappad.intf.RESTCorsDemoResourceProxy;
import javax.ejb.Stateless;
import javax.json.Json;
import javax.json.JsonObject;
import javax.json.JsonObjectBuilder;
import javax.ws.rs.core.Response;

@Stateless( name = "RESTCorsDemoResource", mappedName = "ejb/RESTCorsDemoResource" )
public class RESTCorsDemoResource implements RESTCorsDemoResourceProxy {

    @Override
    public Response getMethod() {
        JsonObjectBuilder jsonObjBuilder = Json.createObjectBuilder();
        jsonObjBuilder.add( "message", "get method ok" );

        JsonObject jsonObj = jsonObjBuilder.build();

        return Response.status( Response.Status.OK ).entity( jsonObj.toString() ).build();
    }

    @Override
    public Response putMethod() {
        JsonObjectBuilder jsonObjBuilder = Json.createObjectBuilder();
        jsonObjBuilder.add( "message", "put method ok" );

        JsonObject jsonObj = jsonObjBuilder.build();

        return Response.status( Response.Status.ACCEPTED ).entity( jsonObj.toString() ).build();
    }

    @Override
    public Response postMethod() {
        JsonObjectBuilder jsonObjBuilder =
Json.createObjectBuilder();
        jsonObjBuilder.add( "message", "post method ok" );

        JsonObject jsonObj = jsonObjBuilder.build();

        return Response.status( Response.Status.CREATED ).entity( jsonObj.toString() ).build();
    }

    @Override
    public Response deleteMethod() {
        JsonObjectBuilder jsonObjBuilder = Json.createObjectBuilder();
        jsonObjBuilder.add( "message", "delete method ok" );

        JsonObject jsonObj = jsonObjBuilder.build();

        return Response.status( Response.Status.ACCEPTED ).entity( jsonObj.toString() ).build();
    }
}

The code in RESTCorsDemoResource is straightforward, but please bear in mind that this is just a demo application and its business logic serves no real purpose. The RESTCorsDemoResource class implements the method signatures defined in the interface RESTCorsDemoResourceProxy. It has several methods which process incoming HTTP requests sent with specific HTTP methods like GET, PUT, POST and DELETE, and at the end of each method, returns a simple JSON message when the processing is done. Don't forget the web.xml below, which tells the app server to treat any incoming HTTP request whose path matches "/rest-api/*" (e.g. http://<host>:<port>/AppName/rest-api/get-method/) as a REST API call.

Contents in web.xml:

<servlet>
    <servlet-name>javax.ws.rs.core.Application</servlet-name>
    <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
    <servlet-name>javax.ws.rs.core.Application</servlet-name>
    <url-pattern>/rest-api/*</url-pattern>
</servlet-mapping>

Deployment

Let's package the above in a war file, say RESTCorsDemo.war, and deploy it to a Java EE 7 compatible app server.
On my side, I'm running this on Glassfish 4.0 with default settings, which resides on a machine with the public domain developerscrappad.com. Once deployed, the URLs to the REST services should be as below:

Method                                     REST URL
RESTCorsDemoResourceProxy.getMethod()      http://developerscrappad.com/RESTCorsDemo/rest-api/rest-cors-demo/get-method/
RESTCorsDemoResourceProxy.postMethod()     http://developerscrappad.com/RESTCorsDemo/rest-api/rest-cors-demo/post-method/
RESTCorsDemoResourceProxy.putMethod()      http://developerscrappad.com/RESTCorsDemo/rest-api/rest-cors-demo/put-method/
RESTCorsDemoResourceProxy.deleteMethod()   http://developerscrappad.com/RESTCorsDemo/rest-api/rest-cors-demo/delete-method/

HTML REST Client

On my local machine, I'll just create a simple HTML page to invoke the deployed REST server resources with the below:

Codes for rest-test.html:

<!DOCTYPE html>
<html>
    <head>
        <title>REST Tester</title>
        <meta charset="UTF-8">
    </head>
    <body>
        <div id="logMsgDiv"></div>

        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js"></script>
        <script type="text/javascript">
            var $ = jQuery.noConflict();

            $.ajax( {
                cache: false,
                crossDomain: true,
                dataType: "json",
                url: "http://developerscrappad.com:8080/RESTCorsDemo/rest-api/rest-cors-demo/get-method/",
                type: "GET",
                success: function( jsonObj, textStatus, xhr ) {
                    var htmlContent = $( "#logMsgDiv" ).html( ) + "<p>" + jsonObj.message + "</p>";
                    $( "#logMsgDiv" ).html( htmlContent );
                },
                error: function( xhr, textStatus, errorThrown ) {
                    console.log( "HTTP Status: " + xhr.status );
                    console.log( "Error textStatus: " + textStatus );
                    console.log( "Error thrown: " + errorThrown );
                }
            } );

            $.ajax( {
                cache: false,
                crossDomain: true,
                dataType: "json",
                url: "http://developerscrappad.com:8080/RESTCorsDemo/rest-api/rest-cors-demo/post-method/",
                type: "POST",
                success: function( jsonObj, textStatus, xhr ) {
                    var htmlContent = $( "#logMsgDiv" ).html( ) + "<p>" + jsonObj.message + "</p>";
                    $( "#logMsgDiv" ).html( htmlContent );
                },
                error: function( xhr, textStatus, errorThrown ) {
                    console.log( "HTTP Status: " + xhr.status );
                    console.log( "Error textStatus: " + textStatus );
                    console.log( "Error thrown: " + errorThrown );
                }
            } );

            $.ajax( {
                cache: false,
                crossDomain: true,
                dataType: "json",
                url: "http://developerscrappad.com:8080/RESTCorsDemo/rest-api/rest-cors-demo/put-method/",
                type: "PUT",
                success: function( jsonObj, textStatus, xhr ) {
                    var htmlContent = $( "#logMsgDiv" ).html( ) + "<p>" + jsonObj.message + "</p>";
                    $( "#logMsgDiv" ).html( htmlContent );
                },
                error: function( xhr, textStatus, errorThrown ) {
                    console.log( "HTTP Status: " + xhr.status );
                    console.log( "Error textStatus: " + textStatus );
                    console.log( "Error thrown: " + errorThrown );
                }
            } );

            $.ajax( {
                cache: false,
                crossDomain: true,
                dataType: "json",
                url: "http://developerscrappad.com:8080/RESTCorsDemo/rest-api/rest-cors-demo/delete-method/",
                type: "DELETE",
                success: function( jsonObj, textStatus, xhr ) {
                    var htmlContent = $( "#logMsgDiv" ).html( ) + "<p>" + jsonObj.message + "</p>";
                    $( "#logMsgDiv" ).html( htmlContent );
                },
                error: function( xhr, textStatus, errorThrown ) {
                    console.log( "HTTP Status: " + xhr.status );
                    console.log( "Error textStatus: " + textStatus );
                    console.log( "Error thrown: " + errorThrown );
                }
            } );
        </script>
    </body>
</html>

Here, I'm using jQuery's ajax object for the REST service calls with the defined options. The purpose of rest-test.html is to invoke the REST service URLs with the appropriate HTTP method and obtain the response as a JSON result for later processing. I won't go into detail here, but in case you'd like to know more about the $.ajax call options available, you may visit jQuery's documentation site. What happens when we run rest-test.html?
When I run the rest-test.html file on my Firefox browser, equipped with the Firebug plugin, the screen shots below are what I get.

As you can see, when I check on the console tab, both "/rest-api/rest-cors-demo/get-method/" and "/rest-api/rest-cors-demo/post-method/" returned the right HTTP status, but I can be absolutely sure that the methods weren't executed on the remote Glassfish app server; the REST service calls were just bypassed, and on the rest-test.html client, execution went straight to the $.ajax error callbacks. What about "/rest-api/rest-cors-demo/put-method/" and "/rest-api/rest-cors-demo/delete-method/"? When I check on the Firebug Net tab, as shown in one of the screen shots, the browser sent a preflight request by firing OPTIONS as the HTTP method instead of PUT and DELETE. This phenomenon relates to both server side and browser security; I have compiled some other websites relating to this at the bottom of the page.

How To Make CORS Work in Java EE 7 / JAX-RS 2.0 (Through Interceptors)

In order to make cross domain calls, simply known as CORS, work on both the client and the server side REST resource, I have created two JAX-RS 2.0 interceptor classes: one implementing ContainerRequestFilter and another implementing ContainerResponseFilter.

Additional HTTP Headers in ContainerResponseFilter

The browser requires some additional HTTP headers to be sent back to it to further verify whether the server side resources allow cross domain / cross-origin resource sharing, and to which level of security or limitation it permits. These are the headers which work pretty well out of the box for enabling CORS:

Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
Access-Control-Allow-Methods: GET, POST, DELETE, PUT

These additional HTTP headers can be included as part of the HTTP response on its way back to the browser by adding them in a class which implements ContainerResponseFilter.
** But do take note: having "Access-Control-Allow-Origin: *" will allow all calls to be accepted regardless of the location of the client. There are ways for you to further restrict this if you only want the server side to permit REST service calls from a specific domain. Please check out the related articles at the bottom of the page.

Codes for RESTCorsDemoResponseFilter.java:

package com.developerscrappad.filter;

import java.io.IOException;
import java.util.logging.Logger;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerResponseContext;
import javax.ws.rs.container.ContainerResponseFilter;
import javax.ws.rs.container.PreMatching;
import javax.ws.rs.ext.Provider;

@Provider
@PreMatching
public class RESTCorsDemoResponseFilter implements ContainerResponseFilter {

    private final static Logger log = Logger.getLogger( RESTCorsDemoResponseFilter.class.getName() );

    @Override
    public void filter( ContainerRequestContext requestCtx, ContainerResponseContext responseCtx ) throws IOException {
        log.info( "Executing REST response filter" );

        responseCtx.getHeaders().add( "Access-Control-Allow-Origin", "*" );
        responseCtx.getHeaders().add( "Access-Control-Allow-Credentials", "true" );
        responseCtx.getHeaders().add( "Access-Control-Allow-Methods", "GET, POST, DELETE, PUT" );
    }
}

Dealing With the Browser Preflight Request (HTTP Method: OPTIONS)

The RESTCorsDemoResponseFilter class which implements ContainerResponseFilter only solves part of the issue. We still have to deal with the browser's preflight requests for the PUT and DELETE HTTP methods. The underlying preflight mechanism of most popular browsers works in such a way that they send a request with OPTIONS as the HTTP method just to test the waters.
If the server side resource acknowledges the path URL of the request and allows the PUT or DELETE HTTP method to be accepted for processing, the server side will typically have to send an HTTP status 200 (OK) response (or any sort of 20x HTTP status) back to the browser, and only then will the browser send the actual request with PUT or DELETE as the HTTP method. However, this mechanism has to be implemented manually by the developer. So, I have implemented a new class by the name of RESTCorsDemoRequestFilter, which implements ContainerRequestFilter, shown below for this mechanism.

Codes for RESTCorsDemoRequestFilter.java:

package com.developerscrappad.filter;

import java.io.IOException;
import java.util.logging.Logger;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.container.PreMatching;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;

@Provider
@PreMatching
public class RESTCorsDemoRequestFilter implements ContainerRequestFilter {

    private final static Logger log = Logger.getLogger( RESTCorsDemoRequestFilter.class.getName() );

    @Override
    public void filter( ContainerRequestContext requestCtx ) throws IOException {
        log.info( "Executing REST request filter" );

        // When the HTTP method comes in as OPTIONS, just acknowledge that it is accepted...
        if ( requestCtx.getRequest().getMethod().equals( "OPTIONS" ) ) {
            log.info( "HTTP Method (OPTIONS) - Detected!" );

            // Just send an OK signal back to the browser
            requestCtx.abortWith( Response.status( Response.Status.OK ).build() );
        }
    }
}

The Result

After the RESTCorsDemoResponseFilter and the RESTCorsDemoRequestFilter were included in the application and deployed, I reran rest-test.html on my browser. As a result, all the HTTP requests with the different HTTP methods GET, POST, PUT and DELETE from a different location were handled very well by the JAX-RS 2.0 application.
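As noted earlier, responding with "Access-Control-Allow-Origin: *" accepts calls from any origin. If you only want to permit specific domains, the header value can be computed from the request's Origin header against a whitelist. The sketch below isolates that decision in a plain class so it can be unit tested outside the container; the class, method and example origins are mine, not part of the article's demo:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class CorsOriginPolicy {

    private final Set<String> allowedOrigins;

    public CorsOriginPolicy(String... origins) {
        this.allowedOrigins = new HashSet<>(Arrays.asList(origins));
    }

    /**
     * Value for the Access-Control-Allow-Origin response header, or null
     * when the header should be omitted (the browser then blocks the read).
     */
    public String allowOriginHeader(String requestOrigin) {
        if (requestOrigin != null && allowedOrigins.contains(requestOrigin)) {
            return requestOrigin; // echo back only whitelisted origins
        }
        return null;
    }

    public static void main(String[] args) {
        CorsOriginPolicy policy = new CorsOriginPolicy("http://client.example.com");
        System.out.println(policy.allowOriginHeader("http://client.example.com")); // http://client.example.com
        System.out.println(policy.allowOriginHeader("http://evil.example.com"));   // null
    }
}
```

Inside a response filter like RESTCorsDemoResponseFilter, you would read the Origin request header, call allowOriginHeader(), and add the Access-Control-Allow-Origin header only when the result is non-null.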
The screen shots below are the successful HTTP requests made by my browser. These results from the Firebug Console and Net tabs are what should be expected:

Final Words

JAX-RS 2.0 interceptors are very handy when it comes to intercepting REST related requests and responses for scenarios like enabling CORS. If you are using a specific REST library implementation for your Java project, e.g. Jersey or RESTEasy, do check out how request and response interceptors are to be specifically implemented there; apply the above technique and you should be able to get the same result, as the principles are pretty much the same. Well, hopefully this article will help you in solving cross domain or CORS issues on your Java EE 7 / JAX-RS 2.0 REST project. Thank you for reading.

Related Articles:

http://en.wikipedia.org/wiki/Cross-origin_resource_sharing
http://www.html5rocks.com/en/tutorials/cors/
http://www.w3.org/TR/cors/
https://developer.mozilla.org/en/docs/HTTP/Access_control_CORS

Reference: Java EE 7 / JAX-RS 2.0 – CORS on REST from our JCG partner Max Lam at the A Developer’s Scrappad blog....

Revamping WSO2 API Manager Key Management Architecture around Open Standards

WSO2 API Manager is a complete solution for designing and publishing APIs, creating and managing a developer community, and for scalably routing API traffic. It leverages proven, production-ready integration, security, and governance components from the WSO2 Enterprise Service Bus, WSO2 Identity Server, and WSO2 Governance Registry. In addition, it leverages the WSO2 Business Activity Monitor for Big Data analytics, giving you instant insight into API behavior. One of the limitations we have had in API Manager so far is its tight integration with the WSO2 Identity Server, which acts as the key manager that issues and validates OAuth tokens. With the revamped architecture (still under discussion), we plan to make all integration points with the key manager extensible, so you can bring in your own OAuth authorization server. Also, we will ship the product with standard extension points, built around the corresponding OAuth 2.0 profiles. In case your authorization server deviates from the standard, you need to implement the KeyManager interface and plug in your own implementation.

API Publisher

The API developer first logs in to the API Publisher, creates an API with all the related metadata, and publishes it to the API Store and the API Gateway. The API Publisher will also publish API metadata into the external authorization server via the OAuth Resource Set Registration endpoint [1].

Sample Request:

{
    "name": "Photo Album",
    "icon_uri": "http://www.example.com/icons/flower.png",
    "scopes": [
        "http://photoz.example.com/dev/scopes/view",
        "http://photoz.example.com/dev/scopes/all"
    ],
    "type": "http://www.example.com/rsets/photoalbum"
}

name REQUIRED. A human-readable string describing a set of one or more resources. This name MAY be used by the authorization server in its resource owner user interface for the resource owner.
icon_uri OPTIONAL. A URI for a graphic icon representing the resource set.
The referenced icon MAY be used by the authorization server in its resource owner user interface for the resource owner.
scopes REQUIRED. An array providing the URI references of scope descriptions that are available for this resource set.
type OPTIONAL. A string uniquely identifying the semantics of the resource set. For example, if the resource set consists of a single resource that is an identity claim that leverages standardized claim semantics for "verified email address", the value of this property could be an identifying URI for this claim.

Sample Response:

HTTP/1.1 201 Created
Content-Type: application/json
ETag: (matches "_rev" property in returned object)
...

{
    "status": "created",
    "_id": (id of created resource set),
    "_rev": (ETag of created resource set)
}

The objective of publishing the resources to the authorization server is to make it aware of the available resources and the scopes associated with them. An identity administrator can then build the relationship between these scopes and the enterprise roles; basically, you can associate scopes with enterprise roles.

API Store

The application developer logs in to the API Store, discovers the APIs he/she wants for his application, subscribes to those, and finally creates an application. Each application is uniquely identified by its client id. There are two ways to associate a client id with an application created in the API Store:

1. The application developer brings in the client id. The application developer creates a client id out-of-band with the authorization server and associates the client id with the application he just created in the API Store. In this case, the Dynamic Client Registration endpoint of the authorization server is not used (no steps 3 & 4).
2. The API Store calls the Dynamic Client Registration endpoint of the external Authorization Server.
Once the application is created by the application developer (by grouping a set of APIs), the API Store will call the Dynamic Client Registration endpoint of the authorization server.

Sample Request (Step 3):

POST /register HTTP/1.1
Content-Type: application/json
Accept: application/json
Host: authz.server.com

{
    "client_name": "My Application",
    "redirect_uris": ["https://client.org/callback", "https://client.org/callback2"],
    "token_endpoint_auth_method": "client_secret_basic",
    "grant_types": ["authorization_code", "implicit"],
    "response_types": ["code", "token"],
    "scope": ["sc1", "sc2"]
}

client_name: Human-readable name of the client to be presented to the user during authorization. If omitted, the authorization server MAY display the raw "client_id" value to the user instead. It is RECOMMENDED that clients always send this field.
client_uri: URL of a web page providing information about the client. If present, the server SHOULD display this URL to the end user in a clickable fashion. It is RECOMMENDED that clients always send this field.
logo_uri: URL that references a logo for the client. If present, the server SHOULD display this image to the end user during approval. The value of this field MUST point to a valid image file.
scope: Space-separated list of scope values that the client can use when requesting access tokens. The semantics of values in this list are service specific. If omitted, an authorization server MAY register a client with a default set of scopes.
grant_types: Array of OAuth 2.0 grant types that the client may use.
response_types: Array of the OAuth 2.0 response types that the client may use.
token_endpoint_auth_method: The requested authentication method for the token endpoint.
redirect_uris: Array of redirection URI values for use in redirect-based flows such as the authorization code and implicit flows.
Sample Response (Step 4):

HTTP/1.1 200 OK
Content-Type: application/json
Cache-Control: no-store
Pragma: no-cache

{
    "client_id": "iuyiSgfgfhffgfh",
    "client_secret": "hkjhkiiu89hknhkjhuyjhk",
    "client_id_issued_at": 2343276600,
    "client_secret_expires_at": 2503286900,
    "redirect_uris": ["https://client.org/callback", "https://client.org/callback2"],
    "grant_types": "authorization_code",
    "token_endpoint_auth_method": "client_secret_basic"
}

OAuth Client Application

This is outside the scope of the API Manager. The client application can talk to the external authorization server via any of the grant types it supports and obtain an access token [3]. The scope parameter is optional in all the token requests; when omitted by the client, the authorization server can associate a default scope with the access token. If no scopes are used at all, then the API Gateway can do an authorization check based on other parameters associated with the OAuth client, end user, resource and action. If the client sends a set of scopes with the OAuth grant request, then these scopes will be meaningful to the authorization server only if we have published API metadata into the external authorization server via the OAuth Resource Set Registration endpoint from the API Publisher. Based on the user's role and the scopes associated with the role, the authorization server can issue the access token for only a subset of the scopes requested by the OAuth client.
Client Credentials Grant Type

Sample Request:

POST /token HTTP/1.1
Host: server.example.com
Authorization: Basic Base64Encode(Client ID:Client Secret)
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials

Sample Response:

HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8
Cache-Control: no-store
Pragma: no-cache

{
    "access_token": "2YotnFZFEjr1zCsicMWpAA",
    "token_type": "example",
    "expires_in": 3600,
    "example_parameter": "example_value"
}

Resource Owner Password Grant Type

Sample Request:

POST /token HTTP/1.1
Host: server.example.com
Authorization: Basic Base64Encode(Client ID:Client Secret)
Content-Type: application/x-www-form-urlencoded

grant_type=password&username=johndoe&password=A3ddj3w

Sample Response:

HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8
Cache-Control: no-store
Pragma: no-cache

{
    "access_token": "2YotnFZFEjr1zCsicMWpAA",
    "token_type": "example",
    "expires_in": 3600,
    "refresh_token": "tGzv3JOkF0XG5Qx2TlKWIA",
    "example_parameter": "example_value"
}

API Gateway

The API Gateway will intercept all the messages flowing between the OAuth client application and the API, and extract the access token that comes in the HTTP Authorization header. Once the access token is extracted, the API Gateway will call the Token Introspection endpoint [4] of the authorization server.

Sample Request:

POST /introspect HTTP/1.1
Host: authserver.example.com
Content-type: application/x-www-form-urlencoded
Accept: application/json
Authorization: Basic czZCaGRSa3F0Mzo3RmpmcDBaQnIxS3REUmJuZlZkbUl3

token=X3241Affw.4233-99JXJ

Sample Response:

{
    "active": true,
    "client_id": "s6BhdRkqt3",
    "scope": "read write dolphin",
    "sub": "2309fj32kl",
    "user_id": "jdoe",
    "aud": "https://example.org/protected-resource/*",
    "iss": "https://authserver.example.com/"
}

active REQUIRED. Boolean indicator of whether or not the presented token is currently active.
exp OPTIONAL.
Integer timestamp, measured in the number of seconds since January 1 1970 UTC, indicating when this token will expire.
iat OPTIONAL. Integer timestamp, measured in the number of seconds since January 1 1970 UTC, indicating when this token was originally issued.
scope OPTIONAL. A space-separated list of strings representing the scopes associated with this token.
client_id REQUIRED. Client identifier for the OAuth client that requested this token.
sub OPTIONAL. Machine-readable identifier, local to the AS, of the resource owner who authorized this token.
user_id REQUIRED. Human-readable identifier for the user who authorized this token.
aud OPTIONAL. Service-specific string identifier or list of string identifiers representing the intended audience for this token.
iss OPTIONAL. String representing the issuer of this token.
token_type OPTIONAL. Type of the token as defined in OAuth 2.0.

Once the API Gateway gets the token introspection response from the authorization server, it will check whether the client application (client id) has subscribed to the corresponding API, and will then also validate the scopes. The API Gateway knows the required scopes for the API, and the introspection response returns the scopes associated with the access token. If everything is fine, the API Gateway will generate a JWT and send it to the downstream API. The generated JWT can optionally include user attributes as well; in that case the API Gateway will talk to the UserInfo endpoint of the authorization server. Alternatively, the API Gateway can simply pass the access token through without validating it and its associated scopes; in that case the API Gateway will only do throttling and monitoring.
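The gateway-side scope check described above can be sketched as a small helper: given the space-separated scope field from the introspection response and the scopes the API requires, decide whether the call may proceed. This is an illustrative sketch of mine, not actual WSO2 gateway code:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class ScopeValidator {

    /**
     * Returns true when the space-separated "scope" value from the token
     * introspection response covers every scope the API requires.
     */
    public static boolean covers(String introspectionScope, Set<String> requiredScopes) {
        if (introspectionScope == null) {
            return requiredScopes.isEmpty();
        }
        Set<String> granted = new HashSet<>(Arrays.asList(introspectionScope.trim().split("\\s+")));
        return granted.containsAll(requiredScopes);
    }

    public static void main(String[] args) {
        Set<String> required = new HashSet<>(Arrays.asList("read", "write"));
        System.out.println(covers("read write dolphin", required)); // true
        System.out.println(covers("read", required));               // false
    }
}
```

Using the sample introspection response above, the scope "read write dolphin" would satisfy an API requiring read and write, but not one requiring a scope the token was never granted.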
Secured Endpoints

In this proposed revamped architecture, WSO2 API Manager has to talk to the following endpoints exposed by the key manager:

- Resource set registration endpoint
- Dynamic client registration endpoint
- Introspection endpoint
- UserInfo endpoint

For the first three endpoints, the API Manager will just act as a trusted system; the corresponding KeyManager implementation should know how to authenticate to those endpoints. The OpenID Connect UserInfo endpoint will be invoked at run-time with the user-provided access token. This will work only if the corresponding access token has the privileges to read the user's profile from the authorization server.

References

[1]: http://tools.ietf.org/html/draft-hardjono-oauth-resource-reg-02
[2]: http://tools.ietf.org/html/draft-ietf-oauth-dyn-reg-19
[3]: http://tools.ietf.org/html/rfc6749
[4]: http://tools.ietf.org/html/draft-richer-oauth-introspection-06

Reference: Revamping WSO2 API Manager Key Management Architecture around Open Standards from our JCG partner Prabath Siriwardena at the Facile Login blog....

Accelerated Development: Team Conflict is for Losers

It is practically guaranteed that you don't like someone on your development team, and that they have behaviors or habits you find objectionable:
- Mashable talks about the 45 most annoying office habits
- [the nest] talks about 10 Annoying Work Habits That Can Get You Fired
But as irritating as you find your co-workers, odds are: you do something that they find annoying, too. Annoyances and poor communication can lead to conflicts that range from avoidance to all-out war where people get drawn into taking sides. But consider the cost of team conflict:

| Issue                  | Productivity | Software Quality |
|------------------------|--------------|------------------|
| Internal team conflict | -10%         | -15%             |
| Management conflict    | -14%         | -19%             |

The table above shows only the average cost of conflict; some of us have been in situations that got much, much worse. Software development is not a popularity contest; you don't have to like everyone that you work with. However, if you allow your feelings of annoyance to escalate into conflict, then there is a real cost to your project and ultimately to your stress levels. All conflicts start with disagreements. The Communication Catalyst [2] talks about the following cycle:
- Disagree
- Defend
- Destroy
When you disagree with your coworkers, they don't feel listened to. They will defend their position by digging in their heels, then you will dig in yours, and the road to destruction starts. If any annoying habits are present, the conflict will escalate quickly. If things get out of hand, people start taking sides and productivity takes a major hit. In the worst conflicts this leads to the loss of key personnel, which has been measured at: productivity -16%, quality -22%. Losing key personnel, who carry comprehensive knowledge of business rules and organizational practices in their heads, often causes projects to falter and come to a standstill.
You may feel justified in starting a conflict or escalating one; however, as clever as you think you are, conflict hurts everyone, yourself included. Just remember: it is virtually impossible to start or escalate a conflict that doesn't boomerang back and bite you in the @ss!
4 Ways to Avoid or Reduce Conflicts
Things to consider to avoid conflict:
1. Don't disagree first; signal that the other person has been heard.
You will rarely agree with everything that someone else says, but start by agreeing with the part that you do agree with. [1] This at least signals that you have heard them and reduces their anxiety that you are not listening. Even mechanically echoing what they just said is a way to signal that you heard it. Once this is done, then talk about what you don't agree with.
2. Don't interrupt people.
When you are excited and thoughts are springing to mind, you may be tempted to do all the talking and stop listening; get this under control, take a breath, and let others talk. People generally consider it rude when you interrupt and will assume arrogance on your part. If you are not trying to be arrogant and someone tells you this, then wake up: you need to listen.
3. Don't be frustrated when people don't understand you.
If you really know something that others don't, then simply restating your point of view will not improve their understanding. If your friend is lost in a new shopping mall, describing your own location will not help him find you; you need to find out where he is and walk him through the steps of getting to your location. Be open to the idea that there might be something you are not seeing; with additional information you might revise your point of view.
4. Don't automatically assume that someone is insulting you.
In virtually every case where someone feels insulted, it is a knee-jerk reaction to a misunderstanding where no insult was intended.
Jumping to conclusions is not good under any circumstance, but it is lethal in social interactions. Managers should be on the lookout for the signs of conflict and clear conflicts up while they are still small; most conflicts arise from simple misunderstandings. You will notice that most organizations promote people based on their ability to work with others and resolve conflicts, rather than on raw competence. Learning how to resolve conflicts is likely your ticket to an overdue promotion…
Other articles in the "Loser" series
Want to see more sacred cows get tipped? Check out:
- Comments are for Losers
- Defects are for Losers
- Debuggers are for Losers
- Efficiency is for Losers
- Testing departments are for Losers
Make no mistake, I am the biggest "Loser" of them all. I believe that I have made every mistake in the book at least once!
References
1. Carnegie, Dale. How to Win Friends and Influence People. 1998.
2. Connolly, Mickey and Rianoshek, Richard. The Communication Catalyst. 2002.
3. Jones, Capers and Bonsignour, Olivier. The Economics of Software Quality. Addison Wesley. 2011.
4. Kahneman, Daniel. Thinking, Fast and Slow. 2011.
Side Note
My best friend also works in the tech sector, and despite being friends for almost 25 years we have very few beliefs or habits in common. There are subjects we agree on, but then we don't agree on how they should be handled. Even though we are very different people, this has never stood in the way of us doing things together. If you look around, you will see radically different people who manage to cooperate and even thrive. The key to all working relationships, especially when the other person is very different from you, is respect.
Reference: Accelerated Development: Team Conflict is for Losers from our JCG partner Dalip Mahal at the Accelerated Development blog....

Securing the Insecure

The 33-year-old Craig Spencer returned to the USA from Africa on 17th October, after treating Ebola patients. Just a few days later, he tested positive for Ebola. Everyone was concerned, especially the people around him and the New Yorkers. The mayor of New York came in front of the media and gave an assurance to the citizens: that they have the world's top medical staff as well as the most advanced medical equipment to treat Ebola, and that they had been preparing for this for many months. That, for sure, might have calmed down most of the people. Let me take another example. When my little daughter was three months old, she would go to anyone's arms. Now she is eleven months old and knows who her mother is. Whenever she finds any difficulty she keeps crying until she gets to her mother; she only feels secure in her mother's arms. When we type a password into a computer screen, we are very worried that it will be seen by our neighbors. But we never worry about our prestigious business emails being seen by the NSA. Why? Either it's totally out of our control, or we believe the NSA will only use them to tighten national security and for nothing else. What I am trying to say with all these examples is that insecurity is a perception. It's a perception triggered by undesirable behaviors, and undesirable behavior is a reflection of how much a situation deviates from correctness. It's all about perception, and about building the perception. There are no 100% secure systems on earth. Most of the cryptographic algorithms developed in the '80s and '90s are now broken due to advancements in computer processing power.
Correctness
In the computer world, most developers and operators are concerned about correctness: achieving the desired behavior. If you deposit $1,000 in your account, you expect the balance to grow by exactly $1,000.
If you send a document to a printer, you expect the output to be exactly as you see it on the computer screen. Security, on the other hand, is concerned with preventing undesirable behaviors.
C-I-A
There are three security properties whose violation can lead to undesirable behaviors: confidentiality, integrity, and availability. Confidentiality means protecting data from unintended recipients, both at rest and in transit; you achieve confidentiality by protecting transport channels and storage with encryption. Integrity is a guarantee of data's correctness and trustworthiness and the ability to detect any unauthorized modifications. It ensures that data is protected from unauthorized or unintentional alteration, modification, or deletion. The way to achieve integrity is twofold: preventive measures and detective measures, and both have to take care of data in transit as well as data at rest. Making a system available to legitimate users at all times is the ultimate goal of any system design. Security isn't the only aspect to look into, but it plays a major role in keeping the system up and running. The goal of a security design should be to make the system highly available by protecting it from illegal access attempts. Doing so is extremely challenging: attacks, especially on public endpoints, can vary from an attacker planting malware in the system to a highly organized distributed denial of service (DDoS) attack.
Attacks
In March 2011, RSA was breached. Attackers were able to steal sensitive tokens related to RSA SecurID devices, and these tokens were then used to break into companies that used SecurID. In October 2013, Adobe was breached; both source code and customer records, including passwords, were stolen. Just a month after the Adobe attack, in November 2013, Target was attacked and data for 40 million credit and debit cards was stolen. How were all these attacks possible?
Many breaches begin by exploiting a vulnerability in the system in question. A vulnerability is a defect that an attacker can exploit, through a set of carefully crafted interactions, to effect an undesired behavior. In general, a defect is a problem in either the design or the implementation of the system, such that it fails to meet its desired requirements. To be precise, a flaw is a defect in the design, and a bug is a defect in the implementation. A vulnerability is a defect that affects security-relevant behavior of a system, rather than just its correctness. The RSA 2011 breach was based on a vulnerability in the Adobe Flash player: a carefully crafted Flash program, when run by a vulnerable Flash player, allowed the attacker to execute arbitrary code on the running machine, which was in fact due to a bug in the code. To ensure security, we must eliminate bugs and design flaws and make them harder to exploit.
The Weakest Link
In 2010, it was discovered that since 2006 a gang of robbers equipped with a powerful vacuum cleaner had stolen more than 600,000 euros from the Monoprix supermarket chain in France. The most interesting thing was the way they did it: they found the weakest link in the system and attacked it. To transfer money directly into the store's cash coffers, cashiers slid tubes filled with money through pneumatic suction pipes. The robbers realized that it was sufficient to drill a hole in the pipe near the trunk and connect a vacuum cleaner to capture the money; they never had to deal with the coffer's shield. The take-away is that a proper security design should cover all the communication links in the system. Your system is no stronger than its weakest link.
Defense in Depth
A layered approach is preferred for any system being tightened for security. This is also known as defense in depth.
Most international airports, which are at high risk of terrorist attacks, follow a layered approach in their security design. On November 1, 2013, a man dressed in black walked into Los Angeles International Airport, pulled a semi-automatic rifle out of his bag, and shot his way through a security checkpoint, killing a TSA screener and wounding at least two other officers. The checkpoint was the first layer of defense; in case someone got through it, there had to be another layer to prevent the gunman from boarding a flight and taking control of it. If there had been a security layer before the TSA checkpoint, perhaps just to scan everyone entering the airport, it might have detected the weapon and probably saved the life of the TSA officer. The number of layers and the strength of each layer depend on which assets you want to protect and the threat level associated with them. Why would someone hire a security officer and also use a burglar alarm system to secure an empty garage?
Insider Attacks
Insider attacks are less powerful and less complicated, but highly effective. From the confidential US diplomatic cables leaked by WikiLeaks to Edward Snowden's disclosures about the National Security Agency's secret operations, these were all insider attacks: both Snowden and Bradley Manning were insiders who had legitimate access to the information they disclosed. Most organizations spend the majority of their security budget protecting their systems from external intruders, but approximately 60% to 80% of network misuse incidents originate from inside the network, according to the Computer Security Institute (CSI) in San Francisco. Insider attacks are identified as a growing threat in the military. To address this concern, the US Defense Advanced Research Projects Agency (DARPA) launched a project called Cyber Insider Threat (CINDER) in 2010, with the objective of developing new ways to identify and mitigate insider threats as soon as possible.
Security by Obscurity
Kerckhoffs' principle emphasizes that a system should be secured by its design, not because the design is unknown to an adversary. Microsoft's NTLM design was kept secret for some time, but when Samba engineers reverse-engineered it (to support interoperability between Unix and Windows), they discovered security vulnerabilities caused by the protocol design itself. In a proper security design, it's highly recommended not to use any custom-developed algorithms or protocols. Standards are like design patterns: they've been discussed, designed, and tested in an open forum. Every time you have to deviate from a standard, you should think twice, or more.
Software Security
Software security is only one branch of computer security: the kind that focuses on the secure design and implementation of software, using the best languages, tools, and methods. The focus of study in software security is the code. Most popular approaches to security treat software as a black box and tend to ignore software security; software security, in other words, focuses on avoiding software vulnerabilities, flaws, and bugs. While it overlaps with and complements other areas of computer security, it is distinguished by its focus on the secure system's code. This makes it a white-box approach, where other approaches, which tend to ignore the software's internals, are more black-box. Why is software security's focus on the code important? The short answer is that software defects are often the root cause of security problems, and software security aims to address these defects directly. Other forms of security tend to ignore the software and build up defenses around it. Just like the walls of a castle, these defenses are important and work up to a point. But when software defects remain, clever attackers often find a way to bypass those walls.
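As a concrete, hypothetical illustration of such a code-level defect (not an example from the article): a query assembled by string concatenation lets a crafted input rewrite the query's meaning, while any firewall in front of the server sees only ordinary-looking traffic. The class and method names below are invented for the sketch.

```java
// Hypothetical illustration of a code-level defect: building a query by
// string concatenation. A firewall or IDS sees only a normal-looking HTTP
// request; the defect lives entirely in the software.
public class InjectionDemo {

    /** Vulnerable: attacker-controlled input is pasted into the query text. */
    static String buildQueryNaively(String username) {
        return "SELECT * FROM users WHERE name = '" + username + "'";
    }

    /** Safer sketch: reject input that would escape the quoted literal.
        Real code would use parameterized queries (PreparedStatement) instead. */
    static String buildQuerySafely(String username) {
        if (username.contains("'")) {
            throw new IllegalArgumentException("invalid character in username");
        }
        return "SELECT * FROM users WHERE name = '" + username + "'";
    }

    public static void main(String[] args) {
        // A crafted input turns the WHERE clause into a tautology:
        String crafted = "x' OR '1'='1";
        System.out.println(buildQueryNaively(crafted));
    }
}
```

No perimeter device can reliably tell the crafted input from a legitimate username; only fixing the code removes the defect.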
Operating System Security
We'll now consider a few standard methods of security enforcement and see how their black-box nature presents limitations that software security techniques can address. Our first example is security enforcement by the operating system (OS). When computer security was growing up as a field in the early 1970s, the operating system was the focus. To the operating system, the code of a running program is not what is important; instead, the OS cares about what the program does, that is, its actions as it executes. These actions, called system calls, include reading or writing files, sending network packets, and running new programs. The operating system enforces security policies that limit the scope of system calls. For example, the OS can ensure that Alice's programs cannot access Bob's files, or that untrusted user programs cannot set up trusted services on standard network ports. The operating system's security is critically important, but it is not always sufficient. In particular, some of the security-relevant actions of a program are too fine-grained to be mediated as system calls, and so the software itself needs to be involved. For example, a database management system (DBMS) is a server that manages data whose security policy is specific to the application using that data. For an online store, a database may contain security-sensitive account information for customers and vendors alongside other records, such as product descriptions, which are not security-sensitive at all. It is up to the DBMS, not the OS, to implement the security policies that control access to this data. Operating systems are also unable to enforce certain kinds of security policies at all. Operating systems typically act as an execution monitor, which determines whether to allow or disallow a program action based on the current execution context and the program's prior actions.
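The execution-monitor idea can be sketched as a simple allow/deny decision over a requested action. The policy and names below are hypothetical and vastly simpler than anything a real OS does:

```java
import java.util.Map;

// Toy sketch of an OS-style execution monitor: each file-access "system call"
// is allowed or denied by comparing the calling user with the file's owner.
// A real OS policy is far richer; the names here are illustrative only.
public class FileAccessMonitor {

    // Hypothetical ownership table: file path -> owner
    static final Map<String, String> OWNER = Map.of(
            "/home/alice/notes.txt", "alice",
            "/home/bob/taxes.txt", "bob");

    /** Mediates a read request: only the owner may read the file. */
    static boolean allowRead(String user, String path) {
        return user.equals(OWNER.get(path));
    }

    public static void main(String[] args) {
        System.out.println(allowRead("alice", "/home/alice/notes.txt")); // allowed
        System.out.println(allowRead("alice", "/home/bob/taxes.txt"));   // denied
    }
}
```

Note that the monitor sees only the action (user, file), never the program's code; that is precisely the black-box property discussed above.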
However, there are some kinds of policies, such as information flow policies, that simply cannot be enforced precisely without considering potential future actions, or even non-actions. Software-level mechanisms can be brought to bear in these cases, perhaps in cooperation with the OS.
Firewalls and IDSs
Another popular sort of security enforcement mechanism is a network monitor such as a firewall or an intrusion detection system (IDS). A firewall generally works by blocking connections and packets from entering the network. For example, a firewall may block all attempts to connect to network servers except those listening on designated ports, such as TCP port 80, the standard port for web servers. Firewalls are particularly useful when there is software running on the local network that is only intended to be used by local users. An intrusion detection system provides more fine-grained control by examining the contents of network packets, looking for suspicious patterns. For example, to exploit a vulnerable server, an attacker may send a carefully crafted input to that server as a network packet; an IDS can look for such packets and filter them out to prevent the attack from taking place. Firewalls and IDSs are good at reducing the avenues of attack and preventing known attack vectors, but both devices can be worked around. For example, most firewalls will allow traffic on port 80, because they assume it is benign web traffic. But there is no guarantee that port 80 only carries web traffic, even if that's usually the case. In fact, developers invented SOAP, which stands for Simple Object Access Protocol (no longer an acronym since SOAP 1.2), partly to work around firewalls blocking ports other than port 80; SOAP permits more general-purpose message exchanges, but encodes them using the web protocol. Now, IDS patterns are more fine-grained, and more able to look at the details of what's going on, than firewall rules.
But IDSs can be fooled as well by inconsequential differences in attack patterns. Attempts to fill those gaps with more sophisticated filters can slow down traffic, and attackers can exploit such slowdowns by sending lots of problematic traffic, creating a denial of service, that is, a loss of availability. Finally, consider anti-virus scanners. These are tools that examine the contents of files, emails, and other traffic on a host machine, looking for signs of attack. They are quite similar to IDSs, but they operate on files and thus have less stringent performance requirements. They too can often be bypassed by making small changes to attack vectors.
Heartbleed
We conclude our comparison of software security with black-box security with an example: the Heartbleed bug. Heartbleed is the name given to a bug in version 1.0.1 of the OpenSSL implementation of the Transport Layer Security protocol (TLS). The bug, an example of a buffer over-read, can be exploited by getting a buggy server running OpenSSL to return portions of its memory. Let's look at how black-box security mechanisms fare against Heartbleed. Operating system enforcement and anti-virus scanners can do little to help. For the former, an exploit that steals data does so using the privileges normally granted to a TLS-enabled server, so the OS can see nothing wrong. For the latter, the exploit occurs while the TLS server is executing, leaving no obvious traces in the file system. Basic packet filters used by IDSs can look for signs of exploit packets: the FBI issued signatures for the Snort IDS soon after Heartbleed was announced. These signatures should work against basic exploits, but attackers may be able to vary the packet format, for example by chunking, to bypass them. In any case, the ramifications of a successful attack are not easily determined, because any exfiltrated data goes back over the encrypted channel.
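The shape of the Heartbleed defect can be simulated in a few lines: the handler echoes back as many bytes as the request claims to contain, without checking that claim against the payload's actual length. This is an illustrative sketch in Java, not OpenSSL's C code; the `memory` array stands in for adjacent heap contents.

```java
import java.util.Arrays;

// Simulation of the Heartbleed-style defect: the heartbeat handler trusts the
// length claimed by the client instead of the payload's real length, so it
// copies adjacent buffer contents ("secrets") into the response.
public class HeartbeatDemo {

    // One buffer holding the 4-byte payload followed by unrelated sensitive
    // data, mimicking adjacent heap memory.
    static byte[] memory = "PINGsecret-key-material".getBytes();

    /** Buggy: echoes claimedLength bytes, never validating the claim. */
    static String buggyHeartbeat(int claimedLength) {
        return new String(Arrays.copyOfRange(memory, 0, claimedLength));
    }

    /** Fixed: the response can never exceed the payload's actual length. */
    static String fixedHeartbeat(int claimedLength, int actualPayloadLength) {
        int n = Math.min(claimedLength, actualPayloadLength);
        return new String(Arrays.copyOfRange(memory, 0, n));
    }

    public static void main(String[] args) {
        // Client sends a 4-byte payload but claims it is 23 bytes long:
        System.out.println(buggyHeartbeat(23));    // leaks "secret-key-material"
        System.out.println(fixedHeartbeat(23, 4)); // returns only "PING"
    }
}
```

In Java the over-read stops at the buffer boundary with an exception; in C it silently reads whatever process memory sits beyond the payload, which is what made the real bug so damaging.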
Compared to these, software security methods aim to go straight to the source of the problem by preventing, or more completely mitigating, the defect in the software.
Threat Modeling
Threat modeling is a methodical, systematic approach to identifying possible security threats and vulnerabilities in a system deployment. First you need to identify all the assets in the system. Assets are the resources you have to protect from intruders: user records and credentials stored in an LDAP server, data in a database, files in a file system, CPU power, memory, network bandwidth, and so on. Identifying assets also means identifying all their interfaces and their interaction patterns with other system components. For example, the data stored in a database can be exposed in multiple ways: database administrators have physical access to the database servers, application developers have JDBC-level access, and end users have access through an API. Once you identify all the assets to be protected and all the related interaction patterns, you need to list all possible threats and associated attacks. Threats can be identified by observing interactions, based on the CIA triad. From the application server to the database there is a JDBC connection; a third party could eavesdrop on that connection to read or modify the data flowing through it. That's a threat. How does the application server keep the JDBC connection's username and password? If they're kept in a configuration file, anyone with access to the application server's file system can find them and then access the database over JDBC. That's another threat. The JDBC connection is protected with a username and password, which can potentially be broken by a brute-force attack: another threat. Administrators have direct access to the database servers, but how do they access those servers? If access is open for SSH via username and password, then a brute-force attack is again a likely threat.
If access is based on SSH keys, where are those keys stored? Are they kept on administrators' personal machines or uploaded to a key server? Losing SSH keys to an intruder is another threat. How about the ports? Have you opened any ports to the database servers where an intruder could telnet in and take control, or attack an open port to exhaust system resources? Can the physical machine running the database be accessed from outside the corporate network, or is it only available over VPN? All these questions lead you to identify possible threats against the database server. End users have access to the data via the API. This is a public API, exposed outside the corporate firewall. A brute-force attack is always a threat if the API is secured with HTTP Basic or Digest authentication; having broken the authentication layer, anyone could get free access to the data. Another possible threat is someone accessing the confidential data that flows through the transport channels by executing a man-in-the-middle attack. DoS is also a possible threat: an attacker can send carefully crafted, malicious, extremely large payloads to exhaust server resources. STRIDE is a popular technique for identifying the threats associated with a system in a methodical manner; it stands for Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Escalation of privileges.
Reference: Securing the Insecure from our JCG partner Prabath Siriwardena at the Facile Login blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.