
Apache Camel please explain me what these endpoint options mean

In the upcoming Apache Camel 2.15, we have made Camel smarter. It is now able to act as a teacher and explain to you how it is configured and what those options mean. The first lesson Camel can teach is how all the endpoints have been configured and what those options mean. The next lesson we are working on is to let Camel explain what the options for the EIPs are. Okay, a picture is worth a thousand words, so let me show a screenshot from Apache Karaf, where you can use the new endpoint-explain command to explain how the endpoints have been configured. The screenshot from Apache Karaf is from the SQL example, which I have installed in Karaf. This example uses a number of endpoints, among them a timer that triggers every 5 seconds. As you can see from above, the command lists the endpoint uri, timer://foo?period=5s, and then explains the option(s) below. As the uri has only one option, only one is listed. We can see that the option is named period. Its Java type is a long. The JSON schema type is integer. We can see the value is 5s, and below that is the description which explains what the value does. So why are there two types listed? The idea is that there is a type suitable for tooling etc., as it has a simpler category of types according to the JSON schema specification. The actual Java type is listed as well. The timer endpoint has many more options, so we can use the --verbose option to list all the options, as shown below. The explain-endpoint functionality is also available over JMX or as a Java API on the CamelContext. For JMX, each endpoint MBean has an explain operation that returns tabular data as above. This is illustrated in the screenshot below from jconsole. In addition, there is a generic explainEndpointJson operation on the CamelContext MBean, which allows you to explain any arbitrary uri that is provided, so you can explain endpoints that are not in use by Camel. So how does this work? During the build of the Apache Camel release, for each component we generate an HTML and JSON schema where each endpoint option is documented with its name, type, and description. For enums we list the possible values. Here is an example of such a JSON schema for the camel-sql component. Now, for this to work, the component must support uri options, which requires annotating the endpoint with @UriEndpoint. The Camel team has not migrated all the 160+ components in the Camel release yet, but we plan to migrate the components over time, and now that we have this new functionality, it certainly encourages us to migrate all the components. So where do we get the documentation? Well, it is just Java code, so all you have to do is have a getter/setter for an endpoint option, add the @UriParam annotation, and add javadoc to the setter. Yes, we grab the javadoc as the documentation. So it is documented in one place, in the source code, as standard javadoc. A minimal sketch of this style is shown below.
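For illustration, here is a hedged sketch of what such an annotated endpoint might look like. The class and option are made up for this example; the annotations (@UriEndpoint and @UriParam from org.apache.camel.spi) are real, but their exact required attributes vary between Camel versions:

    import org.apache.camel.impl.DefaultEndpoint;
    import org.apache.camel.spi.UriEndpoint;
    import org.apache.camel.spi.UriParam;

    // Hypothetical endpoint, used only to illustrate the documentation style.
    @UriEndpoint(scheme = "mytimer")
    public abstract class MyTimerEndpoint extends DefaultEndpoint {

        // The annotated option becomes part of the generated JSON/HTML schema.
        @UriParam
        private long period = 1000;

        public long getPeriod() {
            return period;
        }

        /**
         * The period in milliseconds between each firing of the timer.
         * This javadoc is harvested by the Camel build as the option's documentation.
         */
        public void setPeriod(long period) {
            this.period = period;
        }
    }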
I hope that in the future we can auto-generate the Camel website documentation for the components, so we do not have to maintain it separately in the wiki system. That will take hard work to implement, but eventually we should get there, so that every component is documented in the source code. For example, we could have a readme.md for each component containing all the component documentation, with the endpoint options injected into that readme.md file automatically by the Camel build system. Having readme.md files also allows GitHub users to browse the Camel component documentation nicely using GitHub style! So what is next? The hawtio web console will integrate this as well, so users with Camel 2.15 onwards will have that information in the web console out of the box. Then it is onwards to include documentation about the EIPs in the XML schemas for Spring/Blueprint users, and to improve the javadoc of the EIPs, as that then becomes the single source of documentation as well. This allows tooling such as Eclipse / IDEA / NetBeans and whatnot to show the documentation when people develop their Camel routes in the XML editor, as the documentation is provided in the XSD as xsd:documentation tags. We have captured some thoughts on what else to do in the CAMEL-7999 ticket. If you have any ideas on what else to improve, we would love feedback from the community. Reference: Apache Camel please explain me what these endpoint options mean from our JCG partner Claus Ibsen at the Claus Ibsen riding the Apache Camel blog.

OptaPlanner – Open benchmarks for the win

Recently, there was some commotion on Twitter because a competitor heavily restricts publicizing benchmarks of their Solver as part of their license. That might seem harsh, but I can understand the sentiment: when a competitor publicizes a benchmark report comparing our product against their own, I know we're going to get screwed. Unlike single-product benchmarking, competitive benchmarking is inherently dishonest…

Competitive benchmarking for dummies

As a competitor, you can utilize several (obvious and not so obvious) means to prove your superiority over another Solver:

Publication bias: Pick a use case which is known to work well in your Solver. Use datasets with a scale and granularity which are known to work well in your Solver. If you're really evil, benchmark multiple use cases and datasets in both Solvers and only retain those for which your Solver wins.

Expertise imbalance: Let one of your experts develop the implementations for both Solvers. Motivation: like any other company, your company only employs experts in your own technology. If he has years of recent experience in your technology, it's unlikely he has had time for any recent experience in the competing technology. So you're effectively using your jockey on someone else's horse.

Tweaking imbalance: Spend an equal amount of time on both implementations. The use case is probably already implemented in your Solver (or straightforward to implement), so you can spend most of the time budget tweaking it. You'll need to learn the competitor's Solver first, so you'll spend most of that time budget just learning the technology, which leaves no room for tweaking.

Funding: There's no need to explicitly set a desired outcome: your developer will know better than to bite the hand that feeds him.

Notice how these approaches don't require any malice (except for the evil one): it's normal to conduct a competitive benchmark like this… Furthermore, you can make the competitive benchmark comparison look more objective by sponsoring an academic research group to do the benchmark for you. Just make sure it's a research group which has been happily using your technology for years and has little or no experience with the competition.

Marketing value

The marketing value of such a benchmark report should not be underestimated. These numbers, written in black and white, which clearly show the superiority of your Solver over another Solver, make a strong argument:

To close sales deals, when in direct competition with the other Solver.
To convince developers, researchers and students to learn and use your technology.
To build a strong, long-term reputation. Benchmarks from the 90's can still affect the Google search results today, for example for "performance of Java vs C++". Such information spreads virally, and counter-claims might not.

Empirical evidence

Are all competitive benchmark reports lying? Yes, they are probably misrepresenting the truth. Should we therefore restrict users from publicizing benchmarks of our Solver? No, of course not (even if our open source licence would allow such conditions, which it does not). Computer science – like any other science – is built on empirical evidence: the promise that any experiment I publish can be repeated by others independently. If we prevent people from publishing such repeated experiments, we undermine our science. In fact, the more people who report their benchmarks, the clearer our strengths and weaknesses show.
Historically, this approach has already enabled us to diagnose and fix weaknesses, regardless of whether those were caused by our Solver or by the user's domain-specific implementation. Therefore, OptaPlanner welcomes external benchmark reports. I believe in Open Science, as strongly as I believe in Open Source. I do ask the courtesy of allowing public comments/feedback on a public report website, as well as publicizing the details (such as the Solver configuration). If you use the OptaPlanner Benchmarker toolkit (which you will find convenient), simply share the benchmarker HTML report. To run any of the benchmarks of the OptaPlanner Examples locally, simply run a *BenchmarkApp executable class, for example CloudBalancingBenchmarkApp, as sketched below.
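As a reference point, here is a hedged sketch of launching such a benchmark programmatically. The factory method names follow the OptaPlanner 6.x benchmark API and the resource path is illustrative; check the javadoc of your version before relying on them:

    import org.optaplanner.benchmark.api.PlannerBenchmark;
    import org.optaplanner.benchmark.api.PlannerBenchmarkFactory;

    public class CloudBalancingBenchmarkLauncher {
        public static void main(String[] args) {
            // Load a benchmark configuration XML from the classpath (path is illustrative)
            PlannerBenchmarkFactory benchmarkFactory = PlannerBenchmarkFactory
                    .createFromXmlResource("org/example/cloudBalancingBenchmarkConfig.xml");
            // Run every configured solver benchmark and write the HTML report
            PlannerBenchmark benchmark = benchmarkFactory.buildPlannerBenchmark();
            benchmark.benchmark();
        }
    }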
For each use case, we’re already benchmarking on many different optimization algorithms. Our benchmarker HTML report already includes many useful statistics to analyse the raw benchmark data.Reference: OptaPlanner – Open benchmarks for the win from our JCG partner Geoffrey De Smet at the OptaPlanner blog....

JUnit Tutorial for Unit Testing – The ULTIMATE Guide (PDF Download)

EDITORIAL NOTE: We have provided plenty of JUnit tutorials here at Java Code Geeks, like JUnit Getting Started Example, JUnit Using Assertions and Annotations Example, JUnit Annotations Example and so on. However, we preferred to gather all the JUnit features in one detailed guide for the convenience of the reader. We hope you like it! In order to help you master unit testing with JUnit, we have compiled a guide with all the major JUnit features and use cases. Besides studying it online, you may download the eBook in PDF format.

Table Of Contents

1. Unit testing introduction
1.1. What is unit testing?
1.2. Test coverage
1.3. Unit testing in Java
2. JUnit introduction
2.1. JUnit Simple Example using Eclipse
2.2. JUnit annotations
2.3. JUnit assertions
3. JUnit complete example using Eclipse
3.1. Initial steps
3.2. Create a java class to be tested
3.3. Create and run a JUnit test case
3.4. Using @Ignore annotation
3.5. Creating suite tests
3.6. Creating parameterized tests
3.7. Rules
3.8. Categories
4. Run JUnit tests from command line
5. Conclusions

1. Unit testing introduction

1.1. What is unit testing?

A unit can be a function, a class, a package, or a subsystem. So, the term unit testing refers to the practice of testing such small units of your code, so as to ensure that they work as expected. For example, we can test whether an output is what we expected to see given some inputs, or whether a condition is true or false. This practice helps developers discover failures in the logic behind their code and improve its quality. Also, unit testing can be used to ensure that the code will keep working as expected in case of future changes.

1.2. Test coverage

In general, the development community holds different opinions regarding the percentage of code that should be tested (test coverage). Some developers believe that the code should have 100% test coverage, while others are comfortable with a test coverage of 50% or less. In any case, you should write tests for the complex or critical parts of your code.

1.3. Unit testing in Java

The most popular testing framework in Java is JUnit. As this guide is focused on JUnit, more details about this testing framework will be presented in the next sections. Another popular testing framework in Java is TestNG.

2. JUnit introduction

JUnit is an open source testing framework which is used to write and run repeatable automated tests, so that we can be sure that our code works as expected. JUnit is widely used in industry and can be used as a stand-alone Java program (from the command line) or within an IDE such as Eclipse. JUnit provides:

Assertions for testing expected results.
Test fixtures for sharing common test data.
Test suites for easily organizing and running tests.
Graphical and textual test runners.

JUnit is used to test:

an entire object
part of an object – a method or some interacting methods
interaction between several objects

2.1. JUnit Simple Example using Eclipse

In this section we will see a simple JUnit example.
First we will present the class we would like to test:

Calculate.java

    package com.javacodegeeks.junit;

    public class Calculate {

        public int sum(int var1, int var2) {
            System.out.println("Adding values: " + var1 + " + " + var2);
            return var1 + var2;
        }
    }

In the above source code, we can notice that the class has one public method named sum(), which gets two integers as inputs, adds them and returns the result. So, we will test this method. For this purpose, we will create another class including methods that will test each one of the methods of the previous class (in this case, we have only one method to be tested). This is the most common way of usage. Of course, if a method is very complex and extended, we can have more than one test method for it. The details of creating test cases will be presented in the next sections. Below is the code of the class named CalculateTest.java, which has the role of our test class:

CalculateTest.java

    package com.javacodegeeks.junit;

    import static org.junit.Assert.*;

    import org.junit.Test;

    public class CalculateTest {

        Calculate calculation = new Calculate();
        int sum = calculation.sum(2, 5);
        int testSum = 7;

        @Test
        public void testSum() {
            System.out.println("@Test sum(): " + sum + " = " + testSum);
            assertEquals(sum, testSum);
        }
    }

Let's explain the above code. Firstly, we can see that there is a @Test annotation above the testSum() method. This annotation indicates that the public void method to which it is attached can be run as a test case. Hence, the testSum() method is the method that will test the sum() public method. We can also observe a method called assertEquals(sum, testSum). The method assertEquals([String message], object expected, object actual) takes two objects as inputs and asserts that the two objects are equal. If we run the test class, by right-clicking the test class and selecting Run As -> JUnit Test, the program output will look like this:

    Adding values: 2 + 5
    @Test sum(): 7 = 7

To see the actual result of a JUnit test, Eclipse IDE provides a JUnit window which shows the results of the tests. In this case where the test succeeds, the JUnit window does not show any errors or failures, as we can see in the image below. Now, if we change this line of code:

    int testSum = 10;

so that the integers to be tested are not equal, the output will be:

    Adding values: 2 + 5
    @Test sum(): 7 = 10

And in the JUnit window, an error will appear and this message will be displayed:

    java.lang.AssertionError: expected:<7> but was:<10>
    at com.javacodegeeks.junit.CalculateTest.testSum(CalculateTest.java:16)

2.2. JUnit annotations

In this section we will mention the basic annotations supported in JUnit 4. The table below presents a summary of those annotations:
@Test public void method() – The Test annotation indicates that the public void method to which it is attached can be run as a test case.
@Before public void method() – The Before annotation indicates that this method must be executed before each test in the class, so as to execute some preconditions necessary for the test.
@BeforeClass public static void method() – The BeforeClass annotation indicates that the static method to which it is attached must be executed once, before all tests in the class. That is useful when the test methods share computationally expensive setup (e.g. connecting to a database).
@After public void method() – The After annotation indicates that this method gets executed after the execution of each test (e.g. resetting some variables after the execution of every test, deleting temporary variables etc.).
@AfterClass public static void method() – The AfterClass annotation can be used when a method needs to be executed after all the tests in a JUnit test case class have run, so as to clean up the expensive set-up (e.g. disconnecting from a database). Attention: the method attached to this annotation (similarly to BeforeClass) must be defined as static.
@Ignore public void method() – The Ignore annotation can be used when you want to temporarily disable the execution of a specific test. Every method that is annotated with @Ignore won't be executed.

Let's see an example of a test class with some of the annotations mentioned above.

AnnotationsTest.java

    package com.javacodegeeks.junit;

    import static org.junit.Assert.*;
    import java.util.*;
    import org.junit.*;

    public class AnnotationsTest {

        private ArrayList testList;

        @BeforeClass
        public static void onceExecutedBeforeAll() {
            System.out.println("@BeforeClass: onceExecutedBeforeAll");
        }

        @Before
        public void executedBeforeEach() {
            testList = new ArrayList();
            System.out.println("@Before: executedBeforeEach");
        }

        @AfterClass
        public static void onceExecutedAfterAll() {
            System.out.println("@AfterClass: onceExecutedAfterAll");
        }

        @After
        public void executedAfterEach() {
            testList.clear();
            System.out.println("@After: executedAfterEach");
        }

        @Test
        public void EmptyCollection() {
            assertTrue(testList.isEmpty());
            System.out.println("@Test: EmptyArrayList");
        }

        @Test
        public void OneItemCollection() {
            testList.add("oneItem");
            assertEquals(1, testList.size());
            System.out.println("@Test: OneItemArrayList");
        }

        @Ignore
        public void executionIgnored() {
            System.out.println("@Ignore: This execution is ignored");
        }
    }

If we run the above test, the console output would be the following:

    @BeforeClass: onceExecutedBeforeAll
    @Before: executedBeforeEach
    @Test: EmptyArrayList
    @After: executedAfterEach
    @Before: executedBeforeEach
    @Test: OneItemArrayList
    @After: executedAfterEach
    @AfterClass: onceExecutedAfterAll

2.3. JUnit assertions

In this section we will present a number of assertion methods. All those methods are provided by the Assert class, which extends java.lang.Object, and they are useful for writing tests so as to detect failures. In the table below there is a more detailed explanation of the most commonly used assertion methods.

void assertEquals([String message], expected value, actual value) – Asserts that two values are equal. Values might be of type int, short, long, byte, char or java.lang.Object. The first argument is an optional String message.
void assertTrue([String message], boolean condition) – Asserts that a condition is true.
void assertFalse([String message], boolean condition) – Asserts that a condition is false.
void assertNotNull([String message], java.lang.Object object) – Asserts that an object is not null.
void assertNull([String message], java.lang.Object object) – Asserts that an object is null.
void assertSame([String message], java.lang.Object expected, java.lang.Object actual) – Asserts that the two objects refer to the same object.
void assertNotSame([String message], java.lang.Object unexpected, java.lang.Object actual) – Asserts that the two objects do not refer to the same object.
void assertArrayEquals([String message], expectedArray, resultArray) – Asserts that the expected array and the resulting array are equal. The type of array might be int, long, short, char, byte or java.lang.Object.

Let's see an example of some of the aforementioned assertions.
AssertionsTest.java

    package com.javacodegeeks.junit;

    import static org.junit.Assert.*;
    import org.junit.Test;

    public class AssertionsTest {

        @Test
        public void test() {
            String obj1 = "junit";
            String obj2 = "junit";
            String obj3 = "test";
            String obj4 = "test";
            String obj5 = null;
            int var1 = 1;
            int var2 = 2;
            int[] arithmetic1 = { 1, 2, 3 };
            int[] arithmetic2 = { 1, 2, 3 };

            assertEquals(obj1, obj2);
            assertSame(obj3, obj4);
            assertNotSame(obj2, obj4);
            assertNotNull(obj1);
            assertNull(obj5);
            assertTrue(var1 < var2);
            assertArrayEquals(arithmetic1, arithmetic2);
        }
    }

In the class above we can see how these assert methods work:

The assertEquals() method will return normally if the two compared objects are equal; otherwise, a failure will be displayed in the JUnit window and the test will abort.
The assertSame() and assertNotSame() methods test whether two object references point to exactly the same object.
The assertNull() and assertNotNull() methods test whether a variable is null or not null.
The assertTrue() and assertFalse() methods test whether a condition or a variable is true or false.
The assertArrayEquals() method will compare the two arrays and, if they are equal, will proceed without errors. Otherwise, a failure will be displayed in the JUnit window and the test will abort.

3. JUnit complete example using Eclipse

In this section we will show a complete example of using JUnit. We will see in detail how to create and run tests, and we will show how to use specific annotations and assertions of JUnit.

3.1. Initial Steps

Let's create a java project named JUnitGuide. In the src folder, we right-click and select New -> Package, so as to create a new package named com.javacodegeeks.junit where we will locate the class to be tested. For the test classes, it is considered good practice to create a new source folder dedicated to tests, so that the classes to be tested and the test classes will be in different source folders. For this purpose, right-click your project, select New -> Source Folder, name the new source folder test and click Finish.

Tip: Alternatively, you can create a new source folder by right-clicking your project and selecting Properties -> Java Build Path, selecting the Source tab, selecting Add Folder -> Create New Folder, writing the name test and pressing Finish.

You can easily see that there are two source folders in your project. You can also create a new package in the newly created test folder, which will be called com.javacodegeeks.junit, so that your test classes won't be located in the default package, and we are ready to start!

3.2. Create the java class to be tested

Right-click the src folder and create a new java class called FirstDayAtSchool.java. This will be the class whose public methods will be tested.

FirstDayAtSchool.java

    package com.javacodegeeks.junit;

    import java.util.Arrays;

    public class FirstDayAtSchool {

        public String[] prepareMyBag() {
            String[] schoolbag = { "Books", "Notebooks", "Pens" };
            System.out.println("My school bag contains: " + Arrays.toString(schoolbag));
            return schoolbag;
        }

        public String[] addPencils() {
            String[] schoolbag = { "Books", "Notebooks", "Pens", "Pencils" };
            System.out.println("Now my school bag contains: " + Arrays.toString(schoolbag));
            return schoolbag;
        }
    }

3.3. Create and run a JUnit test case

To create a JUnit test case for the existing class FirstDayAtSchool.java, right-click on it in the Package Explorer view and select New -> JUnit Test Case.
Change the source folder so that the class will be located in the test source folder, and ensure that the flag New JUnit 4 test is selected. Then, click Finish. If your project does not contain the JUnit library in its classpath, a message will be displayed so that you can add the JUnit library to the classpath. Below is the code of the class named FirstDayAtSchoolTest.java, which is our test class:

FirstDayAtSchoolTest.java

    package com.javacodegeeks.junit;

    import static org.junit.Assert.*;

    import org.junit.Test;

    public class FirstDayAtSchoolTest {

        FirstDayAtSchool school = new FirstDayAtSchool();
        String[] bag1 = { "Books", "Notebooks", "Pens" };
        String[] bag2 = { "Books", "Notebooks", "Pens", "Pencils" };

        @Test
        public void testPrepareMyBag() {
            System.out.println("Inside testPrepareMyBag()");
            assertArrayEquals(bag1, school.prepareMyBag());
        }

        @Test
        public void testAddPencils() {
            System.out.println("Inside testAddPencils()");
            assertArrayEquals(bag2, school.addPencils());
        }
    }

Now we can run the test case by right-clicking the test class and selecting Run As -> JUnit Test. The program output will look like this:

    Inside testPrepareMyBag()
    My school bag contains: [Books, Notebooks, Pens]
    Inside testAddPencils()
    Now my school bag contains: [Books, Notebooks, Pens, Pencils]

and in the JUnit view there will be no failures or errors. If we change one of the arrays, so that it contains more elements than expected:

    String[] bag2 = { "Books", "Notebooks", "Pens", "Pencils", "Rulers" };

and run the test class again, the JUnit view will contain a failure. Similarly, if we change one of the arrays so that it contains a different element than expected:

    String[] bag1 = { "Books", "Notebooks", "Rulers" };

and run the test class again, the JUnit view will once again contain a failure.

3.4. Using @Ignore annotation

Let's see in the above example how we can use the @Ignore annotation. In the test class FirstDayAtSchoolTest we will add the @Ignore annotation to the testAddPencils() method. In that way, we expect this test method to be ignored and not executed.

    package com.javacodegeeks.junit;

    import static org.junit.Assert.*;

    import org.junit.Ignore;
    import org.junit.Test;

    public class FirstDayAtSchoolTest {

        FirstDayAtSchool school = new FirstDayAtSchool();
        String[] bag1 = { "Books", "Notebooks", "Pens" };
        String[] bag2 = { "Books", "Notebooks", "Pens", "Pencils" };

        @Test
        public void testPrepareMyBag() {
            System.out.println("Inside testPrepareMyBag()");
            assertArrayEquals(bag1, school.prepareMyBag());
        }

        @Ignore
        @Test
        public void testAddPencils() {
            System.out.println("Inside testAddPencils()");
            assertArrayEquals(bag2, school.addPencils());
        }
    }

Indeed, this is what happens according to the output:

    Inside testPrepareMyBag()
    My school bag contains: [Books, Notebooks, Pens]

Now, we will remove the @Ignore annotation from the testAddPencils() method and annotate the whole class instead.
    package com.javacodegeeks.junit;

    import static org.junit.Assert.*;

    import org.junit.Ignore;
    import org.junit.Test;

    @Ignore
    public class FirstDayAtSchoolTest {

        FirstDayAtSchool school = new FirstDayAtSchool();
        String[] bag1 = { "Books", "Notebooks", "Pens" };
        String[] bag2 = { "Books", "Notebooks", "Pens", "Pencils" };

        @Test
        public void testPrepareMyBag() {
            System.out.println("Inside testPrepareMyBag()");
            assertArrayEquals(bag1, school.prepareMyBag());
        }

        @Test
        public void testAddPencils() {
            System.out.println("Inside testAddPencils()");
            assertArrayEquals(bag2, school.addPencils());
        }
    }

The whole test class won't be executed, so no result will be displayed in the console output or in the JUnit view.

3.5. Creating suite tests

In this section, we will see how to create suite tests. A test suite is a collection of test cases from different classes that can be run all together using the @RunWith and @Suite annotations. This is very helpful if you have many test classes and you want to run them all together instead of running each test one at a time. When a class is annotated with @RunWith, JUnit will invoke the class with which it is annotated to run the tests, instead of using the runner built into JUnit. Based on the classes of the previous sections, we can create two test classes. The one class will test the public method prepareMyBag() and the other test class will test the method addPencils(). Hence, we will eventually have the classes below:

PrepareMyBagTest.java

    package com.javacodegeeks.junit;

    import org.junit.Test;
    import static org.junit.Assert.*;

    public class PrepareMyBagTest {

        FirstDayAtSchool school = new FirstDayAtSchool();
        String[] bag = { "Books", "Notebooks", "Pens" };

        @Test
        public void testPrepareMyBag() {
            System.out.println("Inside testPrepareMyBag()");
            assertArrayEquals(bag, school.prepareMyBag());
        }
    }

AddPencilsTest.java

    package com.javacodegeeks.junit;

    import org.junit.Test;
    import static org.junit.Assert.*;

    public class AddPencilsTest {

        FirstDayAtSchool school = new FirstDayAtSchool();
        String[] bag = { "Books", "Notebooks", "Pens", "Pencils" };

        @Test
        public void testAddPencils() {
            System.out.println("Inside testAddPencils()");
            assertArrayEquals(bag, school.addPencils());
        }
    }

Now we will create a test suite so as to run the above classes together. Right-click the test source folder and create a new java class named SuiteTest.java with the following code:

SuiteTest.java

    package com.javacodegeeks.junit;

    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    @RunWith(Suite.class)
    @Suite.SuiteClasses({ PrepareMyBagTest.class, AddPencilsTest.class })
    public class SuiteTest {
    }

With the @Suite.SuiteClasses annotation you can define which test classes will be included in the execution. So, if you right-click the test suite and select Run As -> JUnit Test, both test classes will be executed in the order defined in the @Suite.SuiteClasses annotation.

3.6. Creating parameterized tests

In this section we will see how to create parameterized tests. For this purpose, we will use the class mentioned in section 2.1, which provides a public method for adding integers. So, this will be the class to be tested. But when can a test class be considered a parameterized test class? It can, when it fulfills all the following requirements:

The class is annotated with @RunWith(Parameterized.class). As explained in the previous section, the @RunWith annotation enables JUnit to invoke the class with which it is annotated to run the tests, instead of using the runner built into JUnit.
Parameterized is a runner inside JUnit that will run the same test case with different sets of inputs.
The class has a single constructor that stores the test data.
The class has a static method that generates and returns the test data, annotated with the @Parameters annotation.
The class has a test, which obviously means that it needs a method annotated with the @Test annotation.

Now, we will create a new test class named CalculateTest.java, which will follow the guidelines mentioned above. The source code of this class follows.

CalculateTest.java

    package com.javacodegeeks.junit;

    import static org.junit.Assert.assertEquals;
    import java.util.Arrays;
    import java.util.Collection;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Parameterized;
    import org.junit.runners.Parameterized.Parameters;

    @RunWith(Parameterized.class)
    public class CalculateTest {

        private int expected;
        private int first;
        private int second;

        public CalculateTest(int expectedResult, int firstNumber, int secondNumber) {
            this.expected = expectedResult;
            this.first = firstNumber;
            this.second = secondNumber;
        }

        @Parameters
        public static Collection addedNumbers() {
            return Arrays.asList(new Integer[][] {
                { 3, 1, 2 }, { 5, 2, 3 }, { 7, 3, 4 }, { 9, 4, 5 },
            });
        }

        @Test
        public void sum() {
            Calculate add = new Calculate();
            System.out.println("Addition with parameters : " + first + " and " + second);
            assertEquals(expected, add.sum(first, second));
        }
    }

As we can observe, the class above fulfills all the requirements. The method addedNumbers, annotated with @Parameters, returns a Collection of arrays. Each array includes the input/output numbers of one test execution. The number of elements in each array must be the same as the number of parameters in the constructor. So, in this specific case, each array includes three elements: two elements that represent the numbers to be added and one element for the result. If we run the CalculateTest test case, the console output will be the following:

    Addition with parameters : 1 and 2
    Adding values: 1 + 2
    Addition with parameters : 2 and 3
    Adding values: 2 + 3
    Addition with parameters : 3 and 4
    Adding values: 3 + 4
    Addition with parameters : 4 and 5
    Adding values: 4 + 5

As we see in the output, the test case is executed four times, which is the number of inputs in the method annotated with the @Parameters annotation.

3.7. Rules

In this section we present a new feature of JUnit called Rules, which allows very flexible addition or redefinition of the behavior of each test method in a test class. For this purpose, the @Rule annotation should be used to mark public fields of a test class. Those fields should be of type MethodRule, which is an alteration in how a test method is run and reported. Multiple MethodRules can be applied to a test method. The MethodRule interface has a lot of implementations, such as ErrorCollector, which allows execution of a test to continue after the first problem is found, ExpectedException, which allows in-test specification of expected exception types and messages, TestName, which makes the current test name available inside test methods, and many others. Apart from those already defined rules, developers can create their own custom rules and use them in their test cases as they wish. Below we present the way we can use one of the existing rules, named TestName, in our own tests (a sketch of the ExpectedException rule follows at the end of section 3.8). TestName is invoked when a test is about to start.
NameRuleTest.java

    package com.javacodegeeks.junit;

    import static org.junit.Assert.*;

    import org.junit.*;
    import org.junit.rules.TestName;

    public class NameRuleTest {

        @Rule
        public TestName name = new TestName();

        @Test
        public void testA() {
            System.out.println(name.getMethodName());
            assertEquals("testA", name.getMethodName());
        }

        @Test
        public void testB() {
            System.out.println(name.getMethodName());
            assertEquals("testB", name.getMethodName());
        }
    }

We can see that the @Rule annotation marks the public field name, which is of type MethodRule and specifically of type TestName. We can then use this name field in our tests, for example to find the name of the current test method.

3.8. Categories

Another new feature of JUnit is called Categories, and it allows you to group certain kinds of tests together and even include or exclude groups (categories). For example, you can separate slow tests from fast tests. To assign a test case or a method to one of those categories, the @Category annotation is provided. Below is an example of how we can use this nice feature of JUnit, based on the release notes of JUnit 4.8.

    public interface FastTests { /* category marker */ }
    public interface SlowTests { /* category marker */ }

Firstly, we define two categories, FastTests and SlowTests. A category can be either a class or an interface.

    public class A {
        @Test
        public void a() {
            fail();
        }

        @Category(SlowTests.class)
        @Test
        public void b() {
        }
    }

In the above code, we mark the test method b() of class A with the @Category annotation so as to indicate that this specific method belongs to the category SlowTests. So, we are able to mark not only whole classes but also some of their test methods individually.

    @Category({ SlowTests.class, FastTests.class })
    public class B {
        @Test
        public void c() {
        }
    }

In the above sample of code, we can see that the whole class B is annotated with the @Category annotation. Annotating a test class with the @Category annotation automatically includes all its test methods in this category. We can also see that a test class or a test method can belong to more than one category.

    @RunWith(Categories.class)
    @IncludeCategory(SlowTests.class)
    @SuiteClasses({ A.class, B.class })
    // Note that Categories is a kind of Suite
    public class SlowTestSuite {
        // Will run A.b and B.c, but not A.a
    }

In this sample of code, we notice that there is a test suite named SlowTestSuite. Basically, categories are a kind of suite. In this suite, we observe a new annotation called @IncludeCategory, indicating which categories will be included in the execution. In this specific case, methods belonging to the SlowTests category will be executed. Hence, the test method b() of class A will be executed, as well as the test method c() of class B, since both belong to the SlowTests category.

    @RunWith(Categories.class)
    @IncludeCategory(SlowTests.class)
    @ExcludeCategory(FastTests.class)
    @SuiteClasses({ A.class, B.class })
    // Note that Categories is a kind of Suite
    public class SlowTestSuite {
        // Will run A.b, but not A.a or B.c
    }

Finally, we change the test suite a little and add one more annotation, called @ExcludeCategory, indicating which categories will be excluded from the execution. In this specific case, only the test method b() of class A will be executed, as this is the only test method that belongs exclusively to the SlowTests category. We notice that in both cases the test method a() of class A won't be executed, as it doesn't belong to any category.
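As promised in section 3.7, here is a hedged sketch of the built-in ExpectedException rule. The test class and the expected message are made up for this illustration; the rule itself (org.junit.rules.ExpectedException) ships with JUnit 4:

    package com.javacodegeeks.junit;

    import org.junit.Rule;
    import org.junit.Test;
    import org.junit.rules.ExpectedException;

    public class ExpectedExceptionRuleTest {

        // Starts as a "no exception expected" rule; each test can reconfigure it.
        @Rule
        public ExpectedException thrown = ExpectedException.none();

        @Test
        public void throwsOnNegativeInput() {
            // Declare what we expect before triggering the failing call.
            thrown.expect(IllegalArgumentException.class);
            thrown.expectMessage("negative");
            // Hypothetical call that is supposed to throw.
            throw new IllegalArgumentException("negative input not allowed");
        }
    }

The test passes only if the body actually throws an IllegalArgumentException whose message contains "negative"; otherwise the rule reports a failure.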
4. Run JUnit tests from command line

You can run your JUnit tests outside Eclipse by using the org.junit.runner.JUnitCore class. This class provides the runClasses() method, which allows you to execute one or several test classes. The return type of the runClasses() method is an object of type org.junit.runner.Result. This object can be used to collect information about the tests. Also, in case there is a failed test, you can use the object org.junit.runner.notification.Failure, which holds a description of the failed test. The procedure below shows how to run your tests outside Eclipse. Create a new Java class named JunitRunner.java with the following code:

JunitRunner.java

    package com.javacodegeeks.junit;

    import org.junit.runner.JUnitCore;
    import org.junit.runner.Result;
    import org.junit.runner.notification.Failure;

    public class JunitRunner {

        public static void main(String[] args) {
            Result result = JUnitCore.runClasses(AssertionsTest.class);
            for (Failure fail : result.getFailures()) {
                System.out.println(fail.toString());
            }
            if (result.wasSuccessful()) {
                System.out.println("All tests finished successfully...");
            }
        }
    }

As an example, we choose to run the AssertionsTest test class. Open a command prompt and navigate to the directory where the two classes are located. Compile the test class and the runner class:

    C:\Users\konstantina\eclipse_luna_workspace\JUnitGuide\test\com\javacodegeeks\junit>javac -classpath "C:\Users\konstantina\Downloads\junit-4.11.jar";"C:\Users\konstantina\Downloads\hamcrest-core-1.3.jar"; AssertionsTest.java JunitRunner.java

As we did in Eclipse, we should also include the JUnit library jars in our classpath. Now run the JunitRunner:

    C:\Users\konstantina\eclipse_luna_workspace\JUnitGuide\test\com\javacodegeeks\junit>java -classpath "C:\Users\konstantina\Downloads\junit-4.11.jar";"C:\Users\konstantina\Downloads\hamcrest-core-1.3.jar"; JunitRunner

Here is the output:

    All tests finished successfully...

5. Conclusions

This was a detailed guide about the JUnit testing framework, the most popular testing framework in Java. If you enjoyed this, then subscribe to our newsletter to enjoy weekly updates and complimentary whitepapers! Also, check out JCG Academy for more advanced training!

Download: You can download the full source code of this guide here: JUnitGuide.zip

10 Things You Didn’t Know About Java

So, you've been working with Java since the very beginning? Remember the days when it was called "Oak", when OO was still a hot topic, when C++ folks thought that Java had no chance, when Applets were still a thing? I bet that you didn't know at least half of the following things. Let's start this week with some great surprises about the inner workings of Java.

1. There is no such thing as a checked exception

That's right! The JVM doesn't know any such thing, only the Java language does. Today, everyone agrees that checked exceptions were a mistake. As Bruce Eckel said in his closing keynote at GeeCON, Prague, no other language after Java has engaged in using checked exceptions, and even Java 8 no longer embraces them in the new Streams API (which can actually be a bit of a pain, when your lambdas use IO or JDBC). Do you want proof that the JVM doesn't know such a thing? Try the following code:

    public class Test {

        // No throws clause here
        public static void main(String[] args) {
            doThrow(new SQLException());
        }

        static void doThrow(Exception e) {
            Test.<RuntimeException> doThrow0(e);
        }

        @SuppressWarnings("unchecked")
        static <E extends Exception> void doThrow0(Exception e) throws E {
            throw (E) e;
        }
    }

Not only does this compile, it also actually throws the SQLException; you don't even need Lombok's @SneakyThrows for that. More details about the above can be found in this article here, or here, on Stack Overflow.

2. You can have method overloads differing only in return types

That doesn't compile, right?

    class Test {
        Object x() { return "abc"; }
        String x() { return "123"; }
    }

Right. The Java language doesn't allow two methods to be "override-equivalent" within the same class, regardless of their potentially differing throws clauses or return types. But wait a second. Check out the Javadoc of Class.getMethod(String, Class...). It reads:

Note that there may be more than one matching method in a class because while the Java language forbids a class to declare multiple methods with the same signature but different return types, the Java virtual machine does not. This increased flexibility in the virtual machine can be used to implement various language features. For example, covariant returns can be implemented with bridge methods; the bridge method and the method being overridden would have the same signature but different return types.

Wow, yes, that makes sense. In fact, that's pretty much what happens when you write the following:

    abstract class Parent<T> {
        abstract T x();
    }

    class Child extends Parent<String> {
        @Override
        String x() { return "abc"; }
    }

Check out the generated byte code in Child:

    // Method descriptor #15 ()Ljava/lang/String;
    // Stack: 1, Locals: 1
    java.lang.String x();
      0  ldc <String "abc"> [16]
      2  areturn
        Line numbers:
          [pc: 0, line: 7]
        Local variable table:
          [pc: 0, pc: 3] local: this index: 0 type: Child

    // Method descriptor #18 ()Ljava/lang/Object;
    // Stack: 1, Locals: 1
    bridge synthetic java.lang.Object x();
      0  aload_0 [this]
      1  invokevirtual Child.x() : java.lang.String [19]
      4  areturn
        Line numbers:
          [pc: 0, line: 1]

So, T is really just Object in byte code. That's well understood. The synthetic bridge method is actually generated by the compiler because the return type of the Parent.x() signature may be expected to be Object at certain call sites. Adding generics without such bridge methods would not have been possible in a binary compatible way.
So, changing the JVM to allow for this feature was the lesser pain (which also allows covariant overriding as a side-effect…). Clever, huh? Are you into language specifics and internals? Then find some more very interesting details here.

3. All of these are two-dimensional arrays!

    class Test {
        int[][] a()  { return new int[0][]; }
        int[] b() [] { return new int[0][]; }
        int c() [][] { return new int[0][]; }
    }

Yes, it's true. Even if your mental parser might not immediately understand the return type of the above methods, they are all the same! Similar to the following piece of code:

    class Test {
        int[][] a = {{}};
        int[] b[] = {{}};
        int c[][] = {{}};
    }

You think that's crazy? Imagine using JSR-308 / Java 8 type annotations on the above. The number of syntactic possibilities explodes!

    @Target(ElementType.TYPE_USE)
    @interface Crazy {}

    class Test {
        @Crazy int[][] a1 = {{}};
        int @Crazy [][] a2 = {{}};
        int[] @Crazy [] a3 = {{}};

        @Crazy int[] b1[] = {{}};
        int @Crazy [] b2[] = {{}};
        int[] b3 @Crazy [] = {{}};

        @Crazy int c1[][] = {{}};
        int c2 @Crazy [][] = {{}};
        int c3[] @Crazy [] = {{}};
    }

Type annotations. A device whose mystery is only exceeded by its power. Or in other words: when I do that one last commit just before my 4 week vacation. I leave the actual exercise of finding a use case for any of the above to you.

4. You don't get the conditional expression

So, you thought you knew it all when it comes to using the conditional expression? Let me tell you, you didn't. Most of you will think that the below two snippets are equivalent:

    Object o1 = true ? new Integer(1) : new Double(2.0);

… the same as this?

    Object o2;

    if (true)
        o2 = new Integer(1);
    else
        o2 = new Double(2.0);

Nope. Let's run a quick test:

    System.out.println(o1);
    System.out.println(o2);

This programme will print:

    1.0
    1

Yep! The conditional operator will implement numeric type promotion, if "needed", with a very very very strong set of quotation marks on that "needed". Because, would you expect this programme to throw a NullPointerException?

    Integer i = new Integer(1);
    if (i.equals(1))
        i = null;
    Double d = new Double(2.0);
    Object o = true ? i : d; // NullPointerException!
    System.out.println(o);

More information about the above can be found here.

5. You also don't get the compound assignment operator

Quirky enough? Let's consider the following two pieces of code:

    i += j;
    i = i + j;

Intuitively, they should be equivalent, right? But guess what. They aren't! The JLS specifies:

A compound assignment expression of the form E1 op= E2 is equivalent to E1 = (T)((E1) op (E2)), where T is the type of E1, except that E1 is evaluated only once.

This is so beautiful, I would like to cite Peter Lawrey's answer to this Stack Overflow question:

A good example of this casting is using *= or /=

    byte b = 10;
    b *= 5.7;
    System.out.println(b); // prints 57

or

    byte b = 100;
    b /= 2.5;
    System.out.println(b); // prints 40

or

    char ch = '0';
    ch *= 1.1;
    System.out.println(ch); // prints '4'

or

    char ch = 'A';
    ch *= 1.5;
    System.out.println(ch); // prints 'a'

Now, how incredibly useful is that? I'm going to cast/multiply chars right there in my application. Because, you know…

6. Random integers

Now, this is more of a puzzler. Don't read the solution yet. See if you can find this one out yourself. When I run the following programme:

    for (int i = 0; i < 10; i++) {
        System.out.println((Integer) i);
    }

… then "sometimes", I get the following output:

    92
    221
    45
    48
    236
    183
    39
    193
    33
    84

How is that even possible??

.
.
.

spoiler… solution ahead…

.
.
.

OK, the solution has to do with overriding the JDK's Integer cache via reflection, and then using auto-boxing and auto-unboxing. Don't do this at home!
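For the curious, here is a hedged sketch of the trick. It relies on JDK internals (the java.lang.Integer$IntegerCache class and its cache field) and only works on pre-module-system JDKs such as Java 7/8, with no security manager installed:

    import java.lang.reflect.Field;
    import java.util.Random;

    public class RandomIntegers {
        public static void main(String[] args) throws Exception {
            // Auto-boxing of values in -128..127 returns entries from an internal cache...
            Field cacheField = Class.forName("java.lang.Integer$IntegerCache")
                                    .getDeclaredField("cache");
            cacheField.setAccessible(true);
            Integer[] cache = (Integer[]) cacheField.get(null);

            // ...so overwriting those entries "poisons" every subsequent boxing.
            // Index 128 corresponds to the value 0.
            Random random = new Random();
            for (int i = 0; i < 10; i++) {
                cache[128 + i] = new Integer(random.nextInt(300));
            }

            for (int i = 0; i < 10; i++) {
                System.out.println((Integer) i); // prints the poisoned values
            }
        }
    }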
Or in other words, let's think about it this way, once more: when I do that one last commit just before my 4 week vacation.

7. GOTO

This is one of my favourites. Java has GOTO! Type it…

    int goto = 1;

This will result in:

    Test.java:44: error: <identifier> expected
        int goto = 1;
            ^

This is because goto is an unused keyword, just in case… But that's not the exciting part. The exciting part is that you can actually implement goto with break, continue and labelled blocks:

Jumping forward:

    label: {
        // do stuff
        if (check) break label;
        // do more stuff
    }

In bytecode:

    2  iload_1 [check]
    3  ifeq 6       // Jumping forward
    6  ..

Jumping backward:

    label: do {
        // do stuff
        if (check) continue label;
        // do more stuff
        break label;
    } while (true);

In bytecode:

    2  iload_1 [check]
    3  ifeq 9
    6  goto 2       // Jumping backward
    9  ..

8. Java has type aliases

In other languages (e.g. Ceylon), we can define type aliases very easily:

    interface People => Set<Person>;

A People type constructed in such a way can then be used interchangeably with Set<Person>:

    People? p1 = null;
    Set<Person>? p2 = p1;
    People? p3 = p2;

In Java, we can't define type aliases at a top level. But we can do so for the scope of a class, or a method. Let's consider that we're unhappy with the namings of Integer, Long etc., and we want shorter names: I and L. Easy:

    class Test<I extends Integer> {
        <L extends Long> void x(I i, L l) {
            System.out.println(
                i.intValue() + ", " + l.longValue()
            );
        }
    }

In the above programme, Integer is "aliased" to I for the scope of the Test class, whereas Long is "aliased" to L for the scope of the x() method. We can then call the above method like this:

    new Test().x(1, 2L);

This technique is of course not to be taken seriously. In this case, Integer and Long are both final types, which means that the types I and L are effectively aliases (almost: assignment-compatibility only goes one way). If we had used non-final types (e.g. Object), then we'd really be using ordinary generics. Enough of these silly tricks. Now for something truly remarkable!

9. Some type relationships are undecidable!

OK, this will now get really funky, so take a cup of coffee and concentrate. Consider the following two types:

    // A helper type. You could also just use List
    interface Type<T> {}

    class C implements Type<Type<? super C>> {}
    class D<P> implements Type<Type<? super D<D<P>>>> {}

Now, what do the types C and D even mean? They are somewhat recursive, in a similar (yet subtly different) way to how java.lang.Enum is recursive. Consider:

    public abstract class Enum<E extends Enum<E>> { ... }

With the above specification, an actual enum implementation is mere syntactic sugar:

    // This
    enum MyEnum {}

    // Is really just sugar for this
    class MyEnum extends Enum<MyEnum> { ... }

With this in mind, let's get back to our two types. Does the following compile?

    class Test {
        Type<? super C> c = new C();
        Type<? super D<Byte>> d = new D<Byte>();
    }

Hard question, and Ross Tate has an answer to it. The question is in fact undecidable. Is C a subtype of Type<? super C>?

    Step 0) C <?: Type<? super C>
    Step 1) Type<Type<? super C>> <?: Type<? super C> (inheritance)
    Step 2) C <?: Type<? super C> (checking wildcard ? super C)
    Step . . . (cycle forever)

And then: Is D a subtype of Type<? super D<Byte>>?
    Step 0) D<Byte> <?: Type<? super D<Byte>>
    Step 1) Type<Type<? super D<D<Byte>>>> <?: Type<? super D<Byte>> (inheritance)
    Step 2) D<Byte> <?: Type<? super D<D<Byte>>> (checking wildcard ? super D<Byte>)
    Step 3) Type<Type<? super D<D<D<Byte>>>>> <?: Type<? super D<D<Byte>>> (inheritance)
    Step 4) D<D<Byte>> <?: Type<? super D<D<D<Byte>>>> (checking wildcard ? super D<D<Byte>>)
    Step . . . (expand forever)

Try compiling the above in your Eclipse, it'll crash! (Don't worry. I've filed a bug.) Let this sink in: some type relationships in Java are undecidable! If you're interested in more details about this peculiar Java quirk, read Ross Tate's paper "Taming Wildcards in Java's Type System" (co-authored with Alan Leung and Sorin Lerner), or also our own musings on correlating subtype polymorphism with generic polymorphism.

10. Type intersections

Java has a very peculiar feature called type intersections. You can declare a (generic) type that is in fact the intersection of two types. For instance:

    class Test<T extends Serializable & Cloneable> {
    }

The generic type parameter T that you're binding to instances of the class Test must implement both Serializable and Cloneable. For instance, String is not a possible bound, but Date is:

    // Doesn't compile
    Test<String> s = null;

    // Compiles
    Test<Date> d = null;

This feature has seen reuse in Java 8, where you can now cast types to ad-hoc type intersections. How is this useful? Almost not at all, but if you want to coerce a lambda expression into such a type, there's no other way. Let's assume you have this crazy type constraint on your method:

    <T extends Runnable & Serializable> void execute(T t) {}

You want a Runnable that is also Serializable, just in case you'd like to execute it somewhere else and send it over the wire. Lambdas and serialisation are a bit of a quirk. Lambdas can be serialised:

You can serialize a lambda expression if its target type and its captured arguments are serializable.

But even if that's true, they do not automatically implement the Serializable marker interface. To coerce them to that type, you must cast. But when you cast only to Serializable…

    execute((Serializable) (() -> {}));

… then the lambda will no longer be Runnable. Egh… So… Cast it to both types:

    execute((Runnable & Serializable) (() -> {}));

Conclusion

I usually say this only about SQL, but it's about time to conclude an article with the following: Java is a device whose mystery is only exceeded by its power.

Reference: 10 Things You Didn't Know About Java from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog.

Java EE 7 / JAX-RS 2.0 – CORS on REST

Java EE REST applications usually work well out of the box on a development machine, where all server side resources and client side UIs point to "localhost" or 127.0.0.1. But when it comes to cross-domain deployment (when the REST client is no longer on the same domain as the server that hosts the REST APIs), some work-around is required. This article is about how to make Cross Domain, better known as Cross-Origin Resource Sharing a.k.a. CORS, work with Java EE 7 / JAX-RS 2.0 REST APIs. It is not the intention of this article to discuss browser and other security related mechanisms; you may find those on other websites. What we truly want to achieve here is, again, to get things working as soon as possible.

What is the Problem? Demo Java EE 7 (JAX-RS 2.0) REST Service

In this article, I'll just code a simple Java EE 7 JAX-RS 2.0 based REST web service and client for demo purposes. Here, I'll define an interface annotated with the url path of the REST service, along with the accepted HTTP methods and the MIME type of the HTTP response.

Codes for RESTCorsDemoResourceProxy.java:

    package com.developerscrappad.intf;

    import java.io.Serializable;
    import javax.ejb.Local;
    import javax.ws.rs.DELETE;
    import javax.ws.rs.GET;
    import javax.ws.rs.POST;
    import javax.ws.rs.PUT;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;
    import javax.ws.rs.core.Response;

    @Local
    @Path( "rest-cors-demo" )
    public interface RESTCorsDemoResourceProxy extends Serializable {

        @GET
        @Path( "get-method" )
        @Produces( MediaType.APPLICATION_JSON )
        public Response getMethod();

        @PUT
        @Path( "put-method" )
        @Produces( MediaType.APPLICATION_JSON )
        public Response putMethod();

        @POST
        @Path( "post-method" )
        @Produces( MediaType.APPLICATION_JSON )
        public Response postMethod();

        @DELETE
        @Path( "delete-method" )
        @Produces( MediaType.APPLICATION_JSON )
        public Response deleteMethod();
    }

Codes for RESTCorsDemoResource.java:

    package com.developerscrappad.business;

    import com.developerscrappad.intf.RESTCorsDemoResourceProxy;
    import javax.ejb.Stateless;
    import javax.json.Json;
    import javax.json.JsonObject;
    import javax.json.JsonObjectBuilder;
    import javax.ws.rs.core.Response;

    @Stateless( name = "RESTCorsDemoResource", mappedName = "ejb/RESTCorsDemoResource" )
    public class RESTCorsDemoResource implements RESTCorsDemoResourceProxy {

        @Override
        public Response getMethod() {
            JsonObjectBuilder jsonObjBuilder = Json.createObjectBuilder();
            jsonObjBuilder.add( "message", "get method ok" );
            JsonObject jsonObj = jsonObjBuilder.build();
            return Response.status( Response.Status.OK ).entity( jsonObj.toString() ).build();
        }

        @Override
        public Response putMethod() {
            JsonObjectBuilder jsonObjBuilder = Json.createObjectBuilder();
            jsonObjBuilder.add( "message", "put method ok" );
            JsonObject jsonObj = jsonObjBuilder.build();
            return Response.status( Response.Status.ACCEPTED ).entity( jsonObj.toString() ).build();
        }

        @Override
        public Response postMethod() {
            JsonObjectBuilder jsonObjBuilder = Json.createObjectBuilder();
            jsonObjBuilder.add( "message", "post method ok" );
            JsonObject jsonObj = jsonObjBuilder.build();
            return Response.status( Response.Status.CREATED ).entity( jsonObj.toString() ).build();
        }

        @Override
        public Response deleteMethod() {
            JsonObjectBuilder jsonObjBuilder = Json.createObjectBuilder();
            jsonObjBuilder.add( "message", "delete method ok" );
            JsonObject jsonObj = jsonObjBuilder.build();
            return Response.status( Response.Status.ACCEPTED ).entity( jsonObj.toString() ).build();
        }
    }
The code in RESTCorsDemoResource is straightforward, but please bear in mind that this is just a demo application and its business logic serves no real purpose. The RESTCorsDemoResource class implements the method signatures defined in the interface RESTCorsDemoResourceProxy. It has several methods which process incoming HTTP requests through specific HTTP methods like GET, PUT, POST and DELETE, and at the end of each method a simple JSON message is returned when the processing is done. Not forgetting the web.xml below, which tells the app server to treat any incoming HTTP request whose path matches "/rest-api/*" as a REST API call (e.g. http://<host>:<port>/AppName/rest-api/get-method/).

Contents in web.xml:

<servlet>
    <servlet-name>javax.ws.rs.core.Application</servlet-name>
    <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
    <servlet-name>javax.ws.rs.core.Application</servlet-name>
    <url-pattern>/rest-api/*</url-pattern>
</servlet-mapping>

Deployment

Let's package the above in a WAR file, say RESTCorsDemo.war, and deploy it to a Java EE 7 compatible app server. On my side, I'm running this on GlassFish 4.0 with default settings, which resides on a machine with the public domain developerscrappad.com. Once deployed, the URLs to the REST services should be as below:

Method                                       REST URL
RESTCorsDemoResourceProxy.getMethod()        http://developerscrappad.com/RESTCorsDemo/rest-api/rest-cors-demo/get-method/
RESTCorsDemoResourceProxy.postMethod()       http://developerscrappad.com/RESTCorsDemo/rest-api/rest-cors-demo/post-method/
RESTCorsDemoResourceProxy.putMethod()        http://developerscrappad.com/RESTCorsDemo/rest-api/rest-cors-demo/put-method/
RESTCorsDemoResourceProxy.deleteMethod()     http://developerscrappad.com/RESTCorsDemo/rest-api/rest-cors-demo/delete-method/

HTML REST Client

On my local machine, I'll just create a simple HTML page to invoke the deployed REST server resources, with the below:

Codes for rest-test.html:

<!DOCTYPE html>
<html>
<head>
    <title>REST Tester</title>
    <meta charset="UTF-8">
</head>
<body>
    <div id="logMsgDiv"></div>

    <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js"></script>
    <script type="text/javascript">
        var $ = jQuery.noConflict();

        $.ajax( {
            cache: false,
            crossDomain: true,
            dataType: "json",
            url: "http://developerscrappad.com:8080/RESTCorsDemo/rest-api/rest-cors-demo/get-method/",
            type: "GET",
            success: function( jsonObj, textStatus, xhr ) {
                var htmlContent = $( "#logMsgDiv" ).html( ) + "<p>" + jsonObj.message + "</p>";
                $( "#logMsgDiv" ).html( htmlContent );
            },
            error: function( xhr, textStatus, errorThrown ) {
                console.log( "HTTP Status: " + xhr.status );
                console.log( "Error textStatus: " + textStatus );
                console.log( "Error thrown: " + errorThrown );
            }
        } );

        $.ajax( {
            cache: false,
            crossDomain: true,
            dataType: "json",
            url: "http://developerscrappad.com:8080/RESTCorsDemo/rest-api/rest-cors-demo/post-method/",
            type: "POST",
            success: function( jsonObj, textStatus, xhr ) {
                var htmlContent = $( "#logMsgDiv" ).html( ) + "<p>" + jsonObj.message + "</p>";
                $( "#logMsgDiv" ).html( htmlContent );
            },
            error: function( xhr, textStatus, errorThrown ) {
                console.log( "HTTP Status: " + xhr.status );
                console.log( "Error textStatus: " + textStatus );
                console.log( "Error thrown: " + errorThrown );
            }
        } );

        $.ajax( {
            cache: false,
            crossDomain: true,
            dataType: "json",
            url: "http://developerscrappad.com:8080/RESTCorsDemo/rest-api/rest-cors-demo/put-method/",
            type: "PUT",
            success: function( jsonObj, textStatus, xhr ) {
                var htmlContent = $( "#logMsgDiv" ).html( ) + "<p>" + jsonObj.message + "</p>";
                $( "#logMsgDiv" ).html( htmlContent );
            },
            error: function( xhr, textStatus, errorThrown ) {
                console.log( "HTTP Status: " + xhr.status );
                console.log( "Error textStatus: " + textStatus );
                console.log( "Error thrown: " + errorThrown );
            }
        } );

        $.ajax( {
            cache: false,
            crossDomain: true,
            dataType: "json",
            url: "http://developerscrappad.com:8080/RESTCorsDemo/rest-api/rest-cors-demo/delete-method/",
            type: "DELETE",
            success: function( jsonObj, textStatus, xhr ) {
                var htmlContent = $( "#logMsgDiv" ).html( ) + "<p>" + jsonObj.message + "</p>";
                $( "#logMsgDiv" ).html( htmlContent );
            },
            error: function( xhr, textStatus, errorThrown ) {
                console.log( "HTTP Status: " + xhr.status );
                console.log( "Error textStatus: " + textStatus );
                console.log( "Error thrown: " + errorThrown );
            }
        } );
    </script>
</body>
</html>
"HTTP Status: " + xhr.status ); console.log( "Error textStatus: " + textStatus ); console.log( "Error thrown: " + errorThrown ); } } );   $.ajax( { cache: false, crossDomain: true, dataType: "json", url: "http://developerscrappad.com:8080/RESTCorsDemo/rest-api/rest-cors-demo/delete-method/", type: "DELETE", success: function( jsonObj, textStatus, xhr ) { var htmlContent = $( "#logMsgDiv" ).html( ) + "<p>" + jsonObj.message + "</p>"; $( "#logMsgDiv" ).html( htmlContent ); }, error: function( xhr, textStatus, errorThrown ) { console.log( "HTTP Status: " + xhr.status ); console.log( "Error textStatus: " + textStatus ); console.log( "Error thrown: " + errorThrown ); } } ); </script> </body> </html>Here, I’m using jQuery’s ajax object for REST Services call with the defined option. The purpose of the rest-test.html is to invoke the REST Service URLs with the appropriate HTTP method and obtain the response as JSON result for processing later. I won’t go into detail here but in case if you like to know more about the $.ajax call options available, you may visit jQuery’s documentation site on this. What happens when we run rest-test.html? When I run the rest-test.html file on my Firefox browser, equip with the Firebug plugin, the below screen shots are what I get.As you can see, when I check on the console tab, both the “/rest-api/rest-cors-demo/get-method/” and the “/rest-api/rest-cors-demo/post-method/” returned the right HTTP Status, but I can be absolutely sure that the method wasn’t executed on the remote Glassfish app server, the REST service calls were just bypassed, on the rest-test.html client, it just went straight to the $.ajax error callbacks. What about the “/rest-api/rest-cors-demo/put-method/” and the “/rest-api/rest-cors-demo/delete-method/“, when I check on the Firebug Net Tab as shown on one of the screen shots, the browser sent a Preflight Request by firing OPTIONS as the HTTP Method instead of the PUT and the DELETE. This phenomenon relates to both server side and browser security; I have compiled some other websites relating this at the bottom of the page. How To Make CORS Works in Java EE 7 / JAX-RS 2.0 (Through Interceptors) In order to make cross domain calls or simply known as CORS work on both the client and the server side REST resource, I have created two JAX-RS 2.0 interceptor classes, one implementing the ContainerRequestFilter and another implementing the ContainerResponseFilter. Additional HTTP Headers in ContainerResponseFilter The browser will require some additional HTTP headers to be responded back to it to further verify whether the server side resources allow cross domain / cross-origin resource sharing and to which level of security or limitation it permits. These are the headers which work pretty well out of the box for enabling CORS. Access-Control-Allow-Origin: * Access-Control-Allow-Credentials: true Access-Control-Allow-Methods: GET, POST, DELETE, PUT These sets of of additional HTTP Headers that could be included as part of the HTTP response when it goes back to the browser by having it included in a class which implements ContainerResponseFilter. ** But do take note: Having “Access-Control-Allow-Origin: *” will allow all calls to be accepted regardless of the location of the client. There are ways for you to further restrict this is you only want the server side to permit REST service calls from only a specific domain. Please check out the related articles at the bottom of the page. 
Codes for RESTCorsDemoResponseFilter.java:

package com.developerscrappad.filter;

import java.io.IOException;
import java.util.logging.Logger;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerResponseContext;
import javax.ws.rs.container.ContainerResponseFilter;
import javax.ws.rs.container.PreMatching;
import javax.ws.rs.ext.Provider;

@Provider
@PreMatching
public class RESTCorsDemoResponseFilter implements ContainerResponseFilter {

    private final static Logger log = Logger.getLogger( RESTCorsDemoResponseFilter.class.getName() );

    @Override
    public void filter( ContainerRequestContext requestCtx, ContainerResponseContext responseCtx ) throws IOException {
        log.info( "Executing REST response filter" );

        responseCtx.getHeaders().add( "Access-Control-Allow-Origin", "*" );
        responseCtx.getHeaders().add( "Access-Control-Allow-Credentials", "true" );
        responseCtx.getHeaders().add( "Access-Control-Allow-Methods", "GET, POST, DELETE, PUT" );
    }
}

Dealing With the Browser Preflight Request (HTTP Method: OPTIONS)

The RESTCorsDemoResponseFilter class, which implements ContainerResponseFilter, only solves part of the issue. We still have to deal with the browser's preflight requests for the PUT and DELETE HTTP methods. The underlying preflight mechanism of most popular browsers works in such a way that the browser first sends a request with OPTIONS as the HTTP method, just to test the waters. If the server-side resource acknowledges the request's URL path and allows the PUT or DELETE HTTP method to be accepted for processing, the server will typically have to send an HTTP status 200 (OK) response (or any sort of 20x HTTP status) back to the browser before the browser sends the actual request with the PUT or DELETE method. However, this mechanism has to be implemented manually by the developer. So, I have implemented a new class by the name of RESTCorsDemoRequestFilter, which implements ContainerRequestFilter, shown below.

Codes for RESTCorsDemoRequestFilter.java:

package com.developerscrappad.filter;

import java.io.IOException;
import java.util.logging.Logger;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.container.PreMatching;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;

@Provider
@PreMatching
public class RESTCorsDemoRequestFilter implements ContainerRequestFilter {

    private final static Logger log = Logger.getLogger( RESTCorsDemoRequestFilter.class.getName() );

    @Override
    public void filter( ContainerRequestContext requestCtx ) throws IOException {
        log.info( "Executing REST request filter" );

        // When the HTTP method comes in as OPTIONS, just acknowledge that it is accepted
        if ( requestCtx.getRequest().getMethod().equals( "OPTIONS" ) ) {
            log.info( "HTTP Method (OPTIONS) - Detected!" );

            // Just send an OK signal back to the browser
            requestCtx.abortWith( Response.status( Response.Status.OK ).build() );
        }
    }
}

The Result

After the RESTCorsDemoResponseFilter and the RESTCorsDemoRequestFilter were included in the application and deployed, I reran rest-test.html in my browser. As a result, all the HTTP requests with the different HTTP methods GET, POST, PUT and DELETE from a different location were handled very well by the JAX-RS 2.0 application. The screenshots below show the successful HTTP requests made by my browser; these results in the Firebug Console and Net tab are what should be expected.
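One last implementation note before the final words: the two filter classes above are annotated with @Provider, which JAX-RS 2.0 runtimes typically discover automatically. If your deployment registers resources explicitly through a javax.ws.rs.core.Application subclass instead, the filters have to be added there too. A minimal sketch, assuming a hypothetical RestApplication class that is not part of the original demo:

package com.developerscrappad;

import java.util.HashSet;
import java.util.Set;
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;
import com.developerscrappad.business.RESTCorsDemoResource;
import com.developerscrappad.filter.RESTCorsDemoRequestFilter;
import com.developerscrappad.filter.RESTCorsDemoResponseFilter;

@ApplicationPath( "rest-api" )
public class RestApplication extends Application {

    @Override
    public Set<Class<?>> getClasses() {
        Set<Class<?>> classes = new HashSet<Class<?>>();
        classes.add( RESTCorsDemoResource.class );       // the REST resource itself
        classes.add( RESTCorsDemoRequestFilter.class );  // preflight (OPTIONS) handling
        classes.add( RESTCorsDemoResponseFilter.class ); // CORS response headers
        return classes;
    }
}

With @ApplicationPath( "rest-api" ), such a subclass would also take the place of the web.xml mapping shown earlier.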
Final Words

JAX-RS 2.0 interceptors are very handy when it comes to intercepting REST requests and responses in scenarios like enabling CORS. If you are using a specific REST library implementation for your Java project, e.g. Jersey or RESTEasy, do check out how request and response interceptors are to be specifically implemented; apply the above technique and you should be able to get the same result, as the principles are pretty much the same. Hopefully this article will help you in solving cross-domain or CORS issues in your Java EE 7 / JAX-RS 2.0 REST project. Thank you for reading.

Related Articles:

http://en.wikipedia.org/wiki/Cross-origin_resource_sharing
http://www.html5rocks.com/en/tutorials/cors/
http://www.w3.org/TR/cors/
https://developer.mozilla.org/en/docs/HTTP/Access_control_CORS

Reference: Java EE 7 / JAX-RS 2.0 – CORS on REST from our JCG partner Max Lam at the A Developer's Scrappad blog....

Revamping WSO2 API Manager Key Management Architecture around Open Standards

WSO2 API Manager is a complete solution for designing and publishing APIs, creating and managing a developer community, and scalably routing API traffic. It leverages proven, production-ready integration, security, and governance components from the WSO2 Enterprise Service Bus, WSO2 Identity Server, and WSO2 Governance Registry. In addition, it leverages the WSO2 Business Activity Monitor for Big Data analytics, giving you instant insight into API behavior. One of the limitations we have had in API Manager so far is its tight integration with the WSO2 Identity Server. WSO2 Identity Server acts as the key manager, which issues and validates OAuth tokens. With the revamped architecture (still under discussion) we plan to make all integration points with the key manager extensible, so you can bring in your own OAuth authorization server. We will also ship the product with standard extension points, built around the corresponding OAuth 2.0 profiles. In case your authorization server deviates from the standard, you need to implement the KeyManager interface and plug in your own implementation.

API Publisher

The API developer first logs in to the API Publisher and creates an API with all the related metadata, then publishes it to the API Store and the API Gateway. The API Publisher will also publish API metadata into the external authorization server via the OAuth Resource Set Registration endpoint [1].

Sample Request:

{
    "name": "Photo Album",
    "icon_uri": "http://www.example.com/icons/flower.png",
    "scopes": [
        "http://photoz.example.com/dev/scopes/view",
        "http://photoz.example.com/dev/scopes/all"
    ],
    "type": "http://www.example.com/rsets/photoalbum"
}

name REQUIRED. A human-readable string describing a set of one or more resources. This name MAY be used by the authorization server in its resource owner user interface for the resource owner.
icon_uri OPTIONAL. A URI for a graphic icon representing the resource set. The referenced icon MAY be used by the authorization server in its resource owner user interface for the resource owner.
scopes REQUIRED. An array providing the URI references of scope descriptions that are available for this resource set.
type OPTIONAL. A string uniquely identifying the semantics of the resource set. For example, if the resource set consists of a single resource that is an identity claim that leverages standardized claim semantics for "verified email address", the value of this property could be an identifying URI for this claim.

Sample Response:

HTTP/1.1 201 Created
Content-Type: application/json
ETag: (matches "_rev" property in returned object)
...

{
    "status": "created",
    "_id": (id of created resource set),
    "_rev": (ETag of created resource set)
}

The objective of publishing the resources to the authorization server is to make it aware of the available resources and the scopes associated with them. An identity administrator can then build the relationship between these scopes and the enterprise roles; basically, you can associate scopes with enterprise roles.

API Store

The application developer logs in to the API Store, discovers the APIs he/she wants for an application, subscribes to those, and finally creates an application. Each application is uniquely identified by its client id. There are two ways to associate a client id with an application created in the API Store:

1. The application developer brings in the client id.
The application developer creates a client id out-of-band with the authorization server and associates the client id with the application he just created in the API Store. In this case, the Dynamic Client Registration endpoint of the authorization server is not used (no steps 3 & 4).

2. The API Store calls the Dynamic Client Registration endpoint of the external authorization server. Once the application is created by the application developer (by grouping a set of APIs), the API Store will call the Dynamic Client Registration endpoint of the authorization server [2].

Sample Request (Step 3):

POST /register HTTP/1.1
Content-Type: application/json
Accept: application/json
Host: authz.server.com

{
    "client_name": "My Application",
    "redirect_uris": ["https://client.org/callback", "https://client.org/callback2"],
    "token_endpoint_auth_method": "client_secret_basic",
    "grant_types": ["authorization_code", "implicit"],
    "response_types": ["code", "token"],
    "scope": ["sc1", "sc2"]
}

client_name: Human-readable name of the client to be presented to the user during authorization. If omitted, the authorization server MAY display the raw "client_id" value to the user instead. It is RECOMMENDED that clients always send this field.
client_uri: URL of a web page providing information about the client. If present, the server SHOULD display this URL to the end user in a clickable fashion. It is RECOMMENDED that clients always send this field.
logo_uri: URL that references a logo for the client. If present, the server SHOULD display this image to the end user during approval. The value of this field MUST point to a valid image file.
scope: Space-separated list of scope values that the client can use when requesting access tokens. The semantics of the values in this list are service specific. If omitted, an authorization server MAY register a client with a default set of scopes.
grant_types: Array of OAuth 2.0 grant types that the client may use.
response_types: Array of the OAuth 2.0 response types that the client may use.
token_endpoint_auth_method: The requested authentication method for the token endpoint.
redirect_uris: Array of redirection URI values for use in redirect-based flows such as the authorization code and implicit flows.

Sample Response (Step 4):

HTTP/1.1 200 OK
Content-Type: application/json
Cache-Control: no-store
Pragma: no-cache

{
    "client_id": "iuyiSgfgfhffgfh",
    "client_secret": "hkjhkiiu89hknhkjhuyjhk",
    "client_id_issued_at": 2343276600,
    "client_secret_expires_at": 2503286900,
    "redirect_uris": ["https://client.org/callback", "https://client.org/callback2"],
    "grant_types": "authorization_code",
    "token_endpoint_auth_method": "client_secret_basic"
}

OAuth Client Application

This part is outside the scope of the API Manager. The client application can talk to the external authorization server via any of the grant types it supports and obtain an access token [3]. The scope parameter is optional in all the token requests; when omitted by the client, the authorization server can associate a default scope with the access token. If no scopes are used at all, then the API Gateway can do an authorization check based on other parameters associated with the OAuth client, end user, resource and action. If the client sends a set of scopes with the OAuth grant request, then these scopes will be meaningful to the authorization server only if we have published API metadata into the external authorization server, from the API Publisher, via the OAuth Resource Set Registration endpoint.
Based on the user's role and the scopes associated with that role, the authorization server can issue the access token for only a subset of the scopes requested by the OAuth client.

Client Credentials Grant Type

Sample Request:

POST /token HTTP/1.1
Host: server.example.com
Authorization: Basic Base64Encode(Client ID:Client Secret)
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials

Sample Response:

HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8
Cache-Control: no-store
Pragma: no-cache

{
    "access_token": "2YotnFZFEjr1zCsicMWpAA",
    "token_type": "example",
    "expires_in": 3600,
    "example_parameter": "example_value"
}

Resource Owner Password Grant Type

Sample Request:

POST /token HTTP/1.1
Host: server.example.com
Authorization: Basic Base64Encode(Client ID:Client Secret)
Content-Type: application/x-www-form-urlencoded

grant_type=password&username=johndoe&password=A3ddj3w

Sample Response:

HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8
Cache-Control: no-store
Pragma: no-cache

{
    "access_token": "2YotnFZFEjr1zCsicMWpAA",
    "token_type": "example",
    "expires_in": 3600,
    "refresh_token": "tGzv3JOkF0XG5Qx2TlKWIA",
    "example_parameter": "example_value"
}

API Gateway

The API Gateway will intercept all the messages flowing between the OAuth client application and the API, and extract the access token that comes in the HTTP Authorization header. Once the access token is extracted, the API Gateway will call the Token Introspection endpoint [4] of the authorization server.

Sample Request:

POST /introspect HTTP/1.1
Host: authserver.example.com
Content-Type: application/x-www-form-urlencoded
Accept: application/json
Authorization: Basic czZCaGRSa3F0Mzo3RmpmcDBaQnIxS3REUmJuZlZkbUl3

token=X3241Affw.4233-99JXJ

Sample Response:

{
    "active": true,
    "client_id": "s6BhdRkqt3",
    "scope": "read write dolphin",
    "sub": "2309fj32kl",
    "user_id": "jdoe",
    "aud": "https://example.org/protected-resource/*",
    "iss": "https://authserver.example.com/"
}

active REQUIRED. Boolean indicator of whether or not the presented token is currently active.
exp OPTIONAL. Integer timestamp, measured in the number of seconds since January 1 1970 UTC, indicating when this token will expire.
iat OPTIONAL. Integer timestamp, measured in the number of seconds since January 1 1970 UTC, indicating when this token was originally issued.
scope OPTIONAL. A space-separated list of strings representing the scopes associated with this token.
client_id REQUIRED. Client identifier for the OAuth client that requested this token.
sub OPTIONAL. Machine-readable identifier, local to the AS, of the resource owner who authorized this token.
user_id REQUIRED. Human-readable identifier for the user who authorized this token.
aud OPTIONAL. Service-specific string identifier or list of string identifiers representing the intended audience for this token.
iss OPTIONAL. String representing the issuer of this token.
token_type OPTIONAL. Type of the token as defined in OAuth 2.0.

Once the API Gateway gets the token introspection response from the authorization server, it will check whether the client application (client id) has subscribed to the corresponding API, and it will then also validate the scope. The API Gateway knows the required scopes for the API, and the introspection response returns the scopes associated with the access token. If everything is fine, the API Gateway will generate a JWT and send it to the downstream API. The generated JWT can optionally include user attributes as well; in that case the API Gateway will talk to the UserInfo endpoint of the authorization server. Alternatively, the API Gateway can simply pass the access token through without validating it or its associated scopes; in that case the API Gateway will only do throttling and monitoring.
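To make that gateway-side introspection step concrete, here is a minimal sketch of such a call from Java. This is my own illustration, not a WSO2 API; the endpoint URL, client credentials and token are the illustrative values from the sample above:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class IntrospectionClient {

    public static void main( String[] args ) throws Exception {
        // Illustrative values taken from the sample request above
        URL url = new URL( "https://authserver.example.com/introspect" );
        String credentials = Base64.getEncoder()
                .encodeToString( "clientId:clientSecret".getBytes( StandardCharsets.UTF_8 ) );

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod( "POST" );
        conn.setDoOutput( true );
        conn.setRequestProperty( "Authorization", "Basic " + credentials );
        conn.setRequestProperty( "Content-Type", "application/x-www-form-urlencoded" );
        conn.setRequestProperty( "Accept", "application/json" );

        // The access token extracted from the incoming HTTP Authorization header
        String body = "token=X3241Affw.4233-99JXJ";
        try ( OutputStream out = conn.getOutputStream() ) {
            out.write( body.getBytes( StandardCharsets.UTF_8 ) );
        }

        // The JSON response carries "active", "scope", "client_id" etc. as described above
        try ( BufferedReader in = new BufferedReader(
                new InputStreamReader( conn.getInputStream(), StandardCharsets.UTF_8 ) ) ) {
            String line;
            while ( ( line = in.readLine() ) != null ) {
                System.out.println( line );
            }
        }
    }
}

The gateway would then parse the JSON and reject the call if active is false, or if the scopes required by the API are missing from the token.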
Secured Endpoints

In this proposed revamped architecture, the WSO2 API Manager has to talk to the following endpoints exposed by the key manager:

Resource Set Registration endpoint
Dynamic Client Registration endpoint
Introspection endpoint
UserInfo endpoint

For the first three endpoints, the API Manager will just act as a trusted system; the corresponding KeyManager implementation should know how to authenticate to those endpoints. The OpenID Connect UserInfo endpoint will be invoked at runtime with the user-provided access token. This will work only if the corresponding access token has the privileges to read the user's profile from the authorization server.

References

[1]: http://tools.ietf.org/html/draft-hardjono-oauth-resource-reg-02
[2]: http://tools.ietf.org/html/draft-ietf-oauth-dyn-reg-19
[3]: http://tools.ietf.org/html/rfc6749
[4]: http://tools.ietf.org/html/draft-richer-oauth-introspection-06

Reference: Revamping WSO2 API Manager Key Management Architecture around Open Standards from our JCG partner Prabath Siriwardena at the Facile Login blog....

Accelerated Development: Team Conflict is for Losers

It is virtually guaranteed that there is someone on your development team you don't like, someone with behaviors or habits that you find objectionable:

Mashable talks about the 45 most annoying office habits.
[the nest] talks about 10 Annoying Work Habits That Can Get You Fired.

But as irritating as you find your co-workers, odds are: you do something that they find annoying… Annoyances and poor communication can lead to conflicts that range from avoidance to all-out war where people get drawn into taking sides. But consider the cost of team conflict:

Issue                     Productivity    Software Quality
Internal team conflict    -10%            -15%
Management conflict       -14%            -19%

The table above shows only the average result of conflict; some of us have been in situations that got much, much worse. Software development is not a popularity contest; you don't have to like everyone that you work with. However, if you allow your feelings of annoyance to escalate into conflict, then there is a real cost to your project and ultimately to your stress levels. All conflicts start with disagreements. The Communication Catalyst [2] talks about the following cycle:

Disagree
Defend
Destroy

When you disagree with your coworkers, they don't feel listened to. They will then defend their position by digging in their heels, then you will dig in your heels, and the road to destruction starts. If any annoying habits are present, the conflict will escalate quickly. If things get out of hand, people start taking sides and productivity takes a major hit. In the worst conflicts this leads to loss of key personnel, which has been measured at:

Loss of key personnel: productivity -16%, quality -22%

Losing key personnel who have comprehensive knowledge of business rules and organizational practices tied up in their heads often causes projects to falter and come to a standstill. You may feel justified in starting a conflict or escalating one; however, as clever as you think you are, conflict hurts everyone, yourself included. Just remember: it is virtually impossible to start or escalate a conflict that doesn't boomerang back and bite you in the @ss!

4 Ways to Avoid or Reduce Conflicts

Things to consider to avoid conflict:

Don't disagree first; signal that the other person has been heard.

You will rarely agree with everything that someone else says, but start by agreeing with the part that you do agree with [1]. This will at least signal that you have heard them and reduce their anxiety that you are not listening. Even mechanically echoing everything they just said is a way to signal that you heard what was said. Once this is done, then talk about what you don't agree with.

Don't interrupt people.

When you are excited and thoughts are springing to mind, you may be tempted to do all the talking and stop listening; get this under control, take a breath, and let others talk. People generally consider it rude when you interrupt and will assume arrogance on your part. If you are not trying to be arrogant and someone tells you this, then wake up: you need to listen.

Don't be frustrated when people don't understand you.

If you really know something that others don't, then simply restating your point of view will not improve their understanding. If your friend is lost in a new shopping mall, then describing your own location will not help him find you; you need to find out where he is and walk him through the steps of getting to your location. Be open to the idea that there might be something that you are not seeing.
With additional information you might revise your point of view.

Don't automatically assume that someone is insulting you.

In virtually every case where someone feels insulted, it is a knee-jerk reaction to a misunderstanding where no insult was intended. Jumping to conclusions is not good under any circumstance, but it is lethal in social interactions. Managers should be on the lookout for the signs of conflict and clear them up while they are still small. Most conflicts arise from simple misunderstandings. You will notice that most organizations promote people based on their ability to work with others and resolve conflicts, over raw competence. Learning how to resolve conflicts is likely your ticket to an overdue promotion…

Other articles in the "Loser" series

Want to see more sacred cows get tipped? Check out:

Comments are for Losers
Defects are for Losers
Debuggers are for Losers
Efficiency is for Losers
Testing departments are for Losers

Make no mistake, I am the biggest "Loser" of them all. I believe that I have made every mistake in the book at least once!

References

1. Carnegie, Dale. How to Win Friends and Influence People. 1998.
2. Connolly, Mickey and Rianoshek, Richard. The Communication Catalyst. 2002.
3. Jones, Capers and Bonsignour, Olivier. The Economics of Software Quality. Addison Wesley. 2011.
4. Kahneman, Daniel. Thinking Fast and Slow. 2011.

Side Note

My best friend also works in the tech sector, and despite being friends for almost 25 years we have very few beliefs or habits in common. There are subjects that we agree on, but then we don't agree on how they should be handled. Even though we are very different people, this has never stood in the way of us being able to do things together. If you look around, you will see radically different people who manage to cooperate and even thrive. The key to all working relationships, especially when the other person is very different from you, is respect.

Reference: Accelerated Development: Team Conflict is for Losers from our JCG partner Dalip Mahal at the Accelerated Development blog....

Securing the Insecure

The 33-year-old Craig Spencer returned to the USA on 17th October from Africa, after treating Ebola patients. Just a few days later, he tested positive for Ebola. Everyone was concerned, especially the people around him, and the New Yorkers. The mayor of New York came in front of the media and gave an assurance to the citizens: that they have the world's top medical staff as well as the most advanced medical equipment to treat Ebola, and that they had been preparing for this for many months. That, for sure, might have calmed down most of the people.

Let me take another example. When my little daughter was three months old, she would happily go to anyone's arms. Now she is eleven months and knows who her mother is. Whenever she finds any difficulty, she keeps on crying until she gets to her mother. She only feels secure in her mother's arms.

When we type a password into the computer screen, we are so worried that it will be seen by our neighbors. But we never worry about our prestigious business emails being seen by the NSA. Why? Either it's totally out of our control, or we believe the NSA will only use them to tighten national security and for nothing else.

What I am trying to say with all these examples is that insecurity is a perception. It's a perception triggered by undesirable behaviors. An undesirable behavior is a reflection of how much a situation deviates from correctness. It's all about perception and about building the perception. There are no 100% secure systems on earth. Most of the cryptographic algorithms developed in the 80s and 90s are now broken due to the advancements in computer processing power.

Correctness

In the computer world, most developers and operators are concerned about correctness: achieving the desired behavior. You deposit $1000 in your account and you expect the savings to grow by exactly 1000. You send a document to a printer and you expect the output to be exactly as you see it on the computer screen. Security, in contrast, is concerned with preventing undesirable behaviors.

C-I-A

There are three security properties that can lead to undesirable behaviors if they are violated: confidentiality, integrity and availability.

Confidentiality means protecting data from unintended recipients, both at rest and in transit. You achieve confidentiality by protecting transport channels and storage with encryption.

Integrity is a guarantee of data's correctness and trustworthiness and the ability to detect any unauthorized modifications. It ensures that data is protected from unauthorized or unintentional alteration, modification, or deletion. The way to achieve integrity is twofold: preventive measures and detective measures. Both measures have to take care of data in transit as well as data at rest.

Making a system available for legitimate users to access all the time is the ultimate goal of any system design. Security isn't the only aspect to look into, but it plays a major role in keeping the system up and running. The goal of the security design should be to make the system highly available by protecting it from illegal access attempts. Doing so is extremely challenging. Attacks, especially on public endpoints, can vary from an attacker planting malware in the system to a highly organized distributed denial of service (DDoS) attack.
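Looking back at the integrity property above, a small sketch can show the detective side of it. This is my own hedged illustration, not from the original text: a message is shipped together with an HMAC tag, and any modification in transit makes the verification fail. The key and messages here are purely illustrative.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class IntegrityCheck {

    static byte[] hmac( byte[] key, String message ) throws Exception {
        // HMAC-SHA256 over the message with a shared secret key
        Mac mac = Mac.getInstance( "HmacSHA256" );
        mac.init( new SecretKeySpec( key, "HmacSHA256" ) );
        return mac.doFinal( message.getBytes( StandardCharsets.UTF_8 ) );
    }

    public static void main( String[] args ) throws Exception {
        byte[] key = "shared-secret-key".getBytes( StandardCharsets.UTF_8 ); // illustrative key
        String original = "transfer $1000 to account 42";
        byte[] tagSentWithMessage = hmac( key, original );

        // The receiver recomputes the tag; a tampered message no longer matches
        String tampered = "transfer $9000 to account 42";
        System.out.println( "original verifies: "
                + MessageDigest.isEqual( tagSentWithMessage, hmac( key, original ) ) );  // true
        System.out.println( "tampered verifies: "
                + MessageDigest.isEqual( tagSentWithMessage, hmac( key, tampered ) ) ); // false
    }
}

Encryption alone gives confidentiality; a construction like this is what gives you the "ability to detect any unauthorized modifications" mentioned above.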
Attacks

In March 2011, the RSA corporation was breached. Attackers were able to steal sensitive tokens related to RSA SecurID devices. These tokens were then used to break into companies that used SecurID. In October 2013, the Adobe corporation was breached. Both source code and customer records were stolen, including passwords. Just a month after the Adobe attack, in November 2013, Target was attacked, and data for 40 million credit and debit cards was stolen.

How are all these attacks possible? Many breaches begin by exploiting a vulnerability in the system in question. A vulnerability is a defect that an attacker can exploit to effect an undesired behavior, with a set of carefully crafted interactions. In general, a defect is a problem in either the design or the implementation of the system, such that it fails to meet its desired requirements. To be precise, a flaw is a defect in the design, and a bug is a defect in the implementation. A vulnerability is a defect that affects the security-relevant behavior of a system, rather than just its correctness. If you take the RSA 2011 breach, it was based on a vulnerability in the Adobe Flash player: a carefully crafted Flash program, when run by a vulnerable Flash player, allowed the attacker to execute arbitrary code on the running machine, which was in fact due to a bug in the code. To ensure security, we must eliminate bugs and design flaws and make them harder to exploit.

The Weakest Link

In 2010, it was discovered that since 2006, a gang of robbers equipped with a powerful vacuum cleaner had stolen more than 600,000 euros from the Monoprix supermarket chain in France. The most interesting thing was the way they did it. They found the weakest link in the system and attacked it. To transfer money directly into the store's cash coffers, cashiers slid tubes filled with money through pneumatic suction pipes. The robbers realized that it was sufficient to drill a hole in the pipe near the trunk and then connect a vacuum cleaner to capture the money. They didn't have to deal with the coffer shield. The take-away there is that a proper security design should include all the communication links in the system. Your system is no stronger than its weakest link.

The Defense in Depth

A layered approach is preferred for any system being tightened for security. This is also known as defense in depth. Most international airports, which are at a high risk of terrorist attacks, follow a layered approach in their security design. On November 1, 2013, a man dressed in black walked into the Los Angeles International Airport, pulled a semi-automatic rifle out of his bag, and shot his way through a security checkpoint, killing a TSA screener and wounding at least two other officers. That checkpoint was the first layer of defense. In case someone got through it, there had to be another layer to prevent the gunman from entering a flight and taking control of it. If there had been a security layer before the TSA checkpoint, perhaps just to scan everyone who entered the airport, it might have detected the weapon and probably saved the life of the TSA officer. The number of layers and the strength of each layer depend on which assets you want to protect and the threat level associated with them. Why would someone hire a security officer and also use a burglar alarm system to secure an empty garage?

Insider Attacks

Insider attacks are less powerful and less complicated, but highly effective. The confidential US diplomatic cables leaked by WikiLeaks and Edward Snowden's disclosures about the National Security Agency's secret operations were all insider attacks.
Both Snowden and Bradley Manning were insiders who had legitimate access to the information they disclosed. Most organizations spend the majority of their security budget protecting their systems from external intruders, but approximately 60% to 80% of network misuse incidents originate from inside the network, according to the Computer Security Institute (CSI) in San Francisco. Insider attacks have been identified as a growing threat in the military. To address this concern, the US Defense Advanced Research Projects Agency (DARPA) launched a project called Cyber Insider Threat (CINDER) in 2010. The objective of this project was to develop new ways to identify and mitigate insider threats as soon as possible.

Security by Obscurity

Kerckhoffs' Principle emphasizes that a system should be secured by its design, not because the design is unknown to an adversary. Microsoft's NTLM design was kept secret for some time, but at the point when (to support interoperability between Unix and Windows) Samba engineers reverse-engineered it, they discovered security vulnerabilities caused by the protocol design itself. In a proper security design, it's highly recommended not to use any custom-developed algorithms or protocols. Standards are like design patterns: they've been discussed, designed, and tested in an open forum. Every time you have to deviate from a standard, you should think twice, or more.

Software Security

Software security is only one part, or branch, of computer security. It is the kind of computer security that focuses on the secure design and implementation of software, using the best languages, tools and methods. The focus of study of software security is the code. Most popular approaches to security treat software as a black box, and so tend to ignore software security. In other words, software security focuses on avoiding software vulnerabilities, flaws and bugs. While software security overlaps with and complements other areas of computer security, it is distinguished by its focus on a secure system's code. This focus makes it a white box approach, where other approaches are more black box; they tend to ignore the software's internals. Why is software security's focus on the code important? The short answer is that software defects are often the root cause of security problems, and software security aims to address these defects directly. Other forms of security tend to ignore the software and build up defenses around it. Just like the walls of a castle, these defenses are important and work up to a point. But when software defects remain, clever attackers often find a way to bypass those walls.

Operating System Security

We'll now consider a few standard methods for security enforcement and see how their black box nature presents limitations that software security techniques can address. Our first example is security enforcement by the operating system, or OS. When computer security was growing up as a field in the early 1970s, the operating system was the focus. To the operating system, the code of a running program is not what is important. Instead, the OS cares about what the program does, that is, its actions as it executes. These actions, called system calls, include reading or writing files, sending network packets and running new programs. The operating system enforces security policies that limit the scope of system calls. For example, the OS can ensure that Alice's programs cannot access Bob's files,
or that untrusted user programs cannot set up trusted services on standard network ports.

The operating system's security is critically important, but it is not always sufficient. In particular, some of the security-relevant actions of a program are too fine-grained to be mediated as system calls, and so the software itself needs to be involved. For example, a database management system, or DBMS, is a server that manages data whose security policy is specific to the application that is using that data. For an online store, a database may contain security-sensitive account information for customers and vendors alongside other records, such as product descriptions, which are not security-sensitive at all. It is up to the DBMS to implement security policies that control access to this data, not the OS.

Operating systems are also unable to enforce certain kinds of security policies. Operating systems typically act as an execution monitor, which determines whether to allow or disallow a program action based on the current execution context and the program's prior actions. However, there are some kinds of policies, such as information flow policies, that simply cannot be enforced precisely without consideration of potential future actions, or even non-actions. Software-level mechanisms can be brought to bear in these cases, perhaps in cooperation with the OS.

Firewalls and IDS

Another popular sort of security enforcement mechanism is a network monitor like a firewall or intrusion detection system, or IDS. A firewall generally works by blocking connections and packets from entering the network. For example, a firewall may block all attempts to connect to network servers except those listening on designated ports, such as TCP port 80, the standard port for web servers. Firewalls are particularly useful when there is software running on the local network that is only intended to be used by local users. An intrusion detection system provides more fine-grained control by examining the contents of network packets, looking for suspicious patterns. For example, to exploit a vulnerable server, an attacker may send a carefully crafted input to that server as a network packet. An IDS can look for such packets and filter them out to prevent the attack from taking place. Firewalls and IDSs are good at reducing the avenues for attack and preventing known vectors of attack. But both devices can be worked around. For example, most firewalls will allow traffic on port 80, because they assume it is benign web traffic; but there is no guarantee that port 80 only runs web servers, even if that's usually the case. In fact, developers invented SOAP, which stood for Simple Object Access Protocol (no longer an acronym since SOAP 1.2), to work around firewall blocking on ports other than port 80. SOAP permits more general-purpose message exchanges, but encodes them using the web protocol.

IDS patterns are more fine-grained and better able to look at the details of what's going on than firewalls, but IDSs can be fooled as well by inconsequential differences in attack patterns. Attempts to fill those gaps by using more sophisticated filters can slow down traffic, and attackers can exploit such slowdowns by sending lots of problematic traffic, creating a denial of service, that is, a loss of availability. Finally, consider anti-virus scanners. These are tools that examine the contents of files, emails, and other traffic on a host machine, looking for signs of attack.
They are quite similar to IDSs, but they operate on files and so have less stringent performance requirements. But they too can often be bypassed by making small changes to attack vectors.

Heartbleed

We conclude our comparison of software security to black box security with an example: the Heartbleed bug. Heartbleed is the name given to a bug in version 1.0.1 of the OpenSSL implementation of the Transport Layer Security protocol, or TLS. This bug can be exploited by getting the buggy server running OpenSSL to return portions of its memory. The bug is an example of a buffer overflow. Let's look at the black box security mechanisms, and how they fare against Heartbleed.

Operating system enforcement and anti-virus scanners can do little to help. For the former, an exploit that steals data does so using the privileges normally granted to a TLS-enabled server, so the OS can see nothing wrong. For the latter, the exploit occurs while the TLS server is executing, therefore leaving no obvious traces in the file system. Basic packet filters used by IDSs can look for signs of exploit packets; the FBI issued signatures for the Snort IDS soon after Heartbleed was announced. These signatures should work against basic exploits, but attackers may be able to apply variations in packet format, such as chunking, to bypass them. In any case, the ramifications of a successful attack are not easily determined, because any exfiltrated data will go back over the encrypted channel. Compared to these, software security methods aim to go straight to the source of the problem by preventing, or more completely mitigating, the defect in the software.

Threat Modeling

Threat modeling is a methodical, systematic approach to identifying possible security threats and vulnerabilities in a system deployment. First you need to identify all the assets in the system. Assets are the resources you have to protect from intruders. These can be user records/credentials stored in an LDAP, data in a database, files in a file system, CPU power, memory, network bandwidth, and so on. Identifying assets also means identifying all their interfaces and the interaction patterns with other system components. For example, the data stored in a database can be exposed in multiple ways: database administrators have physical access to the database servers, application developers have JDBC-level access, and end users have access to an API. Once you identify all the assets in the system to be protected and all the related interaction patterns, you need to list all possible threats and associated attacks. Threats can be identified by observing interactions, based on the CIA triad.

From the application server to the database there is a JDBC connection. A third party could eavesdrop on that connection to read or modify the data flowing through it. That's a threat. How does the application server keep the JDBC connection username and password? If they're kept in a configuration file, anyone having access to the application server's file system can find them and then access the database over JDBC. That's another threat. The JDBC connection is protected with a username and password, which can potentially be broken by carrying out a brute-force attack. Another threat. Administrators have direct access to the database servers. How do they access the servers? If access is open for SSH via username/password, then a brute-force attack is a likely threat. If it's based on SSH keys, where are those keys stored?
Are they stored on the personal machines of administrators, or uploaded to a key server? Losing SSH keys to an intruder is another threat. How about the ports? Have you opened any ports to the database servers, where some intruder could telnet in and get control, or carry out an attack on an open port to exhaust system resources? Can the physical machine running the database be accessed from outside the corporate network? Is it only available over VPN? All these questions lead you to identifying possible threats against the database server. End users have access to the data via the API. This is a public API, which is exposed through the corporate firewall. A brute-force attack is always a threat if the API is secured with HTTP Basic/Digest authentication. Having broken the authentication layer, anyone could get free access to the data. Another possible threat is someone accessing the confidential data that flows through the transport channels; executing a man-in-the-middle attack can do this. DoS is also a possible threat: an attacker can send carefully crafted, malicious, extremely large payloads to exhaust server resources. STRIDE is a popular technique for identifying the threats associated with a system in a methodical manner. STRIDE stands for Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Escalation of privileges.

Reference: Securing the Insecure from our JCG partner Prabath Siriwardena at the Facile Login blog....

Beginner’s Guide to Hazelcast Part 3

This is a continuation of a series of posts on how to use Hazelcast from a beginner's point of view. If you haven't read the last two, I encourage reading them:

Beginner's Guide to Hazelcast Part 1
Beginner's Guide to Hazelcast Part 2

The Primitives are Coming

During my last post I mentioned using an ILock with IList and ISet because they are not thread safe. It hit me that I had not covered a basic part of Hazelcast: the distributed primitives. They solve the problem of synchronizing the use of resources in a distributed way. Those who do a lot of threaded programming will recognize them right away. For those of you who are new to programming with threads, I will explain what each primitive does and give an example.

IAtomicLong

This is a distributed atomic long. This means that every operation happens as a single step. For example, one can add a number and retrieve the resulting value in one operation, or get the value and then add a value. This is true for every operation one does on this primitive. As one can imagine, it is thread safe, but one cannot do the following and have it be thread safe:

atomicLong.addAndGet(2 * atomicLong.get());

The line above creates a race condition because there are three operations: reading the contents of the atomic long, multiplying by two, and adding that to the instance. Thread safety is only there if the operation is guaranteed to happen in one step. To do that, IAtomicLong has a method called alterAndGet. AlterAndGet takes an IFunction object, which makes multi-step operations one step. There is always one synchronous backup of an IAtomicLong, and it is not configurable.

IdGenerator

IAtomicLongs are great for keeping track of how many of something one has. The problem is that, since the call is most likely remote, IAtomicLongs are not an ideal solution for some situations. One of those situations is generating unique ids. IdGenerator was made just for that purpose. The way it works is that each member claims one million ids to generate. Once all of those claimed numbers are taken, the member claims another million. So, since each member has a million ids tucked away, the chance that a call to an IdGenerator is remote is one in a million. This makes it a very fast way to generate unique ids. If any duplicates happen, it may be because the members didn't join up. If a member goes down before its segment is used up, there will be gaps in the ids. For unique id generation, missing numbers are not an issue. I do feel members not hooking up to the cluster is an issue, but if that is happening, there are bigger things to worry about. If the cluster gets restarted, the ids start at zero again. That is because the id is not persisted; this is an in-memory database, and one takes one's chances. To counter that, IdGenerators can be set to start at a particular number, as long as it isn't claimed by someone else and no ids have been generated yet. Alternatives are creating one's own id generator or using the java.util.UUID class. This may take more space, but each project has its own requirements to meet. IdGenerators always have one synchronous backup and cannot be configured.

ILock

Here is a classic synchronization method with a twist: it is an exclusive lock that is distributed. One just invokes the method lock, and a thread either waits or obtains the lock. Once the lock is established, the critical section can be performed. Once the work is done, the unlock method is used.
Veterans of this technique will put the critical section in a try-finally block, establishing the lock just outside the try block and the unlock in the finally section. This is invaluable for performing actions on structures that are not thread safe. The process that gets the lock owns the lock and is required to call unlock for other processes to be able to establish locks. This can be problematic when one has threads in multiple locations on the network. Hazelcast thought of this problem and has the lock released when a member goes down. Another feature is that the lock method has a timeout of 300 seconds; this prevents starved threads. ILocks have one synchronous backup and are not configurable.

A bit of advice from someone with experience: keep the critical sections as small as possible. This helps performance and prevents deadlocks. Deadlocks are a pain to debug and harder to test because of the unknown execution order of threads. One time the bug manifests itself, then it does not. This can continue for a week or more because of a misplaced lock. Then one has to make sure it will not happen again, which is hard to prove because of the unknown execution order of the threads. By the time it is all done, the boss is frustrated because of the time it took, and one does not know if the bug is fixed or not.

ICondition

Ever wanted to wait for an event to happen, but did not want other people to have to wait for it too? That is exactly what conditions are for in threaded programming. Before Java 1.5, this was accomplished via the synchronized-wait-notify technique. This can now be performed via the lock-condition technique. Take a trip with me and I will show you how this works. Imagine a situation where there is a non-thread-safe list with a producer and a consumer writing to and reading from it. Obviously, there are critical sections that need to be protected. That falls into the lap of a lock. After the lock is established, critical work can begin. The only problem is that the resource may be in a state that is useless to the thread. For example, a consumer cannot pull entries from an empty list, and a producer cannot put entries on a full list. This is where a condition comes in. The producer or consumer will enter a while loop that tests for the condition that is favorable and call condition.await(). Once await is called, the thread gives up its lock and lets other threads access their critical sections. The awaiting thread will get the lock back to test for its condition and may await some more, or, if the condition is satisfied, it starts doing work. Once the critical section is complete, the thread can call signal() or signalAll() to tell the other threads to wake up and check their conditions. Conditions are created by the lock instead of the Hazelcast instance. Another thing is that if one wants the condition to be distributed, one must use the lock.newCondition(String name) method. IConditions have one synchronous backup and cannot be configured.

I cannot tell one how many deadlocks can occur using this technique. Sometimes the signal comes while the thread is waiting and everything is good. The other side is that the signal is sent when the thread is not waiting; it then enters the wait state and waits forever. For this reason, I advocate using a timeout while waiting, so the thread can check every once in a while whether the condition has been met. That way, if the signal misses, the worst that can happen is a little waiting time instead of waiting forever. I used the timeout technique in my example.
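To make that idiom concrete before the full examples, here is a minimal, condensed sketch of the lock-condition-timeout pattern just described. It is my own illustration, using the same distributed names the later examples use:

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ICondition;
import com.hazelcast.core.IList;
import com.hazelcast.core.ILock;
import java.util.concurrent.TimeUnit;

public class TimedAwaitSketch {
    public static void main(String[] args) throws InterruptedException {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        ILock lock = instance.getLock("lock");
        ICondition condition = lock.newCondition("condition");
        IList list = instance.getList("list");

        lock.lock();
        try {
            // Guard with a while loop: waking up is not proof the condition holds
            while (list.isEmpty()) {
                // Timed await: if a signal is missed, re-check after 2 seconds
                // instead of waiting forever
                condition.await(2, TimeUnit.SECONDS);
            }
            // Critical section: safe to consume now
            System.out.println("value is " + list.get(0));
            list.remove(0);
        } finally {
            lock.unlock(); // always release, even on exception
        }
        instance.shutdown();
    }
}

The while loop plus the timed await is the whole trick: a missed signal costs at most one timeout period, and a spurious wake-up is caught by re-testing the condition.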
Feel free to copy and paste from the example code below as much as one wants; I would rather have tested techniques being used than untested code invading the Internet.

ICountDownLatch

An ICountDownLatch is a synchronizing tool that triggers when its counter goes to zero. This is not a common way to coordinate, but it is there when needed. The example section, I think, provides a much better explanation of how it works. The latch can be reset after it goes to zero, so it can be used over again. If the owning member goes away, all of the threads waiting for the latch to strike zero are signaled as if zero had been reached. The ICountDownLatch is backed up synchronously in one other place and cannot be configured.

ISemaphore

Yes, there is a distributed version of the classic semaphore. This is exciting to me because the last time I went to an Operating Systems class, semaphores needed a bit of hardware support. Maybe I just dated myself; oh well, it is still cool (again dating myself). Semaphores work by limiting the number of threads that can access a resource. Unlike locks, semaphores have no sense of ownership, so different threads can release the claim on the resource. Unlike the rest of the primitives, the ISemaphore can be configured. I configure one in my example; it is in the hazelcast.xml in the default package of my project.

Examples

Here are the examples. I had a comment about my last post asking me to indent my code so it is more readable. I will do that for sure this time because of the amount of code I am posting. One will see a couple of things that I have not discussed before. One is the IExecutorService. This is a distributed version of the ExecutorService. One can actually send jobs off to be completed by different members. Another thing is that all of the Runnable/Callable classes that are defined implement Serializable. This is necessary in a distributed environment because the object can be sent across to different members. The last thing is the HazelcastInstanceAware interface. It allows a class to access the local Hazelcast instance. Then the class can get instances of the resources it needs (like ILists). Without further ado, here we go.

IAtomicLong

package hazelcastprimitives.iatomiclong;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IAtomicLong;
import com.hazelcast.core.IFunction;
import java.io.Serializable;

/**
 * @author Daryl
 */
public class IAtomicLongExample {

    public static class MultiplyByTwoAndSubtractOne implements IFunction, Serializable {

        @Override
        public Long apply(Long t) {
            return (long) (2 * t - 1);
        }
    }

    public static final void main(String[] args) {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        final String NAME = "atomic";
        IAtomicLong aLong = instance.getAtomicLong(NAME);
        IAtomicLong bLong = instance.getAtomicLong(NAME);
        aLong.getAndSet(1L);
        System.out.println("bLong is now: " + bLong.getAndAdd(2));
        System.out.println("aLong is now: " + aLong.getAndAdd(0L));
        MultiplyByTwoAndSubtractOne alter = new MultiplyByTwoAndSubtractOne();
        aLong.alter(alter);
        System.out.println("bLong is now: " + bLong.getAndAdd(0L));
        bLong.alter(alter);
        System.out.println("aLong is now: " + aLong.getAndAdd(0L));
        System.exit(0);
    }
}

Notice that even the MultiplyByTwoAndSubtractOne class implements Serializable.
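Before the IdGenerator example, one more capability worth showing: the seeding mentioned earlier is done with init, which succeeds only if no id has been generated yet and the value is unclaimed. A minimal hedged sketch of my own (the generator name and starting value are illustrative):

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IdGenerator;

public class IdGeneratorInitSketch {
    public static void main(String[] args) {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        IdGenerator generator = instance.getIdGenerator("generator");
        // init returns true only if the generator is untouched;
        // subsequent ids then start above 1000
        boolean seeded = generator.init(1000L);
        System.out.println("seeded: " + seeded + ", first id: " + generator.newId());
        instance.shutdown();
    }
}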
IdGenerator

package hazelcastprimitives.idgenerator;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IdGenerator;

/**
 * @author Daryl
 */
public class IdGeneratorExample {

    public static void main(String[] args) {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        IdGenerator generator = instance.getIdGenerator("generator");
        for (int i = 0; i < 10; i++) {
            System.out.println("The generated value is " + generator.newId());
        }
        instance.shutdown();
        System.exit(0);
    }
}

ILock

This ILock example can also be considered an ICondition example. I had to use a condition because the ListConsumer was always running before the ListProducer, so I made the ListConsumer wait until the IList had something to consume.

package hazelcastprimitives.ilock;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.HazelcastInstanceAware;
import com.hazelcast.core.ICondition;
import com.hazelcast.core.IExecutorService;
import com.hazelcast.core.IList;
import com.hazelcast.core.ILock;
import java.io.Serializable;
import java.util.concurrent.TimeUnit;

/**
 * @author Daryl
 */
public class ILockExample {

    static final String LIST_NAME = "to be locked";
    static final String LOCK_NAME = "to lock with";
    static final String CONDITION_NAME = "to signal with";

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        IExecutorService service = instance.getExecutorService("service");
        ListConsumer consumer = new ListConsumer();
        ListProducer producer = new ListProducer();
        try {
            service.submit(producer);
            service.submit(consumer);
            Thread.sleep(10000);
        } catch (InterruptedException ie) {
            System.out.println("Got interrupted");
        } finally {
            instance.shutdown();
        }
    }

    public static class ListConsumer implements Runnable, Serializable, HazelcastInstanceAware {

        private transient HazelcastInstance instance;

        @Override
        public void run() {
            ILock lock = instance.getLock(LOCK_NAME);
            ICondition condition = lock.newCondition(CONDITION_NAME);
            IList<Integer> list = instance.getList(LIST_NAME);
            lock.lock();
            try {
                // wait (with a timeout) until the producer has filled the list
                while (list.isEmpty()) {
                    condition.await(2, TimeUnit.SECONDS);
                }
                while (!list.isEmpty()) {
                    System.out.println("value is " + list.get(0));
                    list.remove(0);
                }
            } catch (InterruptedException ie) {
                System.out.println("Consumer got interrupted");
            } finally {
                lock.unlock();
            }
            System.out.println("Consumer leaving");
        }

        @Override
        public void setHazelcastInstance(HazelcastInstance hazelcastInstance) {
            instance = hazelcastInstance;
        }
    }

    public static class ListProducer implements Runnable, Serializable, HazelcastInstanceAware {

        private transient HazelcastInstance instance;

        @Override
        public void run() {
            ILock lock = instance.getLock(LOCK_NAME);
            ICondition condition = lock.newCondition(CONDITION_NAME);
            IList<Integer> list = instance.getList(LIST_NAME);
            lock.lock();
            try {
                for (int i = 1; i <= 10; i++) {
                    list.add(i);
                }
                condition.signalAll();
            } finally {
                lock.unlock();
            }
            System.out.println("Producer leaving");
        }

        @Override
        public void setHazelcastInstance(HazelcastInstance hazelcastInstance) {
            instance = hazelcastInstance;
        }
    }
}

ICondition

Here is the real ICondition example. Notice how the SpunProducer and SpunConsumer both share the same ICondition and signal each other. Note that I am making use of timeouts to prevent deadlocks.
package hazelcastprimitives.icondition;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.HazelcastInstanceAware;
import com.hazelcast.core.ICondition;
import com.hazelcast.core.IExecutorService;
import com.hazelcast.core.IList;
import com.hazelcast.core.ILock;
import java.io.Serializable;
import java.util.concurrent.TimeUnit;

/**
 * @author Daryl
 */
public class IConditionExample {

    static final String LOCK_NAME = "lock";
    static final String CONDITION_NAME = "condition";
    static final String SERVICE_NAME = "spinderella";
    static final String LIST_NAME = "list";

    public static final void main(String[] args) {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        IExecutorService service = instance.getExecutorService(SERVICE_NAME);
        service.execute(new SpunConsumer());
        service.execute(new SpunProducer());
        try {
            Thread.sleep(10000);
        } catch (InterruptedException ie) {
            System.out.println("Hey, we got out sooner than I expected");
        } finally {
            instance.shutdown();
            System.exit(0);
        }
    }

    public static class SpunProducer implements Serializable, Runnable, HazelcastInstanceAware {

        private transient HazelcastInstance instance;
        private long counter = 0;

        @Override
        public void run() {
            ILock lock = instance.getLock(LOCK_NAME);
            ICondition condition = lock.newCondition(CONDITION_NAME);
            IList<Long> list = instance.getList(LIST_NAME);
            lock.lock();
            try {
                if (list.isEmpty()) {
                    populate(list);
                    System.out.println("telling the consumers");
                    condition.signalAll();
                }
                for (int i = 0; i < 2; i++) {
                    // wait for the consumer to drain the list, then refill it
                    while (!list.isEmpty()) {
                        System.out.println("Waiting for the list to be empty");
                        System.out.println("list size: " + list.size());
                        condition.await(2, TimeUnit.SECONDS);
                    }
                    populate(list);
                    System.out.println("Telling the consumers");
                    condition.signalAll();
                }
            } catch (InterruptedException ie) {
                System.out.println("We have found an interruption");
            } finally {
                condition.signalAll();
                System.out.println("Producer exiting stage left");
                lock.unlock();
            }
        }

        @Override
        public void setHazelcastInstance(HazelcastInstance hazelcastInstance) {
            instance = hazelcastInstance;
        }

        private void populate(IList<Long> list) {
            System.out.println("Populating list");
            long currentCounter = counter;
            for (; counter < currentCounter + 10; counter++) {
                list.add(counter);
            }
        }
    }

    public static class SpunConsumer implements Serializable, Runnable, HazelcastInstanceAware {

        private transient HazelcastInstance instance;

        @Override
        public void run() {
            ILock lock = instance.getLock(LOCK_NAME);
            ICondition condition = lock.newCondition(CONDITION_NAME);
            IList<Long> list = instance.getList(LIST_NAME);
            lock.lock();
            try {
                for (int i = 0; i < 3; i++) {
                    while (list.isEmpty()) {
                        System.out.println("Waiting for the list to be filled");
                        condition.await(1, TimeUnit.SECONDS);
                    }
                    System.out.println("removing values");
                    while (!list.isEmpty()) {
                        System.out.println("value is " + list.get(0));
                        list.remove(0);
                    }
                    System.out.println("Signaling the producer");
                    condition.signalAll();
                }
            } catch (InterruptedException ie) {
                System.out.println("We had an interrupt");
            } finally {
                System.out.println("Consumer exiting stage right");
                condition.signalAll();
                lock.unlock();
            }
        }

        @Override
        public void setHazelcastInstance(HazelcastInstance hazelcastInstance) {
            instance = hazelcastInstance;
        }
    }
}

ICountDownLatch

package hazelcastprimitives.icountdownlatch;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.HazelcastInstanceAware;
import com.hazelcast.core.ICountDownLatch;
import com.hazelcast.core.IExecutorService;
import com.hazelcast.core.IList;
import com.hazelcast.core.ILock;
import java.io.Serializable;
import java.util.concurrent.TimeUnit;

/**
 * @author Daryl
 */
public class ICountDownLatchExample {

    static final String LOCK_NAME = "lock";
    static final String LATCH_NAME = "condition";
    static final String SERVICE_NAME = "spinderella";
    static final String LIST_NAME = "list";

    public static final void main(String[] args) {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        IExecutorService service = instance.getExecutorService(SERVICE_NAME);
        service.execute(new SpunMaster());
        service.execute(new SpunSlave());
        try {
            Thread.sleep(10000);
        } catch (InterruptedException ie) {
            System.out.println("Hey, we got out sooner than I expected");
        } finally {
            instance.shutdown();
            System.exit(0);
        }
    }

    public static class SpunMaster implements Serializable, Runnable, HazelcastInstanceAware {

        private transient HazelcastInstance instance;
        private long counter = 0;

        @Override
        public void run() {
            ILock lock = instance.getLock(LOCK_NAME);
            ICountDownLatch latch = instance.getCountDownLatch(LATCH_NAME);
            IList<Long> list = instance.getList(LIST_NAME);
            lock.lock();
            try {
                latch.trySetCount(10);
                populate(list, latch);
            } finally {
                System.out.println("Master exiting stage left");
                lock.unlock();
            }
        }

        @Override
        public void setHazelcastInstance(HazelcastInstance hazelcastInstance) {
            instance = hazelcastInstance;
        }

        private void populate(IList<Long> list, ICountDownLatch latch) {
            System.out.println("Populating list");
            long currentCounter = counter;
            for (; counter < currentCounter + 10; counter++) {
                list.add(counter);
                latch.countDown(); // one tick of the latch per entry added
            }
        }
    }

    public static class SpunSlave implements Serializable, Runnable, HazelcastInstanceAware {

        private transient HazelcastInstance instance;

        @Override
        public void run() {
            ILock lock = instance.getLock(LOCK_NAME);
            ICountDownLatch latch = instance.getCountDownLatch(LATCH_NAME);
            IList<Long> list = instance.getList(LIST_NAME);
            lock.lock();
            try {
                if (latch.await(2, TimeUnit.SECONDS)) {
                    while (!list.isEmpty()) {
                        System.out.println("value is " + list.get(0));
                        list.remove(0);
                    }
                }
            } catch (InterruptedException ie) {
                System.out.println("We had an interrupt");
            } finally {
                System.out.println("Slave exiting stage right");
                lock.unlock();
            }
        }

        @Override
        public void setHazelcastInstance(HazelcastInstance hazelcastInstance) {
            instance = hazelcastInstance;
        }
    }
}

ISemaphore

Configuration

Here is the ISemaphore configuration:

<?xml version="1.0" encoding="UTF-8"?>
<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config
                               http://www.hazelcast.com/schema/config/hazelcast-config-3.0.xsd"
           xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <network>
        <join><multicast enabled="true"/></join>
    </network>
    <semaphore name="to reduce access">
        <initial-permits>3</initial-permits>
    </semaphore>
</hazelcast>

Example Code

package hazelcastprimitives.isemaphore;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.HazelcastInstanceAware;
import com.hazelcast.core.IExecutorService;
import com.hazelcast.core.ISemaphore;
import com.hazelcast.core.IdGenerator;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

/**
 * @author Daryl
 */
public class ISemaphoreExample {

    static final String SEMAPHORE_NAME = "to reduce access";
    static final String GENERATOR_NAME = "to use";

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        IExecutorService service = instance.getExecutorService("service");
        List<Future<Long>> futures = new ArrayList<>(10);
        try {
            for (int i = 0; i < 10; i++) {
                futures.add(service.submit(new GeneratorUser(i)));
            }
            // so I wait till the last man. No, this may not be scalable.
            for (Future<Long> future : futures) {
                future.get();
            }
        } catch (InterruptedException ie) {
            System.out.printf("Got interrupted.");
        } catch (ExecutionException ee) {
            System.out.printf("Cannot execute on Future. reason: %s\n", ee.toString());
        } finally {
            service.shutdown();
            instance.shutdown();
        }
    }

    static class GeneratorUser implements Callable<Long>, Serializable, HazelcastInstanceAware {

        private transient HazelcastInstance instance;
        private final int number;

        public GeneratorUser(int number) {
            this.number = number;
        }

        @Override
        public Long call() {
            ISemaphore semaphore = instance.getSemaphore(SEMAPHORE_NAME);
            IdGenerator gen = instance.getIdGenerator(GENERATOR_NAME);
            long lastId = -1;
            try {
                semaphore.acquire(); // only three users get in at a time
                try {
                    for (int i = 0; i < 10; i++) {
                        lastId = gen.newId();
                        System.out.printf("current value of generator on %d is %d\n", number, lastId);
                        Thread.sleep(1000);
                    }
                } catch (InterruptedException ie) {
                    System.out.printf("User %d was interrupted\n", number);
                } finally {
                    semaphore.release();
                }
            } catch (InterruptedException ie) {
                System.out.printf("User %d got interrupted\n", number);
            }
            System.out.printf("User %d is leaving\n", number);
            return lastId;
        }

        @Override
        public void setHazelcastInstance(HazelcastInstance hazelcastInstance) {
            instance = hazelcastInstance;
        }
    }
}

Conclusion

Hazelcast's primitives were discussed in this post. Most, if not all, of them revolve around thread coordination. Explanations of each primitive and personal experience were shared, and the examples demonstrated the different types of coordination. The examples can be downloaded via subversion at http://darylmathisonblog.googlecode.com/svn/trunk/HazelcastPrimitives.

References

The Book of Hazelcast: found at www.hazelcast.com
Hazelcast documentation: found in the Hazelcast download at www.hazelcast.org

Reference: Beginner's Guide to Hazelcast Part 3 from our JCG partner Daryl Mathison at the Daryl Mathison's Java Blog blog....

Does Agile require cultural change?

If Woody Allen was an Agile coach consultant he might say: "#Agile without culture change is an empty experience; but as empty experiences go it's one of the best." I sit in Agile conferences (and I include Lean and Kanban here) and I hear people say "To really become Agile you need culture change." And I agree with them. Yes, if you really want to be Agile, and get the greatest benefit from Agile, you need to change the culture of the organization to embrace the Agile way. I agree. And I also know that every speaker on this topic – myself included – warns against "doing Agile" without being Agile in culture and mindset. Heck, I kind of wrote a book about this once upon a time. For me "Agile culture" is a "Learning Organization culture."

But… Agile is a toolkit; you can pick out and use many of those tools without adopting others and without adopting an Agile mindset. For example, you can do Test Driven Development without the need to adopt an Agile culture in your organization. And even without culture change, Test Driven Development (TDD) will make you better. True, if you have to force-march your programmers through TDD it isn't going to be as beneficial as it will be if your programmers embrace TDD, want to do it, and make it part of their life. Given this, we – and I include myself – build an argument for undertaking cultural change.

But, big BUT… TDD is not alone; there are lots of tools in the Agile toolkit that you can pick up and use individually, or with a couple of others, which will help you improve. But if you want the full benefit, well, you are going to have to pick up more tools and change that culture!

Culture change is a far bigger effort than introducing any Agile tool – or even an Agile method. And most of the people who go by the title "Agile coach" or "Agile consultant" or similar are – in my opinion – drawn from the technology side and aren't necessarily equipped to undertake culture change initiatives. To be fair, I don't think many people are equipped (training, experience, knowledge, etc.) to undertake culture change. Please don't take offence, Agile consultants/coaches; I include myself here. On paper I have more qualifications to change culture than the vast majority of Agile consultants/coaches I meet, and I'm wary of trying to change culture. Certainly, if we believe folklore (or the updated version, "wisdom of crowds"), culture change is hard and often fails.

Let me say something some people will disagree with:

Culture change is not necessary to introduce Agile. Culture change is not an enabler. Rather, culture change may be the result of adopting Agile.

I hope it is the result, but I also recognise that organizations have the cultures they have, and we mess with them at our peril; culture may look bad, but embedded in there are a lot of knowledge and norms. Company culture makes a company what it is; change it and you risk destroying the company. Messing with culture is likely to bring out the corporate antibodies. Anyone who wants to change an organization, particularly anyone wanting to change tools, methods of working, and culture in an organization, is well advised to go and study the history of Business Process Re-engineering (BPR). BPR was a 1990s attempt to change the way companies work through the use of technology, and to make them more efficient. Agile has a lot in common with BPR, but BPR is an example of how not to do it.
I am prepared to take people through Agile tools, practices, and methods; I encourage them to adopt these approaches, and I don't really work, directly, to change culture. I believe that if people start to live an Agile lifestyle then the culture will follow. I believe that Agile-Lean is good, and that if we pick tools which make people's work better in a way they appreciate, then in time the culture will change. In short, I believe culture follows tools. The tools we choose to use – whether that be stand-up meetings, Jira, Rally, or paper and pen – influence our culture and lead somewhere. It's not a one-way street, and it's not that simple, but tools are a lot easier to change than culture. Change comes first; culture follows.

Reference: Does Agile require cultural change? from our JCG partner Allan Kelly at the Agile, Lean, Patterns blog....