
Schedule Java EE 7 Batch Jobs

Java EE 7 added the capability to perform Batch jobs in a standard way using JSR 352.

<job id="myJob" xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="1.0">
  <step id="myStep">
    <chunk item-count="3">
      <reader ref="myItemReader"/>
      <processor ref="myItemProcessor"/>
      <writer ref="myItemWriter"/>
    </chunk>
  </step>
</job>

This code fragment is the Job Specification Language defined as XML, a.k.a. Job XML. It defines a canonical job with a single step, using item-oriented or chunk-oriented processing. A chunk can have a reader, an optional processor, and a writer. Each of these elements is identified using the corresponding elements in the Job XML, and they are CDI beans packaged in the archive. This job can easily be started using:

BatchRuntime.getJobOperator().start("myJob", new Properties());

A typical question asked in different forums and conferences is how to schedule these jobs in a Java EE runtime. The Batch 1.0 API itself does not offer anything to schedule these jobs. However, the Java EE platform offers three different ways to schedule them.

Use the @javax.ejb.Schedule annotation in an EJB. Here is sample code that will trigger the execution of the batch job at 11:59:59 PM every day.

@Singleton
public class MyEJB {
  @Schedule(hour = "23", minute = "59", second = "59")
  public void myJob() {
    BatchRuntime.getJobOperator().start("myJob", new Properties());
  }
}

Of course, you can change the parameters of @Schedule to start the batch job at the desired time.

Use ManagedScheduledExecutorService with a javax.enterprise.concurrent.Trigger as shown:

@Stateless
public class MyStatelessEJB {
  @Resource ManagedScheduledExecutorService executor;

  public void runJob() {
    executor.schedule(new MyJob(), new Trigger() {
      public Date getNextRunTime(LastExecution lastExecutionInfo, Date taskScheduledTime) {
        Calendar cal = Calendar.getInstance();
        cal.setTime(taskScheduledTime);
        cal.add(Calendar.DATE, 1);
        return cal.getTime();
      }
      public boolean skipRun(LastExecution lastExecutionInfo, Date scheduledRunTime) {
        return null == lastExecutionInfo;
      }
    });
  }

  public void cancelJob() {
    executor.shutdown();
  }
}

Call runJob to initiate job execution and cancelJob to terminate it. In this case, a new job is started a day later than the previous task, and it is not started until the previous one has terminated. You will need more error checks for proper execution. MyJob is very trivial:

public class MyJob implements Runnable {
  public void run() {
    BatchRuntime.getJobOperator().start("myJob", new Properties());
  }
}

Of course, you can schedule it automatically by calling this code in @PostConstruct. A slight variation of the second technique allows you to run the job after a fixed delay, as shown:

public void runJob2() {
  executor.scheduleWithFixedDelay(new MyJob(), 2, 3, TimeUnit.HOURS);
}

The first task is executed 2 hours after the runJob2 method is called, and then with a 3-hour delay between subsequent executions. This support is available to you within the Java EE platform. In addition, you can also invoke BatchRuntime.getJobOperator().start("myJob", new Properties()); from any of your Quartz-scheduled methods as well. You can try all of this on WildFly, and there are a ton of Java EE 7 samples at github.com/javaee-samples/javaee7-samples. This particular sample is available at github.com/javaee-samples/javaee7-samples/tree/master/batch/scheduling. How are you scheduling your Batch jobs?

Reference: Schedule Java EE 7 Batch Jobs from our JCG partner Arun Gupta at the Miles to go 2.0 … blog....
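As an addendum to the Quartz option mentioned above, here is a minimal sketch of what such a Quartz-scheduled method could look like. The class name is an assumption for illustration only, and it presumes the Quartz library is on the classpath with a scheduler already configured to fire this job:

import java.util.Properties;
import javax.batch.runtime.BatchRuntime;
import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

// Hypothetical Quartz job class (not from the original post); a Quartz scheduler
// is assumed to be configured elsewhere to trigger it.
public class StartBatchJob implements Job {
  @Override
  public void execute(JobExecutionContext context) throws JobExecutionException {
    // Delegate to the JSR 352 runtime; "myJob" refers to META-INF/batch-jobs/myJob.xml
    BatchRuntime.getJobOperator().start("myJob", new Properties());
  }
}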

Legendary Product Development

“Brand will not save you, marketing will not save you, and account control will not save you. It’s the products.” – Marc Andreessen I believe there is a recipe for winning in product development. It requires a delicate balance between pragmatism in planning, efficient execution, and the ability to see around corners (into the future). I’ve written this post to share some ideas on how to become legendary in product development.   Idea #1: Usage First Products must be built with a ‘usage first’ mindset. Clients need to be attracted to products by their experience with the product. Clients should be begging for more, because they are so delighted with the experience and outcomes. If you build great products, clients will tell each other. The best way to make a product available for usage is through demos, freemium versions, downloads, and easy access via the cloud. “The people with really great products never say they’re great. They don’t have to. Show, don’t tell..” – Unknown “In the old world, you devoted 30% of your time to building a great service and 70% of your time to shouting about it. In the new world, that inverts. If I build a great product or service, my customers will tell each other.” – Jeff Bezos Idea #2: Simplicity and Design Although this is related to Idea #1 and is in fact a pre-requisite to #1, it has some subtle differences. This is about tapping into how a client feels when they use your product. Do they find it shockingly simple, yet highly functional, leading to an ‘ah-hah’ moment? They should. I read once that people don’t buy products; they buy better versions of themselves. When you’re trying to win customers, are you listing the attributes of a product or can you vividly describe how it will improve their lives? Clients will be attracted to the latter. “Taking a design-centric approach to product development is becoming the default. Designers, at last, have their seat at the table.”- Unknown Idea #3 Speed, Accountability, And Relentless Execution Speed drives exponential improvements and outcomes in any organization. If you complete a task in 1 hour, instead of 1 day, your mean time to a positive outcome is 500% faster. In product development, accelerating cycle times is an under-estimated force in determining winners and losers. Pixar has a company principle that states, “We start from the presumption that our people are talented and want to contribute. We accept that, without meaning to, our company is stifling that talent in myriad unseen ways. Finally, we try to identify those impediments and fix them. “ That principle really resonates with me. A product organization has to break down its own barriers, to achieve its potential. “Life is like a ten speed bicycle. Most of us have gears we never use.” – Charles Schulz Idea #4: Open Source Open source is one the most important phenomena in enterprise software. Legendary product teams will shift their approach to an overt embrace and incorporation of open source into their product development processes. The best business model in software today is utilizing open source and effectively surrounding it with proprietary solutions and features. This drives the cycle time improvements alluded to above. “There are no silver bullets for this, only lead bullets.” – Ben Horowitz Idea #5: Product Management Product management, and its interplay with development, is a critical function in a product organization. 
Development must work with product management to develop forward-looking, client-based insights, and use that insight to push clients faster than they may normally want to move. If you want to learn about product management and how product development should play a role, I recommend 2 things: 1) read every Amazon.com Annual Report and 2) read “Good Product Manager, Bad Product Manager” (you can find it on the web). Great product organizations obsess over feedback and ideas from all constituents. They prefer feedback that challenges their views, instead of reinforcing their views. That enables you to reach the best answer as an organization. “If you’re doing things right, something will always be a little bit broken.” – Unknown Idea #6: D-Teams I believe legendary product development teams need D-teams in the organization. The D stands for Disruption. The role of the D-teams is to disrupt from within. D-teams assess what the organization is working on, identify opportunities, rapidly assemble a team and disrupt. This type of competitive fire will makes the whole team better. Idea #7: Resources One of the most common refrains in every organization today is, “We don’t have enough resources.” Or, “We know what to do, but don’t have the time or money.” This is a choice, not an issue. If something does not have the right resourcing, it is because the organization is choosing that. If you are asking for resources and not getting them, its because you have not prepared a convincing argument. Sometimes, this means you have to “Take the Horse off The Chart”. “Deciding what not to do, is as important as deciding what to do.” – Steve Jobs Idea #8: Client Satisfaction Quality is the taste you leave in a client’s mouth. Most organizations underestimate the negative impact of quality on their business. Its underestimated because it’s hard to quantify. Clients no longer have to buy inferior goods and services since information and alternatives are so easy to obtain. It’s that simple. “What can a sales person say to somebody to get them to buy a product that they already use every day if they don’t like it? Nothing.” -Larry Ellison Idea #9: Clients, Developers, and Users Some product development organizations spend most of their time focused internally. Some take a reprieve from that and think about clients (which is great). But clients are only one of the three constituents that should drive thinking and behavior. Product development organizations will live and die by how they treat, communicate with, and interact with their constituents. They are:Clients Developers UsersThey are all equally important. How do you make it easy for each of them to work with your products and with you? The organization should obsess over answering that question. With each new product idea, you must be able to articulate the “must have” experience and the target of that experience (clients, users, or developers), before debating how and why a product or feature would be useful. This requires a rigorous process for identifying the most passionate stakeholders and getting their unstructured feedback. Idea #10: At the Service of the Sales Team If a product development team spends all their time in the field, then they lose focus on developing outstanding products. On the other hand, a product development team cannot build outstanding products without an intimate understanding of clients, developers, and users. This is the paradox that every product development team faces. 
It is incumbent upon each team to figure out how to balance this, with a priority placed on being at the service of sales and constituents. “The key is not spending time, but in investing it.’ –Stephen Covey Idea #11: Innovation on the Edge You cannot be a leader in innovation without dedicating resources to explore and try things that, by definition, are likely to fail. In strategy speak; this would be a Horizon 3 project. There are many other areas to explore. Identifying the important waves to ride is important. It’s equally important to actually ride the wave (i.e. execute on it). “If you only do things where you know the answer in advance, your company goes away.” –Jeff Bezos Idea #12: Product Releases Per Benedict Evans, there is a distinct pattern in Apple’s product releases and announcements. In almost every case, they are sure to have:Cool, incremental improvements, which cater to existing users ‘Tent-pole’ features, which become focus points for marketing campaigns Fundamental strategic moves that widen the moat around their competitive advantageThis is a very thoughtful approach to product releases. Every organization can learn something from this. Leading in product development is much more about culture, than it is about management and hierarchy. At times, management and hierarchy encumber product development teams. Sometimes the best way to understand how you need to change is by looking at companies or organizations on the other end of the spectrum. GitHub is one of those companies. GitHub has no managers. The sole focus of the organizational design is on developer productivity. Steve Jobs once said, ‘you have to be run by ideas, not hierarchy.” There is latent talent and creativity in every development organization. Being Legendary is about finding a way to unleash that talent.Reference: Legendary Product Development from our JCG partner Rob Thomas at the Rob’s Blog blog....

Defend your Application with Hystrix

In a previous post, http://www.javacodegeeks.com/2014/07/rxjava-java8-java-ee-7-arquillian-bliss.html, we talked about microservices and how to orchestrate them using Reactive Extensions (RxJava). But what happens when one or many services fail because they have been halted or they throw an exception? In a distributed system like a microservices architecture it is normal that a remote service may fail, so communication between services should be fault tolerant and manage the latency in network calls properly. And this is exactly what Hystrix does. Hystrix is a latency and fault tolerance library designed to isolate points of access to remote systems, services and 3rd party libraries, stop cascading failure and enable resilience in complex distributed systems where failure is inevitable.

In a distributed architecture like microservices, one service may need to use other services as dependencies to accomplish its work. Every point in an application that reaches out over the network, or into a client library that can potentially result in network requests, is a source of failure. Worse than failures, these dependencies can also result in increased latencies between services. And this leads us to another big problem: suppose you are developing a service on Tomcat which will open two connections to two services; if one of these services takes more time than expected to send back a response, you will be spending one thread of the Tomcat pool (the one serving the current request) doing nothing but waiting for an answer. If you don't have a high-traffic site this may be acceptable, but if you have a considerable amount of traffic all resources may become saturated and block the whole server. A schema of this scenario is provided on the Hystrix wiki.

The way to avoid the previous problem is to add a thread layer which isolates each dependency from the others. So each dependency (service) may have a thread pool to execute calls to that service. In Hystrix this layer is implemented by the HystrixCommand object, so each call to an external service is wrapped to be executed within a different thread. A schema of this scenario is also provided on the Hystrix wiki.

Hystrix also provides other features:
Each thread has a timeout, so a call may not wait infinitely for a response.
It performs fallbacks wherever feasible to protect users from failure.
It measures successes, failures (exceptions thrown by the client), timeouts, and thread rejections, and allows monitoring.
It implements a circuit-breaker pattern which automatically or manually stops all requests to an external service for a period of time if the error percentage passes a threshold.

So let's start with a very simple example:

public class HelloWorldCommand extends HystrixCommand<String> {
  public HelloWorldCommand() {
    super(HystrixCommandGroupKey.Factory.asKey("HelloWorld"));
  }
  @Override
  protected String run() throws Exception {
    return "Hello World";
  }
}

And then we can execute that command in a synchronous way by using the execute method:

new HelloWorldCommand().execute();

Although this command is synchronous, it is executed in a different thread. By default Hystrix creates a thread pool for each command group defined by the same HystrixCommandGroupKey. In our example Hystrix creates a thread pool linked to all commands grouped under the HelloWorld key; then for every execution, one thread is taken from that pool to execute the command. But of course we can also execute a command asynchronously (which fits perfectly with the asynchronous JAX-RS 2.0 or Servlet 3.0 specifications).
To do it, simply run:

Future<String> helloWorldResult = new HelloWorldCommand().queue();
//some more work
String message = helloWorldResult.get();

In fact, synchronous calls are implemented internally by Hystrix as return new HelloWorldCommand().queue().get();. We have seen that we can execute a command synchronously and asynchronously, but there is a third method, which is reactive execution using RxJava (you can read more about RxJava in my previous post http://www.javacodegeeks.com/2014/07/rxjava-java8-java-ee-7-arquillian-bliss.html). To do it you simply need to call the observe method:

Observable<String> obs = new HelloWorldCommand().observe();
obs.subscribe((v) -> {
  System.out.println("onNext: " + v);
});

But sometimes things can go wrong and the execution of a command may throw an exception. All exceptions thrown from the run() method, except for HystrixBadRequestException, count as failures and trigger getFallback() and the circuit-breaker logic (more on the circuit breaker below). Any business exception that you don't want to count as a service failure (for example illegal arguments) must be wrapped in HystrixBadRequestException. But what happens with service failures — what can Hystrix do for us? In summary, Hystrix offers two things: a method to do something in case of a service failure (this method may return an empty, default or stubbed value, or for example can invoke another service that accomplishes the same logic as the failing one), and some kind of logic to open and close the circuit automatically.

Fallback

The method that is called when an exception occurs (except for HystrixBadRequestException) is getFallback(). You can override this method and provide your own implementation:

public class HelloWorldCommand extends HystrixCommand<String> {
  public HelloWorldCommand() {
    super(HystrixCommandGroupKey.Factory.asKey("HelloWorld"));
  }
  @Override
  protected String getFallback() {
    return "Good Bye";
  }
  @Override
  protected String run() throws Exception {
    //return "Hello World";
    throw new IllegalArgumentException();
  }
}

Circuit breaker

The circuit breaker is a software pattern to detect failures and avoid receiving the same error constantly. Also, if the service is remote, an error can be returned without waiting for the TCP connection timeout. Consider this typical example: a system needs to access a database about 100 times per second, and the database is failing. The same error will be thrown 100 times per second, and because connecting to a remote database implies a TCP connection, each client will wait until the TCP timeout expires. So it would be much more useful if the system could detect that a service is failing and prevent clients from making more requests for some period of time. And this is what the circuit breaker does. For each execution, Hystrix checks whether the circuit is open (tripped), which means an error has occurred; in that case the request will not be sent to the service and the fallback logic will be executed. If the circuit is closed, then the request is processed and may work. Hystrix maintains statistics of successful vs failed requests. When Hystrix detects that, within a defined span of time, a threshold of failed commands has been reached, it will open the circuit so future requests can return the error as soon as possible without consuming resources on a service which is probably offline. The good news is that Hystrix is also responsible for closing the circuit again.
After a configured time has elapsed, Hystrix will let an incoming request through again; if this request is successful it will close the circuit, and if not it will keep the circuit open. In the next diagram from the Hystrix website you can see the interaction between Hystrix and the circuit.

Now that we have seen the basics of Hystrix, let's see how to write tests to check that Hystrix works as expected. One last thing before the test: in Hystrix there is a special class called HystrixRequestContext. This class contains the state and manages the lifecycle of a request. You need to initialize this class if, for example, you want Hystrix to manage caching of results, or for logging purposes. Typically this class is initialized just before starting the business logic (for example in a Servlet Filter), and shut down after the request is processed. Let's use the previous HelloWorldCommand to validate that the fallback method is called when the circuit is open.

public class HelloWorldCommand extends HystrixCommand<String> {
  public HelloWorldCommand() {
    super(HystrixCommandGroupKey.Factory.asKey("HelloWorld"));
  }
  @Override
  protected String getFallback() {
    return "Good Bye";
  }
  @Override
  protected String run() throws Exception {
    return "Hello World";
  }
}

And the test. Keep in mind that I have added a lot of asserts in the test for academic purposes.

@Test
public void should_execute_fallback_method_when_circuit_is_open() {
  //Initialize HystrixRequestContext to be able to get some metrics
  HystrixRequestContext context = HystrixRequestContext.initializeContext();
  HystrixCommandMetrics commandMetrics = HystrixCommandMetrics.getInstance(HystrixCommandKey.Factory.asKey(HelloWorldCommand.class.getSimpleName()));

  //We use Archaius to set the circuit as closed.
  ConfigurationManager.getConfigInstance().setProperty("hystrix.command.default.circuitBreaker.forceOpen", false);
  String successMessage = new HelloWorldCommand().execute();
  assertThat(successMessage, is("Hello World"));

  //We use Archaius to open the circuit
  ConfigurationManager.getConfigInstance().setProperty("hystrix.command.default.circuitBreaker.forceOpen", true);
  String failMessage = new HelloWorldCommand().execute();
  assertThat(failMessage, is("Good Bye"));

  //Prints Request => HelloWorldCommand[SUCCESS][19ms], HelloWorldCommand[SHORT_CIRCUITED, FALLBACK_SUCCESS][0ms]
  System.out.println("Request => " + HystrixRequestLog.getCurrentRequest().getExecutedCommandsAsString());

  assertThat(commandMetrics.getHealthCounts().getTotalRequests(), is(2));
  assertThat(commandMetrics.getHealthCounts().getErrorCount(), is(1));
}

This is a very simple example because both the execute method and the fallback method are pretty simple, but if you consider that the execute method may contain complex logic and the fallback method can be just as complex (for example retrieving data from another server, generating some kind of stubbed data, …), then writing integration or functional tests that validate this whole flow starts to make sense. Keep in mind that sometimes your fallback logic may depend on previous calls from the current user or other users. Hystrix also offers other features like caching results, so any command already executed within the same HystrixRequestContext may return a cached result (https://github.com/Netflix/Hystrix/wiki/How-To-Use#Caching). Another feature it offers is collapsing. It enables automated batching of requests into a single HystrixCommand instance execution. It can use batch size and time as the triggers for executing a batch.
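To give a concrete idea of the caching feature mentioned above, here is a minimal sketch (the class name and cache key choice are illustrative assumptions, not code from the original post): a command opts into request caching by overriding getCacheKey(), and further executions with the same key inside the same HystrixRequestContext are served from the cache.

import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;

// Illustrative command, not from the original post
public class HelloWorldCachedCommand extends HystrixCommand<String> {

  private final String name;

  public HelloWorldCachedCommand(String name) {
    super(HystrixCommandGroupKey.Factory.asKey("HelloWorld"));
    this.name = name;
  }

  @Override
  protected String run() throws Exception {
    return "Hello " + name;
  }

  // Returning a non-null key turns on request caching within the active HystrixRequestContext
  @Override
  protected String getCacheKey() {
    return name;
  }
}

With a HystrixRequestContext initialized, a second new HelloWorldCachedCommand("Alex").execute() with the same argument should not invoke run() again, and isResponseFromCache() can be asserted on the second command instance in a test.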
As you can see, Hystrix is a really simple yet powerful library that you should take into consideration if your applications call external services. We keep learning, Alex.

Sing us a song, you're the piano man, Sing us a song tonight, Well, we're all in the mood for a melody, And you've got us feelin' alright (Piano Man – Billy Joel)

Music: https://www.youtube.com/watch?v=gxEPV4kolz0

Reference: Defend your Application with Hystrix from our JCG partner Alex Soto at the One Jar To Rule Them All blog....

Developing a top-down Web Service project

This is a sample chapter taken from the Advanced JAX-WS Web Services book edited by Alessio Soldano. The bottom-up approach for creating a Web Service endpoint has been introduced in the first chapter. It allows exposing existing beans as Web Service endpoints very quickly: in most cases, turning the classes into endpoints is a matter of simply adding few annotations in the code. However, when developing a service with an already defined contract, it is far simpler (and effective) to use the top-down approach, since a wsdl-to-java tool can generate the annotated code matching the WSDL. This is the preferred solution in multiple scenarios such as the following ones:Creating a service that adheres to the XML Schema and WSDL that have been developed by hand up front; Exposing a service that conforms to a contract specified by a third party (e.g. a vendor that calls the service using an already defined set of messages); Replacing the implementation of an existing Web Service while keeping compatibility with older clients (the contract must not change).In the next sections, an example of top-down Web Service endpoint development is provided, as well as some details on constraints the developer has to be aware of when coding, regardless of the chosen approach. Creating a Web Service using the top-down approach In order to set up a full project which includes a Web Service endpoint and a JAX-WS client we will use two Maven projects. The first one will be a standard webapp-javaee7 project, which will contain the Web Service Endpoint. The second one, will be just a quickstart Maven project that will execute a Test case against the Web Service. Let’s start creating the server project as usual with: mvn -DarchetypeGroupId=org.codehaus.mojo.archetypes -DarchetypeArtifactId=webapp-javaee7 -DarchetypeVersion=0.4-SNAPSHOT -DarchetypeRepository=https://nexus.codehaus.org/content/repositories/snapshots -DgroupId=com.itbuzzpress.chapter2.wsdemo -DartifactId=ws-demo2 -Dversion=1.0 -Dpackage=com.itbuzzpress.chapter2.wsdemo -Darchetype.interactive=false --batch-mode --update-snapshots archetype:generate Next step will be creating the Web Service interface and stubs from a WSDL contract. The steps are similar to those for building up a client for the same contract. The only difference is that the wsconsume script will output the generated source files into our Maven project: $ wsconsume.bat -k CustomerService.wsdl -o ws-demo-wsdl\src\main\java In addition to the generated classes, which we have discussed at the beginning of the chapter, we need to provide a Service Endpoint Implementation that contains the Web Service functionalities: @WebService(endpointInterface="org.jboss.test.ws.jaxws.samples.webresult.Customer") public class CustomerImpl implements Customer { public CustomerRecord locateCustomer(String firstName, String lastName, USAddress address) { CustomerRecord cr = new CustomerRecord(); cr.setFirstName(firstName); cr.setLastName(lastName); return cr; } }The endpoint implementation class implements the endpoint interface and references it through the @WebService annotation. Our WebService class does nothing fancy, just create a CustomerRecord object using the parameters received as input. In a real world example, you would collect the CustomerRecord using the Persistence Layer for example. 
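A variant worth sketching here, anticipating the wsdlLocation note below: the implementation can reference the bundled contract directly so that exactly that document gets published. This is only an illustration — the WEB-INF/wsdl path is an assumed packaging location, not one mandated by the book, and the attribute may equally be placed on the service endpoint interface:

import javax.jws.WebService;

@WebService(
    endpointInterface = "org.jboss.test.ws.jaxws.samples.webresult.Customer",
    wsdlLocation = "WEB-INF/wsdl/CustomerService.wsdl") // assumed location of the WSDL inside the WAR
public class CustomerImpl implements Customer {
  public CustomerRecord locateCustomer(String firstName, String lastName, USAddress address) {
    CustomerRecord cr = new CustomerRecord();
    cr.setFirstName(firstName);
    cr.setLastName(lastName);
    return cr;
  }
}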
Once the implementation class has been included in the project, the project needs to be packaged and deployed to the target container, which will expose the service endpoint with the same contract that was consumed by the tool. It is also possible to reference a local WSDL file in the @WebService wsdlLocation attribute of the Service Interface and include the file in the deployment. That would cause the exact document you provided to be published. If you are deploying the Web Service to the WildFly application server, you can check from a management instrument like the Admin Console that the endpoint is now available: select the upper Runtime tab and click on the Web Services link under the Subsystem section on the left.

Requirements of a JAX-WS endpoint

Regardless of the approach chosen for developing a JAX-WS endpoint, the actual implementation needs to satisfy some requirements:

The implementing class must be annotated with either the javax.jws.WebService or the javax.jws.WebServiceProvider annotation.
The implementing class may explicitly reference a service endpoint interface through the endpointInterface element of the @WebService annotation but is not required to do so. If no endpointInterface is specified in @WebService, the service endpoint interface is implicitly defined for the implementing class.
The business methods of the implementing class must be public and must not be declared static or final.
The javax.jws.WebMethod annotation is to be used on business methods to be exposed to web service clients; if no method is annotated with @WebMethod, all business methods are exposed.
Business methods that are exposed to web service clients must have JAXB-compatible parameters and return types.
The implementing class must not be declared final and must not be abstract.
The implementing class must have a default public constructor and must not define the finalize method.
The implementing class may use the javax.annotation.PostConstruct or the javax.annotation.PreDestroy annotations on its methods for lifecycle event callbacks.

Requirements for building and running a JAX-WS client

A JAX-WS client can be part of any Java project and is not explicitly required to be part of a JAR/WAR archive deployed on a JavaEE container. For instance, the client might simply be contained in a quickstart Maven project as follows:

mvn archetype:generate -DarchetypeGroupId=org.apache.maven.archetypes -DarchetypeArtifactId=maven-archetype-quickstart -DgroupId=com.itbuzzpress.chapter2.wsdemo -DartifactId=client-demo-wsdl -Dversion=1.0 -Dpackage=com.itbuzzpress.chapter2.wsdemo -Darchetype.interactive=false --batch-mode

As your client needs to reference the endpoint interface and stubs, you need to provide them either by copying them from the server project or by generating them again using wsconsume:

$ wsconsume.bat -k CustomerService.wsdl -o client-demo-wsdl\src\main\java

Now include a minimal Client Test application, which is part of a JUnit test case:

public class AppTest extends TestCase {
  public void testApp() {
    CustomerService service = new CustomerService();
    Customer port = service.getCustomerPort();
    CustomerRecord record = port.locateCustomer("John", "Li", new USAddress());
    System.out.println("Customer record is " + record);
    assertNotNull(record);
  }
}

Compiling and running the test

In order to successfully run a WS client application, a classloader needs to be properly set up to include the desired JAX-WS implementation libraries (and the required transitive dependencies, if any).
Depending on the environment the client is meant to be run in, this might imply adding some jars to the classpath, or adding some artifact dependencies to the Maven dependency tree, setting the IDE properly, etc. Since Maven is used to build the application containing the client, you can configure your pom.xml as follows so that it includes a dependency to the JBossWS: <dependency> <groupId>org.jboss.ws.cxf</groupId> <artifactId>jbossws-cxf-client</artifactId> <version>4.2.3.Final</version> <scope>provided</scope> </dependency>Now, you can execute the testcase which will call the JAX-WS API to serve the client invocation using JBossWS. mvn clean package test Focus on the JAX-WS implementation used by the client The JAX-WS implementation to be used for running a JAX-WS client is selected at runtime by looking for META-INF/services/javax.xml.ws.spi.Provider resources through the application classloader. Each JAX-WS implementation has a library (jar) including that resource file which internally references the proper class implementing the JAX-WS SPI Provider. On WildFly 8.0.0.Final application server the JAX-WS implementation is contained in the META-INF/services/javax.xml.ws.spi.Provider of the file jbossws-cxf-factories-4.2.3.Final: org.jboss.wsf.stack.cxf.client.ProviderImpl Therefore, it is extremely important to control which artifacts or jar libraries are included in the classpath the application classloader is constructed from. If multiple implementations are found, order matters, hence the first implementation in the classpath will be used. The safest way to avoid any classpath issue (and thus load another JAX-WS implementation) is to set the java.endorsed.dirs system property to include the jbossws-cxf-factories.jar; if you don’t do that, make sure you don’t include ahead of your classpath other META-INF/services/javax.xml.ws.spi.Provider resources which will trigger another JAX-WS implementation. Finally, if the JAX-WS client is meant to run on WildFly as part of a JavaEE application, the JBossWS JAX-WS implementation will be automatically selected for serving the client. This excerpt has been taken from the “Advanced JAX-WS Web Services” book in which you’ll learn the concepts of SOAP based Web services architecture and get practical advice on building and deploying Web services in the enterprise. Starting from the basics and the best practices for setting up a development environment, this book enters into the inner details of the JAX-WS in a clear and concise way. You will also learn about the major toolkits available for creating, compiling and testing SOAP Web services and how to address common issues such as debugging data and securing its content. What you will learn from this book:Move your first steps with SOAP Web services. Installing the tools required for developing and testing applications. Developing Web services using top-down and bottom-up approach. Using Maven archetypes to speed up Web services creation. Getting into the details of JAX-WS types: Java to XML mapping and XML to Java Developing SOAP Web services on WildFly 8 and Tomcat. Running native Apache CXF on WildFly. Securing Web services. Applying authentication policies to your services. Encrypting the communication....

Some more unit test tips

In my previous post I showed some tips on unit testing JavaBeans. In this blog entry I will give two more tips on unit testing some fairly common Java code, namely utility classes and Log4J logging statements. Testing Utility classes If your utility classes follow the same basic design as the ones I tend to write, they consist of a final class with a private constructor and all static methods.       Utility class tester package it.jdev.example;import static org.junit.Assert.*;import java.lang.reflect.*;import org.junit.Test;/** * Tests that a utility class is final, contains one private constructor, and * all methods are static. */ public final class UtilityClassTester {private UtilityClassTester() { super(); }/** * Verifies that a utility class is well defined. * * @param clazz * @throws Exception */ @Test public static void test(final Class<?> clazz) throws Exception { // Utility classes must be final. assertTrue("Class must be final.", Modifier.isFinal(clazz.getModifiers()));// Only one constructor is allowed and it has to be private. assertTrue("Only one constructor is allowed.", clazz.getDeclaredConstructors().length == 1); final Constructor<?> constructor = clazz.getDeclaredConstructor(); assertFalse("Constructor must be private.", constructor.isAccessible()); assertTrue("Constructor must be private.", Modifier.isPrivate(constructor.getModifiers()));// All methods must be static. for (final Method method : clazz.getMethods()) { if (!Modifier.isStatic(method.getModifiers()) && method.getDeclaringClass().equals(clazz)) { fail("Non-static method found: " + method + "."); } } }} This UtilityClassTester itself also follows the utility class constraints noted above, so what better way to demonstrate its use by using it to test itself: Test case for the UtilityClassTester package it.jdev.example;import org.junit.Test;public class UtilityClassTesterTest {@Test public void test() throws Exception { UtilityClassTester.test(UtilityClassTester.class); }} Testing Log4J logging events When calling a method that declares an exception you’ll either re-declare that same exception, or you’ll try to deal with it within a try-catch block. In the latter case, the very least you will do is log the caught exception. A very simplistic example is the following: MyService example package it.jdev.example;import java.lang.invoke.MethodHandles;import org.apache.log4j.Logger; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Service;@Service public class MyService {private static final Logger LOGGER = Logger.getLogger(MethodHandles.Lookup.class);@Autowired private MyRepository myRepository;public void doSomethingUseful() { try { myRepository.doSomethingVeryUseful(); } catch (SomeException e) { LOGGER.error("Some very informative error logging.", e); } }} Of course, you will want to test that the exception is logged appropriately. 
Something along the line of the following: Test case for MyService logging event package it.jdev.example;import static org.junit.Assert.*;import org.apache.log4j.spi.LoggingEvent; import org.junit.*; import org.mockito.*;public class MyServiceTest {@Mock private MyRepository myRepository;@InjectMocks private MyService myService = new MyService();@Before public void setup() { MockitoAnnotations.initMocks(this); }@Test public void thatSomeExceptionIsLogged() throws Exception { TestAppender testAppender = new TestAppender();Mockito.doThrow(SomeException.class).when(myRepository).doSomethingVeryUseful(); myService.doSomethingUseful();assertTrue(testAppender.getEvents().size() == 1); final LoggingEvent loggingEvent = testAppender.getEvents().get(0); assertEquals("Some very informative error logging.", loggingEvent.getMessage().toString()); }} But how can you go about to achieve this? As it turns out it is very easy to add a new LogAppender to the Log4J RootLogger. TestAppender for Log4J package it.jdev.example;import java.util.*;import org.apache.log4j.*; import org.apache.log4j.spi.*;/** * Utility for testing Log4j logging events. * <p> * Usage:<br /> * <code> * TestAppender testAppender = new TestAppender();<br /> * classUnderTest.methodThatWillLog();<br /><br /> * LoggingEvent loggingEvent = testAppender.getEvents().get(0);<br /><br /> * assertEquals()...<br /><br /> * </code> */ public class TestAppender extends AppenderSkeleton {private final List<LoggingEvent> events = new ArrayList<LoggingEvent>();public TestAppender() { this(Level.ERROR); }public TestAppender(final Level level) { super(); Logger.getRootLogger().addAppender(this); this.addFilter(new LogLevelFilter(level)); }@Override protected void append(final LoggingEvent event) { events.add(event); }@Override public void close() { }@Override public boolean requiresLayout() { return false; }public List<LoggingEvent> getEvents() { return events; }/** * Filter that decides whether to accept or deny a logging event based on * the logging level. */ protected class LogLevelFilter extends Filter {private final Level level;public LogLevelFilter(final Level level) { super(); this.level = level; }@Override public int decide(final LoggingEvent event) { if (event.getLevel().isGreaterOrEqual(level)) { return ACCEPT; } else { return DENY; } }}}Reference: Some more unit test tips from our JCG partner Wim van Haaren at the JDev blog....

Custom JSR 303 Bean Validation constraints for the JSR 310 New Date/Time API

With JSR 310 Java 8 finally brought us a decent date and time API. For those of you that are still using Java 7 – like I am at my current project – there is an excellent backport available, see www.threeten.org for more details. However, I’m not going to go into any details about using the new API since there are already a ton of blog posts out there about the topic. What I am going to show you in this post is how you can use the Date/Time API in conjunction with the JSR 303 Bean Validation API by writing your own custom annotations. If you’re using both bean validation and the new date/time API you’ll probably want to use them in conjunction. The API and an implementation like Hibernate Validator only provide a handful of constraints, e.g. NotEmpty or @Pattern. However, as of yet there are no out-of-the-box constraints for JSR 310. Fortunately it is very easy to create your own constraints. As an example I will demonstrate how you can write your own @Past annotation for validating java.time.LocalDate fields. For testing purposes we’ll start off with a very simple class that holds a date and a dateTime. These fields are supposed to represent dates in the past. Therefore they are annotated with the @Past anootation: ClassWithPastDates package it.jdev.example.jsr310.validator;import java.time.LocalDate; import java.time.LocalDateTime;public class ClassWithPastDates {@Past private LocalDate date;@Past private LocalDateTime dateTime; public LocalDate getDate() { return date; } public void setDate(LocalDate date) { this.date = date; } public LocalDateTime getDateTime() { return dateTime; } public void setDateTime(LocalDateTime dateTime) { this.dateTime = dateTime; }} Next, we’ll write a very basic unit test for the @Past constraint that demonstrates our intentions: obviously besides dates that lie in the past, we’ll also want a null reference to be valid but dates in the future to be invalid, and even today should count as invalid. 
PastTest package it.jdev.example.jsr310.validator;import static org.junit.Assert.assertEquals;import java.time.LocalDate; import java.time.LocalDateTime; import java.util.Set;import javax.validation.ConstraintViolation; import javax.validation.Validation; import javax.validation.Validator; import javax.validation.ValidatorFactory;import org.junit.Before; import org.junit.Test;public class PastTest { private ClassWithPastDates classUnderTest;@Before public void setup() { classUnderTest = new ClassWithPastDates(); }@Test public void thatNullIsValid() { Set<ConstraintViolation<ClassWithPastDates>> violations = validateClass(classUnderTest); assertEquals(violations.size(), 0); }@Test public void thatYesterdayIsValid() throws Exception { classUnderTest.setDate(LocalDate.now().minusDays(1)); classUnderTest.setDateTime(LocalDateTime.now().minusDays(1)); Set<ConstraintViolation<ClassWithPastDates>> violations = validateClass(classUnderTest); assertEquals(violations.size(), 0); }@Test public void thatTodayIsInvalid() throws Exception { classUnderTest.setDate(LocalDate.now()); classUnderTest.setDateTime(LocalDateTime.now()); Set<ConstraintViolation<ClassWithPastDates>> violations = validateClass(classUnderTest); assertEquals(violations.size(), 2); }@Test public void thatTomorrowIsInvalid() throws Exception { classUnderTest.setDate(LocalDate.now().plusDays(1)); classUnderTest.setDateTime(LocalDateTime.now().plusDays(1)); Set<ConstraintViolation<ClassWithPastDates>> violations = validateClass(classUnderTest); assertEquals(violations.size(), 2); }private Set<ConstraintViolation<ClassWithPastDates>> validateClass(ClassWithPastDates myClass) { ValidatorFactory factory = Validation.buildDefaultValidatorFactory(); Validator validator = factory.getValidator(); Set<ConstraintViolation<ClassWithPastDates>> violations = validator.validate(myClass); return violations; }} Now that we’ve got the basic test set up, we can implement the constraint itself. This consists of two steps. First we’ll have to write the annotation, and then we’ll have to implement a ConstraintValidator. To start with the annotation: @interface Past package it.jdev.example.jsr310.validator;import java.lang.annotation.Documented; import java.lang.annotation.ElementType; import java.lang.annotation.Retention; import java.lang.annotation.RetentionPolicy; import java.lang.annotation.Target;import javax.validation.Constraint; import javax.validation.Payload;@Target({ ElementType.FIELD }) @Retention(RetentionPolicy.RUNTIME) @Constraint(validatedBy = PastValidator.class) @Documented public @interface Past {String message() default "it.jdev.example.jsr310.validator.Past.message";Class<?>[] groups() default {};Class<? extends Payload>[] payload() default {};} As you can see, the @Past annotation is not very spectacular. The main thing to notice is the @Constraint annotations where we specify which class will be used to perform the actual validation. 
PastValidator package it.jdev.example.jsr310.validator;import java.time.LocalDate; import java.time.temporal.Temporal;import javax.validation.ConstraintValidator; import javax.validation.ConstraintValidatorContext;public class PastValidator implements ConstraintValidator<Past, Temporal> {@Override public void initialize(Past constraintAnnotation) { }@Override public boolean isValid(Temporal value, ConstraintValidatorContext context) { if (value == null) { return true; } LocalDate ld = LocalDate.from(value); if (ld.isBefore(LocalDate.now())) { return true; } return false; }} The PastValidator is where all the magic happens. By implementing the ConstraintValidator interface we’re obliged to provide two methods but for our example only the isValid() method is of use, this is where we’ll perform the actual validation. Note that we’ve used the java.time.temporal.Temporal as the type because it is the interface that both the LocalDate and LocalDateTime classes have in common. This allows us to use the same @Past for both LocalDate and LocalDateTime fields. And that really is all there is to it. With this very basic example I’ve shown how easy it is to create your own custom JSR 303 bean validation constraint.Reference: Custom JSR 303 Bean Validation constraints for the JSR 310 New Date/Time API from our JCG partner Wim van Haaren at the JDev blog....

2 Ways of Passing Properties / Parameters in Java EE 7 Batch

When it comes to the Java EE 7 Batch Processing facility, there are 2 ways of passing properties / parameters to the chunks and batchlets. This quick guide shows you the 2 ways, which can be used very frequently when developing batch processing the Java EE 7 way.

1. Pre-Defined Properties / Parameters Before Runtime

Pre-defined properties are properties (name-value pairs) which you define before deploying the application. In other words, they are fixed and static, never dynamic, and the values will always stay the same when you retrieve them. This is done through the job descriptor XML file, which resides in e.g. META-INF/batch-jobs/demo-job.xml. For example:

<?xml version="1.0" encoding="UTF-8"?>
<job id="demoJob" xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="1.0">
  <properties>
    <property name="staticParamName1" value="staticParamValue1" />
    <property name="staticParamName2" value="staticParamValue2" />
  </properties>
  <!-- Then, the rest of the steps definition -->
</job>

All it takes is to have each pre-defined property placed within the <properties /> tag. After the app is deployed, these properties will be made available during runtime to the objects of ItemReader, ItemProcessor, ItemWriter and Batchlet defined in the XML file. Here's an example of how to retrieve the pre-defined properties / parameters during runtime:

@Dependent
@Named( "DemoReader" )
public class DemoReader extends AbstractItemReader {
  @Inject
  private JobContext jobCtx;

  @Override
  public void open( Serializable ckpt ) throws Exception {
    // Retrieve the value of staticParamName1 defined in job descriptor XML
    String staticParamValue1 = jobCtx.getProperties().getProperty( "staticParamName1" );

    // The rest of the implementation
  }

  // The rest of the overridden methods
}

The downside of this is that the properties' values will always stay the same throughout the runtime. If you need to pass a dynamic value to the batch step objects, read on…

2. Passing Properties / Parameters Dynamically During Runtime

There are situations where dynamic property / parameter values are desired during a batch run. To do this, the properties / parameters first have to be defined and passed to the batch job by the job operator. For example, I have a JobOperator (Singleton EJB) which will start the batch job through the method runBatchJob() with two dynamic properties / parameters to be passed to the batch job objects:

@Singleton
public class BatchJobOperator implements Serializable {

  public void runBatchJob() {
    Properties runtimeParameters = new Properties();
    runtimeParameters.setProperty( "dynamicPropertyName1", "dynamicPropertyValue1" );
    runtimeParameters.setProperty( "dynamicPropertyName2", "dynamicPropertyValue2" );

    JobOperator jo = BatchRuntime.getJobOperator();

    // Run the batch job with the runtimeParameters passed
    jo.start( "name-of-job-xml-file-without-dot-xml", runtimeParameters );
  }
}

Once the application server has the job running, the objects involved in the job (ItemReader, ItemProcessor, ItemWriter and Batchlet) can retrieve the properties set in runtimeParameters, but in a different way.
Here's how to do it in an ItemReader (the same goes for the rest of the batch job step objects):

@Dependent
@Named( "DemoReader" )
public class DemoReader extends AbstractItemReader {
  @Inject
  private JobContext jobCtx;

  @Override
  public void open( Serializable ckpt ) throws Exception {
    // Here's how to retrieve dynamic runtime properties / parameters
    Properties runtimeParams = BatchRuntime.getJobOperator().getParameters( jobCtx.getExecutionId() );
    String dynamicPropertyValue1 = runtimeParams.getProperty( "dynamicPropertyName1" );
    String dynamicPropertyValue2 = runtimeParams.getProperty( "dynamicPropertyName2" );

    // The rest of the implementation
  }

  // The rest of the overridden methods
}

Notice the difference: instead of getting the properties from the JobContext, the dynamically defined runtime properties have to be obtained from the BatchRuntime's JobOperator, by passing the JobContext's execution ID. Hope this is useful.

Reference: 2 Ways of Passing Properties / Parameters in Java EE 7 Batch from our JCG partner Max Lam at the A Developer’s Scrappad blog....

Using rlimit (And Why You Should)

I’ve been going through some old notes and came across a reminder of setrlimit(2). This is a C system call that allows an application to specify resource limitations on a number of important parameters:

RLIMIT_AS – The maximum size of the process’s virtual memory (address space) in bytes.
RLIMIT_CORE – Maximum size of core file.
RLIMIT_CPU – CPU time limit in seconds.
RLIMIT_DATA – The maximum size of the process’s data segment (initialized data, uninitialized data, and heap).
RLIMIT_FSIZE – The maximum size of files that the process may create.
RLIMIT_MEMLOCK – The maximum number of bytes of memory that may be locked into RAM.
RLIMIT_MSGQUEUE – Specifies the limit on the number of bytes that can be allocated for POSIX message queues for the real user ID of the calling process.
RLIMIT_NICE – Specifies a ceiling to which the process’s nice value can be raised using setpriority(2) or nice(2).
RLIMIT_NOFILE – Specifies a value one greater than the maximum file descriptor number that can be opened by this process.
RLIMIT_NPROC – The maximum number of processes (or, more precisely on Linux, threads) that can be created for the real user ID of the calling process.
RLIMIT_RSS – Specifies the limit (in pages) of the process’s resident set (the number of virtual pages resident in RAM).
RLIMIT_RTPRIO – Specifies a ceiling on the real-time priority that may be set for this process using sched_setscheduler(2) and sched_setparam(2).
RLIMIT_RTTIME – Specifies a limit (in microseconds) on the amount of CPU time that a process scheduled under a real-time scheduling policy may consume without making a blocking system call.
RLIMIT_SIGPENDING – Specifies the limit on the number of signals that may be queued for the real user ID of the calling process.
RLIMIT_STACK – The maximum size of the process stack, in bytes.

The limits for all programs are specified in configuration files (/etc/security/limits.conf and /etc/security/limits.d), or can be set in an individual shell and its processes via the ‘ulimit’ shell function. Under Linux the current resource limits for a process are visible at /proc/[pid]/limits. The limits can also be set programmatically, via setrlimit(2). Any process can give itself more restrictive limits. Any privileged process (running as root or with the correct capability) can give itself more permissive limits. I believe most systems default to unlimited or very high limits and it is the responsibility of the application to specify tighter limits. Better secured systems will do the reverse – they’ll have much tighter restrictions and use a privileged loader to grant more resources to specific programs.

Why do we care? Security in depth. First, people make mistakes. Setting reasonable limits keeps a runaway process from taking down the system. Second, attackers will take advantage of any opportunity they can find. Buffer overflows aren’t an abstract concern – they are real and often allow an attacker to execute arbitrary code. Reasonable limits may be enough to sharply curtail the damage caused by an exploit. Here are some concrete examples: First, setting RLIMIT_NPROC to zero means that the process cannot fork/exec a new process – an attacker cannot execute arbitrary code as the current user. (Note: the man page suggests this may limit the total number of processes for the user, not just in this process and its children. This should be double-checked.) It also prevents a more subtle attack where a process is repeatedly forked until a desired PID is acquired.
PIDs should be unique but apparently some kernels now support a larger PID space than the traditional pid_t. That means legacy system calls may be ambiguous. Second, setting RLIMIT_AS, RLIMIT_DATA, and RLIMIT_MEMLOCK to reasonable values prevents a process from forcing the system to thrash by limiting available memory. Third, setting RLIMIT_CORE to a reasonable value (or disabling core dumps entirely) has historically been used to prevent denial of service attacks by filling the disk with core dumps. Today core dumps are often disabled to ensure sensitive information such as encryption keys are not inadvertently written to disk where an attacker can later retrieve them. Sensitive information should also be memlock()ed to prevent it from being written to the swap disk. What about java? Does this impact java? Yes. The standard classloader maintains an open ‘file handle’ for every loaded class. This can be thousands of open file handles for application servers. I’ve seen real-world failures that were ultimately tracked down to hitting the RLIMIT_NOFILE limit. There are three solutions. The first is to increase the number of permitted open files for everyone via the limits.conf file. This is undesirable – we want applications and users to have enough resources to do their job but not much more. The second is to increase the number of permitted open files for just the developers and application servers. This is better than the first option but can still let a rogue process cause a lot of damage. The third is to write a simple launcher app that sets a higher limit before doing an exec() to launch the application server or developer’s IDE. This ensures that only the authorized applications get the additional resources. (Java’s SecurityManager can also be used to limit resource usage but that’s beyond the scope of this discussion.) Sample code Finally some sample code from the prlimit man page. The setrlimit version is similar. #define _GNU_SOURCE #define _FILE_OFFSET_BITS 64 #include <stdio.h> #include <time.h> #include <stdlib.h> #include <unistd.h> #include <sys/resource.h>#define errExit(msg) do { perror(msg); exit(EXIT_FAILURE); } while (0)int main(int argc, char *argv[]) { struct rlimit old, new; struct rlimit *newp; pid_t pid;if (!(argc == 2 || argc == 4)) { fprintf(stderr, "Usage: %s [<new-soft-limit> <new-hard-limit>]\n", argv[0]); exit(EXIT_FAILURE); }pid = atoi(argv[1]); /* PID of target process */newp = NULL; if (argc == 4) { new.rlim_cur = atoi(argv[2]); new.rlim_max = atoi(argv[3]); newp = ≠w; }/* Set CPU time limit of target process; retrieve and display previous limit */ if (prlimit(pid, RLIMIT_CPU, newp, &old) == -1) errExit("prlimit-1"); printf("Previous limits: soft=%lld; hard=%lld\n", (long long) old.rlim_cur, (long long) old.rlim_max);/* Retrieve and display new CPU time limit */ if (prlimit(pid, RLIMIT_CPU, NULL, &old) == -1) errExit("prlimit-2"); printf("New limits: soft=%lld; hard=%lld\n", (long long) old.rlim_cur, (long long) old.rlim_max);exit(EXIT_FAILURE); } Usage in practice It should not be hard to write a function that sets limitations as part of the program startup, perhaps as the final step in program initialization but before reading anything provided by the user. In many cases we can just take the existing resource usage and add just enough to cover what we’ll need to support the user’s request. E.g., perhaps two additional file handles, one for input and one for output. In other cases it’s harder to identify good limits but there are three approaches. 
The first is to focus on what’s critical. E.g., many applications know that they should never launch a subprocess, so RLIMIT_NPROC can be set to zero. (Again, after verifying that this is the limit on processes under the current process, not all processes for the user.) They know that they should never need to open more than a handful of additional files, so RLIMIT_NOFILE can be set to allow a few more open files but no more. Even these modest restrictions can go a long way towards limiting damage.

The second is to simply pick some large value that you are sure will be adequate as a limit on memory or processor usage. Maybe 100 MB is an order of magnitude too large – but it’s an order of magnitude smaller than it was before. This approach can be especially useful for subprocesses in a boss/worker architecture, where the amount of resources required by any individual worker can be well estimated.

The final approach requires more work but will give you the best numbers. During development you’ll add a little bit of additional scaffolding:

Run the program as setuid root but immediately change the effective user to an unprivileged user.
Set a high hard limit and a low soft limit.
Check whether the soft limit is hit on every system call. (You should already be checking for errors.)
On soft limit hits, change the effective user to root, bump the soft limit, restore the original effective user, and retry the operation.
Log it every time you must bump the soft limit.
Variant – have an external process poll the /proc/[pid]/limits file.

With good functional and acceptance tests you should have a solid idea about the resources required by the program. You’ll still want to be generous with the final resource limits, but this should give you a good ‘order of magnitude’ estimate of what you need, e.g., 10 MB vs 2 GB.

On a final note: disk quotas

We’ve been discussing resource limitations on an individual process, but sometimes the problem is resource exhaustion over time – specifically disk usage. An application could inadvertently cause a denial of service attack by filling the disk. There’s an easy solution to this: enabling disk quotas. We normally think of disk quotas as being used to make sure users on a multi-user system play well together, but they can also be used as a security measure to constrain compromised servers.

Reference: Using rlimit (And Why You Should) from our JCG partner Bear Giles at the Invariant Properties blog....

Customizing HttpMessageConverters with Spring Boot and Spring MVC

Exposing a REST based endpoint for a Spring Boot application, or for that matter a straight Spring MVC application, is straightforward. The following is a controller exposing an endpoint that creates an entity based on the content POST’ed to it:

@RestController
@RequestMapping("/rest/hotels")
public class RestHotelController {
    ....
    @RequestMapping(method=RequestMethod.POST)
    public Hotel create(@RequestBody @Valid Hotel hotel) {
        return this.hotelRepository.save(hotel);
    }
}

Internally, Spring MVC uses a component called an HttpMessageConverter to convert the HTTP request to an object representation and back. A set of default converters is automatically registered, supporting a whole range of resource representation formats – JSON and XML, for instance.

Now, if there is a need to customize the message converters in some way, Spring Boot makes it simple. As an example, consider the case where the POST method in the sample above needs to be a little more flexible and should ignore properties which are not present in the Hotel entity. Typically this can be done by configuring the Jackson ObjectMapper; with Spring Boot all that needs to be done is to create a new HttpMessageConverter bean, and that would end up overriding all the default message converters, this way:

@Bean
public MappingJackson2HttpMessageConverter mappingJackson2HttpMessageConverter() {
    MappingJackson2HttpMessageConverter jsonConverter = new MappingJackson2HttpMessageConverter();
    ObjectMapper objectMapper = new ObjectMapper();
    objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
    jsonConverter.setObjectMapper(objectMapper);
    return jsonConverter;
}

This works well for a Spring Boot application. However, for straight Spring MVC applications which do not make use of Spring Boot, configuring a custom converter is a little more complicated – the default converters are not registered automatically, and an end user has to be explicit about registering the defaults. The following is the relevant code for Spring 4 based applications:

@Configuration
public class WebConfig extends WebMvcConfigurationSupport {

    @Bean
    public MappingJackson2HttpMessageConverter customJackson2HttpMessageConverter() {
        MappingJackson2HttpMessageConverter jsonConverter = new MappingJackson2HttpMessageConverter();
        ObjectMapper objectMapper = new ObjectMapper();
        objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
        jsonConverter.setObjectMapper(objectMapper);
        return jsonConverter;
    }

    @Override
    public void configureMessageConverters(List<HttpMessageConverter<?>> converters) {
        converters.add(customJackson2HttpMessageConverter());
        super.addDefaultHttpMessageConverters(converters);
    }
}

Here WebMvcConfigurationSupport provides a way to more finely tune the MVC tier configuration of a Spring based application. In the configureMessageConverters method, the custom converter is registered and then an explicit call is made to ensure that the defaults are registered as well. A little more work than for a Spring Boot based application.

Reference: Customizing HttpMessageConverters with Spring Boot and Spring MVC from our JCG partner Biju Kunjummen at the all and sundry blog....

Using Infinispan as a persistency solution

Cross-posted from https://vaadin.com/blog/-/blogs/using-infinispan-as-a-persistency-solution. Thanks Fredrik and Matti for your permission!

Various RDBMSs are the de-facto standard for persistency. Using them is such a safe bet by architects that I dare say they are used in too many places nowadays. To fight against this, I have recently been exploring alternative persistency options, like graph databases. This time I played with Infinispan.

In case you are not familiar with Infinispan, or distributed key/value data stores in general, you could think of it as a HashMap on steroids. Most essentially, the map is shared among all your cluster nodes. With clustering you can gain huge size, blazing fast access and redundancy, depending on how you configure it. There are several products that compete with Infinispan, like Ehcache and Hazelcast from the open-source world and Oracle Coherence on the commercial side.

Actually, Infinispan is a technology that you might have used without noticing it at all. For example, the high availability features of Wildfly rely heavily on Infinispan caches. It is also often used as a second level cache for ORM libraries. But it can also be used directly as a persistency library as such.

Why would you consider it as your persistency solution?

It is a lightning fast in-memory data storage.
The stored value can be any serializable object – no complex mapping libraries needed.
It is built from the ground up for a clustered environment – your data is safer and faster to access.
It is very easy to scale horizontally.
It has multiple optional cache store alternatives, for writing the state to e.g. disk for cluster-wide reboots.
Not all data needs to be stored forever – Infinispan has built-in sophisticated eviction rules.
There is the possibility to use transactional access for ACID changes.

Sounds pretty amazing, doesn’t it? And it sure is for certain use cases, but all technologies have their weaknesses, and so do key/value data stores. When comparing to RDBMSs, the largest drawback is with relations to other entities. You’ll have to come up with a strategy for how to store references to other entities, and searching based on related features must also be tackled. If you find yourself pondering these questions, be sure to check whether Hibernate OGM could help you. Also, doing some analysis on the data can be considered simpler, or at least more familiar, with traditional SQL queries. Especially if you end up having a lot of data, distributed on multiple nodes, you’ll have to learn the basics of the MapReduce programming model to do any non-trivial queries.

Using Infinispan in a web application

Although Infinispan is not tied to Wildfly, I decided to base my experiments on Wildfly. Its built-in version is available to web applications, if you explicitly request it. The easiest method to do this is to add the following MANIFEST.MF entry to your war file. If you don’t want to spoil your project with obsolete files, just add it using a small war plugin config.

Dependencies: org.infinispan export

Naturally you’ll still want to add an Infinispan dependency to your application, but you can leave its scope as provided. Be sure to use the same version provided by your server; in Wildfly 8, the Infinispan version is 6.0.2.
In a Maven project, add this kind of dependency declaration:

<dependency>
    <groupId>org.infinispan</groupId>
    <artifactId>infinispan-core</artifactId>
    <version>6.0.2.Final</version>
    <!-- Provided as we use the Infinispan provided by Wildfly -->
    <scope>provided</scope>
</dependency>

Before accessing Infinispan “caches”, you need to configure them. Both programmatic and XML configurations are available. With Wildfly, it is most natural to configure the Infinispan data store right in the server config. The “right” config file depends on how you are launching your Wildfly server. If you are testing clustering locally, you probably want to add something like this into your domain.xml, under the <subsystem xmlns="urn:jboss:domain:infinispan:2.0"> section:

<cache-container name="myCache" default-cache="cachedb">
    <transport lock-timeout="60000"/>
    <replicated-cache name="cachedb" batching="true" mode="SYNC"/>
</cache-container>

Note that with this config, the data is only stored within the memory of the cluster nodes. To learn how to tweak cache settings or to set up a disk “backup”, refer to the extensive Infinispan documentation.

To remove all Infinispan references from the UI code, I created an EJB that does all the data access. There I inject the CacheContainer provided by Wildfly and fetch the default cache in an init method.

@Resource(lookup = "java:jboss/infinispan/container/myCache")
CacheContainer cc;

Map<String, MyEntity> cache;

@PostConstruct
void init() {
    this.cache = cc.getCache();
}

I guess you are already wondering: yes, the Map is the very familiar java.util.Map interface, and the rest of the implementation is trivial to any Java developer. Infinispan caches extend the basic Map interface, but in case you need some more advanced features, you can also use the Cache or AdvancedCache types. The MyEntity in the previous code snippet is just a very simple POJO I created for the example. With Vaadin CDI usage, I can then inject the EJB into my UI class and do pretty much anything with it. The actual Vaadin code has no special tricks, just normal CDI-spiced Vaadin code.

Based on this exercise, would I use Infinispan directly for persistency in my next project? Probably not, but for certain apps I would without hesitation. I can also imagine certain hybrid models where some of the data is only in an Infinispan cache and some in a traditional RDBMS, naturally behind an ORM, taking the best of both worlds.

We’ll also be using Infinispan in our upcoming joint webinar with Arun Gupta from RedHat on September 8th, 2014. There we’ll show you a simple Vaadin application and how easy it can be to cluster it using Wildfly.

Reference: Using Infinispan as a persistency solution from our JCG partner Arun Gupta at the Miles to go 2.0 … blog....