
Exploring the SwitchYard 2.0.0.Alpha2 Quickstarts

In one of my last posts I explained how to get started with SwitchYard on WildFly 8.1. In the meantime the project has been busy and released another alpha, Alpha2. A very good opportunity to explore the quickstarts here and refresh your memory about it. Besides the version change, you can still use the earlier blog post to set up your local WildFly 8 server with the latest SwitchYard. As with all frameworks there is plenty of stuff to explore, and a prerequisite for doing this is a working development environment to make it easier.

Setting up JBoss Developer Studio

First things first. Download a copy of the latest JBoss Developer Studio (JBDS) 7.1.1.GA for your operating system and install it. You should already have a JDK in place, so a simple

java -jar jbdevstudio-product-eap-universal-7.1.1.GA-v20140314-2145-B688.jar

will work. A simple 9-step installer will guide you through the steps necessary. Make sure to select a suitable JDK installation; JBDS works and has been tested with Java SE 6.x and 7.x. If you like, install the complete EAP, but it's not a requirement for this little how-to. A basic setup without EAP requires roughly 400 MB of disk space and shouldn't take longer than a couple of minutes. Once you're done with that part, launch the IDE and go on to configure the tooling. We need the JBoss Tools Integration Stack (JBTIS). Configure it by visiting "Help -> Install New Software" and adding a new update site with the "Add" button. Call it SY-Development and point it to "http://download.jboss.org/jbosstools/updates/development/kepler/integration-stack/". Wait for the list to refresh, expand "JBoss Integration and SOA Development" and select all three SwitchYard entries. Click your way through the wizards and you're ready for a restart. Please make sure to disable "Honour all XML schema locations" in Preferences, XML -> XML Files -> Validation, after installation.
This will prevent erroneous XML validation errors from appearing on switchyard.xml files. That's it. Go ahead and import the bean-service example from the earlier blog post (Import -> Maven -> Existing Maven Projects).

General Information about SwitchYard Projects

Let's find out more about the general SwitchYard project layout before we dive into the bean-service example. A SwitchYard project is a Maven-based project with the following characteristics:

- a switchyard.xml file in the project's META-INF folder
- one or more SwitchYard runtime dependencies declared in the pom.xml file
- the org.switchyard:switchyard-plugin mojo configured in the pom.xml file

Generally, a SwitchYard project may also contain a variety of other resources used to implement the application, for example: Java, BPMN2, DRL, BPEL, WSDL, XSD, and XML files. The tooling supports you in creating, changing and developing your SY projects. You can also add SY capabilities to existing Maven projects. More details can be found in the documentation for the Eclipse tooling.

Exploring the Bean-Service Example

The bean-service example is one of the simpler ones and gives a good first impression of SY. All of the example applications in the Quickstarts repository are included in the quickstarts/ directory of your installation and are also available on GitHub. The bean-service quickstart demonstrates the usage of the bean component. The scenario is easy: an OrderService, which is provided through the OrderServiceBean, and an InventoryService, which is provided through the InventoryServiceBean implementation, take care of orders. Orders are submitted through OrderService.submitOrder, and the OrderService then looks up items in the InventoryService to see if they are in stock and the order can be processed. Up to here it is basically a simple CDI-based Java EE application.
In this application the simple process is invoked through a SOAP gateway binding (which is indicated by the little envelope). Let's dive into the implementation a bit. Looking at the OrderServiceBean reveals some more details. It is the implementation of the OrderService interface, which defines the operations. The OrderServiceBean is just a bean class with a few extra CDI annotations. Most notable is:

@org.switchyard.component.bean.Service(OrderService.class)

The @Service annotation allows the SwitchYard CDI extension to discover your bean at runtime and register it as a service. Every bean service must have an @Service annotation with a value identifying the service interface for the service. In addition to providing a service in SwitchYard, beans can also consume other services. Those references need to be injected. In this example the InventoryService is injected:

@Inject
@org.switchyard.component.bean.Reference
private InventoryService _inventory;

Finally, all you need is the switchyard.xml configuration file, where your services, components, types and implementations are described:

<composite name="orders">
    <component name="OrderService">
        <implementation.bean class="org.switchyard.quickstarts.bean.service.OrderServiceBean"/>
        <service name="OrderService">
            <interface.java interface="org.switchyard.quickstarts.bean.service.OrderService"/>
        </service>
    </component>
</composite>

That was a very quick rundown. We've not touched the webservice endpoints, the WSDL, or the Transformer configuration and implementation. Have a look at the SwitchYard tutorial which was published by mastertheboss and take the chance to read more about SY at the following links:

- SwitchYard Project Documentation
- SwitchYard Homepage
- Community Pages on JBoss.org

SwitchYard is part of Fuse ServiceWorks; give it a try in a full-fledged SOA suite. Reference: Exploring the SwitchYard 2.0.0.Alpha2 Quickstarts from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog....

Men in Tech

Background

Between my partner and me, we have six daughters, and as they have grown I have become more interested in their long-term future, the role of women in society, the way technology will change our lives and, in particular, the role of women in technology. On the last topic, all the articles I have read have been written by women. In this post, I hope to outline my experiences in this regard. Women face many challenges, most typical of male-dominated professions as well as some specific to the technology field. I believe that there will be a time when IT, like teaching and accounting, will have more women than men. I can understand it is frustrating as a woman in technology to be treated as a novelty. Perhaps my experience can illustrate why that might be. One thing I have learnt over the years is that while I am happy to talk technology all day, there is one time to stop: in social situations when women are present. This doesn't come from men, but overwhelmingly from women.

Men in Tech

In high school, those interested in computers were not generally the most popular. In fact, I hung out with a large circle of friends who were some of the smartest, most creative, kindest people I have known, but being popular was not a priority for them. The first time I met a woman who was interested in computers was at university. Those I got to know were at a disadvantage, as I saw it, as none of them had programmed a computer before university, whereas my male friends had been programming for 4 to 6 years (in my case, an average of 5 days a week). When I tutored at university, all my students in computing were female; one had retired many years previously and had gone back to university. While they were all A-level students, each had plans other than IT after they left university. My first serious relationships also started at university. Some of my girlfriends studied computer science courses but dropped them after the first year.
The C-word

When I went out socially with my girlfriend and I found someone interested in computers, I would happily talk to them about whatever interesting topics we could think of. At some point it would be made clear that it was time to talk about something else, usually by my significant other, who was a bit embarrassed or bored with the topic by this stage. Usually this would start with hints, but as I am not so good with hints, she would make it really clear, resulting in her telling me repeatedly before we got there: don't talk about the "C-word" at all. And thus she taught me not to mention Computers when there were ladies present. I am not referring to one woman in particular; all my other halves preferred I not talk about computers. I can remember a number of occasions when a group of men were talking about IT and a girlfriend or another woman would join the group, and we would immediately change the subject of conversation. We all understood that while our significant others were happy to go out with computer geeks, they didn't want to hear about it. I was at a conference recently where there was a large proportion of partners attending, joining the lunches, dinners and field trips. Even though it was an IT conference, it was generally understood that we would minimise discussion of computers when women were around. I can be confident that if I am asked what I do for a living, this will be a conversation stopper and a change of subject will follow. Most men are not interested in IT either, but almost all women I have met have nothing to follow such a revelation. There have been exceptions, and a few women were polite enough to carry on talking about it; from my point of view these conversations have been memorable in their rarity, no matter how brief. I regularly attend conferences and give training sessions, and I would say I can remember every conversation I have had about IT with a woman for the last four years or more: with whom, and broadly what we discussed.
I have found that being an IT consultant is more interesting to talk about because it involves a lot of travel and I can talk about where I have been. Certainly I wouldn't volunteer that I work in IT as an ice breaker.

Conclusion

I believe that having more women in technology is essential for the industry and society as a whole. I also believe that while it is inevitable, we should try to ensure it happens sooner rather than later, especially as technology is growing at such a rapid pace. For this to happen, the attitudes of both men and women need to change. Reference: Men in Tech from our JCG partner Peter Lawrey at the Vanilla Java blog....

New programming techniques and the productivity curve

Though I love learning new programming techniques and technologies, I often struggle to make them part of my normal development process. For example, it took years before I finally started using regular expressions on a regular basis. The reason? The productivity curve. You may have seen a chart like this before; the productivity curve is used to show one of the challenges of software delivery. The general idea is that when you first use a new product (or technology), your productivity always plummets in the short term. No matter how much better the new stuff is, you will be less productive using it at first, because you were more familiar with the old stuff. However, as you become more proficient with the new technology, your productivity gradually increases, and eventually it will be greater than it was with the old technology (assuming the new technology is actually better). So the key is to push through that initial dip, and eventually you get to the increased productivity that comes with mastering the new technology. Easy, right? Well, if your company just replaced your HR system, you have little choice in the matter; you'll get through that productivity dip because you have no other choice. However, if you've just decided you want to pick up a new programming technique, it's a little more difficult. Before I got the hang of regular expressions, it was simply far faster for me to whip together something ugly using things like StringTokenizer and substring. Nobody cared whether I was using regular expressions, but they did care if I took 10 times as long to complete a task. On the rare occasions where I needed to make a change to existing regular expressions, I'd end up slogging through references and online tutorials to try to make sense of them. Each time, after finally figuring it out, I'd tell myself that I was going to remember how these worked next time.
But since I hadn't made regular expressions a part of my standard toolchain, I'd forget everything I learned, and I would be lost again the next time I needed to use them. I eventually got the hang of regular expressions through forced practice and forced application. Forced practice was straightforward enough: I dedicated time to reading references, running through tutorials, and solving regex challenges. This practice was an important step in building the skills, but I had tried this to a certain degree before. I'd spend a few hours practicing regular expressions and feel like I was getting the hang of it. But time would pass, and the next time I needed to fix a regex, I'd have forgotten most of it again. The key for me was forced application of regular expressions. Like the poor HR staff who learn a new HR system because they have no other choice, I forced myself to use regular expressions any time they could be useful, even though it would take me longer than a quick hack with stuff I was already familiar with. Good unit tests helped a lot; though I was not confident in my ability to write bug-free regular expressions, I was confident in my ability to write thorough tests. As the productivity curve predicted, I was definitely less efficient at first. But as I forced myself to use regular expressions instead of hacking something together more quickly, I gradually improved. And before long, I was past the productivity dip, having finally gotten the hang of regular expressions. Now they are an essential part of my development toolchain, and I am a better programmer. I know this isn't really earth-shattering advice; my five-year-old son has figured out that you get better at something the more you do it.
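The "forced application plus thorough unit tests" approach described above can be sketched in a few lines. The key/value parsing task and all names below are hypothetical illustrations, not taken from the original post; the point is replacing a StringTokenizer/substring hack with a tested regex:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexPractice {
    // Hypothetical pattern replacing a StringTokenizer/substring hack:
    // matches "key=value" pairs such as "timeout=30".
    private static final Pattern KEY_VALUE = Pattern.compile("(\\w+)=(\\d+)");

    public static int extractValue(String input, String key) {
        Matcher m = KEY_VALUE.matcher(input);
        while (m.find()) {
            if (m.group(1).equals(key)) {
                return Integer.parseInt(m.group(2));
            }
        }
        throw new IllegalArgumentException("key not found: " + key);
    }

    public static void main(String[] args) {
        // The kind of small, thorough checks that make forced application safe:
        System.out.println(extractValue("retries=5 timeout=30", "timeout")); // prints 30
        System.out.println(extractValue("retries=5 timeout=30", "retries")); // prints 5
    }
}
```

With tests like these in place, swapping the familiar hack for the unfamiliar regex costs little even while you are still slow at writing patterns.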
But when you find some new technology or technique you want to make part of your development process, I think it's important to go into it recognizing that not only will it take time to master, but you will almost certainly be less productive in the short term than you were before. And I think that if you are prepared for the productivity dip and are willing to accept it, you can push through the difficult times and master it. In the long run, you will be a better developer for it. Reference: New programming techniques and the productivity curve from our JCG partner Jerry Orr at the Jerry on Java blog....

Using Your RDBMS for Messaging is Totally OK

Controversial database topics are a guaranteed success on reddit, because everyone has an opinion on them. More importantly, many people have a dogmatic opinion, which always triggers more debate than pragmatism. So, recently, I posted a link to an older article titled The Database As Queue Anti-Pattern by Mike Hadlow, and it got decent results on /r/programming. Mike's post was pretty much against all sorts of queueing in the database, which matches the opinions I have heard from a couple of JavaZone speakers in a recent discussion, who all agreed that messaging in the database is "evil." ... and I'm saying: No. Messaging in the database is not an anti-pattern; it is (or can be) totally OK. Let's consider why:

KISS and YAGNI

First off, if you don't plan on deploying thousands of message types and millions of messages per hour, then you might have a very simple messaging problem. Since you're already using an RDBMS, using that RDBMS for messaging as well is definitely an option. Obviously, many of you will now think: if all you have is a hammer, everything looks like a nail ... and you're right! But guess what, the reasons for only having a hammer can be any of:

- You don't have the time to equip yourself with a sophisticated toolbox
- You don't have the money to equip yourself with a sophisticated toolbox
- You actually don't really need the sophisticated toolbox

Think about these arguments. Yes, the solution might not be perfect, or even ugly... But we're engineers and as such, we're not here to debate perfection. We're here to deliver value to our customers, and if we can get the job done with the hammer, why not just forget our vanity and simply use that hammer to get the job done.

Transactional queues

In a perfect world, queues are transactional and guarantee (to the extent allowed by the underlying theory) atomic message delivery or failure – something that we've been taking for granted with JMS forever.
At GeeCON Krakow, I had a very interesting discussion with Konrad Malawski regarding his talk about Akka Persistence. If we remove all the hype and buzzwords (e.g. reactive programming, etc.) from Akka, we can see that Akka is just a proprietary alternative to JMS, which looks more modern but is lacking tons of features that we're used to having in JMS (e.g. transactional queue persistence). One of the interesting aspects of that discussion with Konrad was the fact that a 100% message delivery guarantee is a myth (details here). Which leads to the conclusion:

Messaging is really hard

It is, indeed! So if you really think you need to embed a sophisticated MQ system, beware of the fact that you will have to learn how it really works and how to operate it correctly. If you're using RDBMS-backed queues, you can get rid of this additional transactional complexity, because your queue operations participate in the transactions that you already have with your database. You get ACID for free!

No additional operations efforts

What developers very often underestimate (we can't say this enough) are the costs incurred by your operations team when you add new external systems to yours. Having just one simple RDBMS (and your own application) is a very, very lean and simple architecture. Having an RDBMS, an MQ, and your application is already more complex. There are a lot of excellent DBAs out there who know what they're doing when operating production databases. Finding excellent "MQAs" is much harder.

If you're using Oracle: Use Oracle AQ

Oracle has a very sophisticated built-in queueing API called Oracle AQ, which can interoperate with JMS. Queues in AQ are essentially just tables that contain a serialised version of your message type. If you're using jOOQ, we've recently blogged about how to integrate Oracle AQ with jOOQ.
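Even without AQ, the same idea (a queue is just a table) takes only a couple of statements. The sketch below is illustrative, not from the original article: table and column names are made up, and the row-locking syntax varies by vendor (FOR UPDATE SKIP LOCKED works on Oracle and PostgreSQL 9.5+; production code would also add a row limit such as FETCH FIRST 1 ROWS ONLY):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class DbQueue {
    // Illustrative DDL: a queue is just a table of pending messages.
    static final String DDL =
        "CREATE TABLE message_queue ("
      + " id BIGINT PRIMARY KEY, payload VARCHAR(4000), created TIMESTAMP)";

    // Enqueue participates in the caller's existing transaction: ACID for free.
    public void enqueue(Connection tx, long id, String payload) throws SQLException {
        try (PreparedStatement ps = tx.prepareStatement(
                "INSERT INTO message_queue (id, payload, created) "
              + "VALUES (?, ?, CURRENT_TIMESTAMP)")) {
            ps.setLong(1, id);
            ps.setString(2, payload);
            ps.executeUpdate();
        }
    }

    // Dequeue: lock a pending row, skipping rows other consumers hold.
    // If the surrounding transaction rolls back, the message stays queued.
    public String dequeue(Connection tx) throws SQLException {
        try (PreparedStatement ps = tx.prepareStatement(
                "SELECT id, payload FROM message_queue "
              + "ORDER BY created FOR UPDATE SKIP LOCKED");
             ResultSet rs = ps.executeQuery()) {
            if (!rs.next()) {
                return null; // queue is empty
            }
            long id = rs.getLong(1);
            String payload = rs.getString(2);
            try (PreparedStatement del = tx.prepareStatement(
                    "DELETE FROM message_queue WHERE id = ?")) {
                del.setLong(1, id);
                del.executeUpdate();
            }
            return payload;
        }
    }
}
```

Because both methods run on the caller's Connection, enqueuing a message and updating business tables commit or roll back together, which is exactly the transactional guarantee a separate MQ makes you work for.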
RDBMS-centric applications can be much easier

We've blogged about that before as well: Why Your Boring Data Will Outlast Your Sexy New Technology. Your data might just survive your application. Consider Paypal replacing Java with JavaScript (it could also have gone the other way round). In the end, however, do you think that Paypal also replaced all their databases? I don't. Migrating from Oracle to DB2 (different vendor), or from Oracle to MongoDB (different DBMS type), is mostly motivated by political decisions rather than technical ones. Specifically, people don't migrate from RDBMS to NoSQL databases entirely. They usually just implement a specific domain with NoSQL (e.g. document storage or graph traversal). Assuming that the above really applies to you (it may, of course, not apply): if your RDBMS is in the middle of your system, then running queues in your RDBMS to communicate between system components is quite an obvious choice, isn't it? All system parts are already connected to the database. Why not keep it that way?

Conclusion

The arguments listed here are all pretty obvious and pragmatic. At some point, they no longer hold true, as your messaging demands become big enough to justify the integration with a sophisticated MQ system. But many people have strong opinions about the "hammer / nail" argument. Those opinions may be correct but premature. Very often in software engineering, it is entirely acceptable and sufficient to work with just one tool. The hammer of software: the RDBMS. Reference: Using Your RDBMS for Messaging is Totally OK from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....

WORA Is Better Than Native

When “Write Once Run Anywhere” is done right, it can produce applications that are “better” than native apps by targeting the highest common denominator. Some would claim that native is the best approach, but that looks at existing WORA tools/communities, which mostly target cost saving. In fact, even native Android/iOS tools produce rather bad results without deep platform familiarity. Native is very difficult to maintain properly in the real world, and this is easily noticeable by inspecting the difficulties we have with the ports of Codename One; this problem is getting worse rather than better as platforms evolve and fragment. E.g. some devices crash when you take more than one photo in a row, some devices have complex issues with HTTP headers, and many have issues when editing text fields in the “wrong position”. There are workarounds for everything, but you need to do extensive testing to become aware of the problem in the first place. WORA solutions bring all the workarounds and the “ugly” code into their porting layer, allowing developers to focus on their business logic. This is similar to the Spring/Java EE approaches that addressed the complexities of application server fragmentation. If you review most user feedback on mobile applications, you will quickly notice that a company that releases applications often and fixes issues gets consistent 5-star reviews. The ability to provide a stable app across platforms is a much tougher nut to crack than most developers assume; bugs are remarkably portable, even between programming languages. Being able to scale your development team to give fast response times to user requirements is a huge deal. Being able to use Google Play as a beta site for the more conservative (and slower to approve) iTunes is a remarkable benefit for any developer.
Codename One is relatively young, so we don’t have a blockbuster, multi-million-download app that contains a large enough representative segment (we do have some huge customers and great apps, so it’s a matter of time). However, WAZE (recently acquired by Google for over 1 billion USD) was built with a WORA solution, has multi-million downloads and 4.5+ star reviews on iOS/Android. I’m not crazy about their UI (as a matter of personal taste), but I think it proves that a HUGE majority of users want a functional, stable, and fast app. The word “native” and its various interpretations are only a part of the developer lexicon and don’t matter as much to actual users. Using the native platform has a huge conflict-of-interest problem. E.g. Google is very keen on developers building Android applications. However, it has slowly migrated new APIs into the Market app, which only enables them for Google-sanctioned devices. This makes sense, since vendors have been very slow at migrating to newer Android OS versions. However, this also means that non-Google devices such as the Kindle (and a future Amazon phone, etc.) wouldn’t work with those APIs. Thus, we have a situation where the platform vendor wants us to develop for his platform alone while, as developers, we want to have as many users as possible. Yes, iOS users provide better monetization, but that might change in the future, and as developers we have an interest in agility. As a side note, Google itself uses a relatively simple WORA solution when targeting iOS named J2ObjC; it’s a great solution for porting libraries from Java to Objective-C. In the past, the conflict of interest between the platform developer and the tool developer pushed many developers from Microsoft tools to Borland tools and later on to Java/Swing. I believe we will see similar trends as mobile platforms migrate towards the enterprise, and we are already seeing such development with PhoneGap despite all its faults.
The Native Is Easy Myth

Often when I bring these things up, I get a strong knee-jerk reaction from the fans of native claiming that native is the best approach. This completely ignores the long-term complexities of building/maintaining complex applications across platforms. For example, a well-known Google developer asked at a recent Google IO who in the audience understood the activity lifecycle; he then made fun of the audience members who raised their hands, claiming that despite all his years at Google he still doesn’t get the activity lifecycle (I built a few VMs in my life and I can attest that this is indeed a complex subject with many edge cases). iOS is not much better: the compiler/runtime doesn’t protect you from basic mistakes such as wrong function arguments (function, not message), memory access issues, etc. You need to maintain around seven splash screens for a typical universal app, which you need to update with every change to the first view, and don’t get me started on provisioning profiles... The main issue isn’t writing the code, it’s maintaining it in the long term, scaling it for additional team members, and working in a corporate environment. As the mobile market matures from app startups, who have relatively small teams of hackers, and moves to the enterprise, the need to scale the development becomes far greater. Enterprise developers need the ease of Swing, Visual Basic or PowerBuilder when it comes to mobile development. They also usually prefer vendor independence when possible. Just like Java EE, SOA or ESBs took over backend development, we will see mobile services take over the client side. Developers who didn’t work in demanding enterprise environments, or worked in a very specific niche in an enterprise, sometimes claim, “well, they should hire better developers”. But that isn’t really the issue; projects such as these are maintained over years as developers hand them off from one person to the next while manpower/positions change.
In such circumstances, tools must be very simple to enable efficiency.

Speed & Flexibility

Performance is a big issue with WORA solutions; unfortunately we are sometimes judged by the performance of a specific application which might not always be under our control. The Pareto principle applies here: by improving a very small section of the code we can improve everything. Most of our rendering code is native, and if you have mission-critical code you can always write it natively to reach the performance level you would expect from any solution. Unlike HTML5, which is effectively throttled by how fast WebKit can reflow, in a proper WORA solution you are throttled only by the performance of the native OS. Since we can invoke native code, we can pretty much do anything we want within an application, e.g. integrate with the Dropbox API via OAuth while using a native intent to invoke Dropbox on Android. This is actually implemented by some applications to enable easy usage of files within a mobile app, which is a painful experience on all mobile platforms.

Final Thoughts

My goal here isn’t so much to convince native developers to abandon native programming, but rather to change the basic “WORA as a compromise” attitude where native is considered the best approach. I think this attitude arose from subpar WORA solutions and implementations, such as the much-maligned Facebook app. Despite the fact that I think embedding HTML5 (as is done in PhoneGap) isn’t a very good approach, I think that in the right hands it can produce a decent app. As a developer I try not to blame my tools but rather look at how a tool works and try to understand whether its limitations are inherent or temporary. I think most of the issues we see in WORA tools are being resolved, and we will see far more WORA apps on the market in the coming years. ...

The fastest way of drawing UML class diagrams

A picture is worth a thousand words

Understanding a software design proposal is so much easier once you can actually visualize it. While drawing diagrams might take you extra effort, the small time investment will pay off when others require less time to understand your proposal.

Software is a means, not a goal

We write software to support other people's business requirements. Understanding business goals is the first step towards coming up with an effective design proposal. After gathering input from your product owner, you should write down the business story. Writing it makes you reason more about the business goal, and the product owner can validate your comprehension. After the business goals are clear, you need to move on to the technical challenges. A software design proposal is derived from both business and technical requirements. The quality of service may pose certain challenges that are better addressed by a specific design pattern or software architecture.

The class diagram drawing hassle

My ideal diagram drawing tool would simply transpose my hand-drawn sketches to a digital format. Unfortunately I haven't yet found such a tool, so this is how I do it:

1. I hand-draw all concepts and interactions on a piece of paper. That's the most rapid way of design prototyping. While I could use a UML drawing tool, I prefer the paper-and-pencil approach, because changes require much less effort.
2. Once I settle on a design proposal, I start writing down the interfaces and request/response objects in plain Java classes. Changing the classes is pretty easy, thanks to IntelliJ IDEA's refactoring tools.
3. When all the Java classes are ready, I simply delegate the class diagram drawing to IntelliJ IDEA.

In the end, this is what you end up with. Reference: The fastest way of drawing UML class diagrams from our JCG partner Vlad Mihalcea at the Vlad Mihalcea's Blog blog....
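Step 2 above is just plain Java; the example below is a made-up illustration of what such design-sketch classes might look like (all names are hypothetical, not from the original post). Types like these are cheap to refactor, and IntelliJ IDEA can render a class diagram from them directly:

```java
// Hypothetical design sketch: plain Java types that an IDE can turn
// into a class diagram once the hand-drawn version is settled.
public class DesignSketch {

    // Request/response objects first: trivial to write, trivial to change.
    public static class MoneyTransferRequest {
        public final String sourceAccount;
        public final String targetAccount;
        public final long amountCents;

        public MoneyTransferRequest(String sourceAccount, String targetAccount,
                                    long amountCents) {
            this.sourceAccount = sourceAccount;
            this.targetAccount = targetAccount;
            this.amountCents = amountCents;
        }
    }

    public static class MoneyTransferResponse {
        public final boolean accepted;

        public MoneyTransferResponse(boolean accepted) {
            this.accepted = accepted;
        }
    }

    // The interface defines the interaction the diagram will show.
    public interface MoneyTransferService {
        MoneyTransferResponse transfer(MoneyTransferRequest request);
    }
}
```

At this stage no implementation exists yet; the diagram generated from the interfaces and value objects is the design proposal.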

Embedded Jetty and Apache CXF: secure REST services with Spring Security

Recently I ran into a very interesting problem which I thought would take me just a couple of minutes to solve: protecting Apache CXF (current release 3.0.1) / JAX-RS REST services with Spring Security (current stable version 3.2.5) in an application running inside an embedded Jetty container (current release 9.2). In the end, it turned out to be very easy, once you understand how things work together and know the subtle intrinsic details. This blog post will try to reveal that. Our example application is going to expose a simple JAX-RS / REST service to manage people. However, we do not want everyone to be allowed to do that, so HTTP basic authentication will be required in order to access our endpoint, deployed at http://localhost:8080/api/rest/people. Let us take a look at the PeopleRestService class:

package com.example.rs;

import javax.json.Json;
import javax.json.JsonArray;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;

@Path( "/people" )
public class PeopleRestService {
    @Produces( { "application/json" } )
    @GET
    public JsonArray getPeople() {
        return Json.createArrayBuilder()
            .add( Json.createObjectBuilder()
                .add( "firstName", "Tom" )
                .add( "lastName", "Tommyknocker" )
                .add( "email", "a@b.com" ) )
            .build();
    }
}

As you can see in the snippet above, nothing points to the fact that this REST service is secured, just a couple of familiar JAX-RS annotations. Now, let us declare the desired security configuration following the excellent Spring Security documentation. There are many ways to configure Spring Security, but we are going to show off two of them: using in-memory authentication and using a user details service, both built on top of WebSecurityConfigurerAdapter.
Let us start with in-memory authentication, as it is the simplest one:

package com.example.config;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.method.configuration.EnableGlobalMethodSecurity;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
import org.springframework.security.config.http.SessionCreationPolicy;

@Configuration
@EnableWebSecurity
@EnableGlobalMethodSecurity( securedEnabled = true )
public class InMemorySecurityConfig extends WebSecurityConfigurerAdapter {
    @Autowired
    public void configureGlobal( AuthenticationManagerBuilder auth ) throws Exception {
        auth.inMemoryAuthentication()
            .withUser( "user" ).password( "password" ).roles( "USER" ).and()
            .withUser( "admin" ).password( "password" ).roles( "USER", "ADMIN" );
    }

    @Override
    protected void configure( HttpSecurity http ) throws Exception {
        http.httpBasic().and()
            .sessionManagement().sessionCreationPolicy( SessionCreationPolicy.STATELESS ).and()
            .authorizeRequests().antMatchers( "/**" ).hasRole( "USER" );
    }
}

In the snippet above there are two users defined: user with the role USER, and admin with the roles USER and ADMIN. We are also protecting all URLs (/**) by setting the authorization policy to allow access only to users with the role USER. As it is just a part of the application configuration, let us plug it into the AppConfig class using the @Import annotation.
package com.example.config;

import java.util.Arrays;

import javax.ws.rs.ext.RuntimeDelegate;

import org.apache.cxf.bus.spring.SpringBus;
import org.apache.cxf.endpoint.Server;
import org.apache.cxf.jaxrs.JAXRSServerFactoryBean;
import org.apache.cxf.jaxrs.provider.jsrjsonp.JsrJsonpProvider;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.DependsOn;
import org.springframework.context.annotation.Import;

import com.example.rs.JaxRsApiApplication;
import com.example.rs.PeopleRestService;

@Configuration
@Import( InMemorySecurityConfig.class )
public class AppConfig {
    @Bean( destroyMethod = "shutdown" )
    public SpringBus cxf() {
        return new SpringBus();
    }

    @Bean
    @DependsOn( "cxf" )
    public Server jaxRsServer() {
        JAXRSServerFactoryBean factory = RuntimeDelegate.getInstance()
            .createEndpoint( jaxRsApiApplication(), JAXRSServerFactoryBean.class );
        factory.setServiceBeans( Arrays.< Object >asList( peopleRestService() ) );
        factory.setAddress( factory.getAddress() );
        factory.setProviders( Arrays.< Object >asList( new JsrJsonpProvider() ) );
        return factory.create();
    }

    @Bean
    public JaxRsApiApplication jaxRsApiApplication() {
        return new JaxRsApiApplication();
    }

    @Bean
    public PeopleRestService peopleRestService() {
        return new PeopleRestService();
    }
}

At this point we have all the pieces except the most interesting one: the code which runs the embedded Jetty instance and creates the proper servlet mappings and listeners, passing down the configuration we have created.
package com.example;

import java.util.EnumSet;

import javax.servlet.DispatcherType;

import org.apache.cxf.transport.servlet.CXFServlet;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.FilterHolder;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;
import org.springframework.web.context.ContextLoaderListener;
import org.springframework.web.context.support.AnnotationConfigWebApplicationContext;
import org.springframework.web.filter.DelegatingFilterProxy;

import com.example.config.AppConfig;

public class Starter {
    public static void main( final String[] args ) throws Exception {
        Server server = new Server( 8080 );

        // Register and map the dispatcher servlet
        final ServletHolder servletHolder = new ServletHolder( new CXFServlet() );
        final ServletContextHandler context = new ServletContextHandler();
        context.setContextPath( "/" );
        context.addServlet( servletHolder, "/rest/*" );
        context.addEventListener( new ContextLoaderListener() );

        context.setInitParameter( "contextClass", AnnotationConfigWebApplicationContext.class.getName() );
        context.setInitParameter( "contextConfigLocation", AppConfig.class.getName() );

        // Add the Spring Security filter by name
        context.addFilter( new FilterHolder( new DelegatingFilterProxy( "springSecurityFilterChain" ) ),
            "/*", EnumSet.allOf( DispatcherType.class ) );

        server.setHandler( context );
        server.start();
        server.join();
    }
}

Most of the code does not require any explanation, except the filter part. This is what I meant by a subtle intrinsic detail: the DelegatingFilterProxy should be configured with a filter name which must be exactly springSecurityFilterChain, as Spring Security names it. With that, the security rules we have configured apply to any JAX-RS service call (the security filter is executed before the Apache CXF servlet), requiring full authentication.
Let us quickly check that by building and running the project:

mvn clean package
java -jar target/jax-rs-2.0-spring-security-0.0.1-SNAPSHOT.jar

Issuing an HTTP GET call without providing a username and password does not succeed and returns HTTP status code 401:

> curl -i http://localhost:8080/rest/api/people

HTTP/1.1 401 Full authentication is required to access this resource
WWW-Authenticate: Basic realm="Realm"
Cache-Control: must-revalidate,no-cache,no-store
Content-Type: text/html; charset=ISO-8859-1
Content-Length: 339
Server: Jetty(9.2.2.v20140723)

The same HTTP GET call with username and password provided returns a successful response (with some JSON generated by the server):

> curl -i -u user:password http://localhost:8080/rest/api/people

HTTP/1.1 200 OK
Date: Sun, 28 Sep 2014 20:07:35 GMT
Content-Type: application/json
Content-Length: 65
Server: Jetty(9.2.2.v20140723)

[{"firstName":"Tom","lastName":"Tommyknocker","email":"a@b.com"}]

Excellent, it works like a charm! It turns out it really is very easy.
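As a side note, the -u option simply makes curl send an Authorization header whose value is the Base64 encoding of user:password. Here is a minimal JDK-only sketch of how that header value is built (the class name is mine; the credentials are the ones from the example):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeader {
    // Builds the value of the HTTP Authorization header for basic authentication
    static String basicAuth(String username, String password) {
        String credentials = username + ":" + password;
        return "Basic " + Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        // Same credentials as in the curl example: -u user:password
        System.out.println(basicAuth("user", "password")); // Basic dXNlcjpwYXNzd29yZA==
    }
}
```

This is exactly the value the server decodes before handing the credentials to Spring Security's authentication manager.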
Also, as mentioned before, the in-memory authentication could be replaced with a user details service; here is an example of how it could be done:

package com.example.config;

import java.util.Arrays;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.method.configuration.EnableGlobalMethodSecurity;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
import org.springframework.security.config.http.SessionCreationPolicy;
import org.springframework.security.core.authority.SimpleGrantedAuthority;
import org.springframework.security.core.userdetails.User;
import org.springframework.security.core.userdetails.UserDetails;
import org.springframework.security.core.userdetails.UserDetailsService;
import org.springframework.security.core.userdetails.UsernameNotFoundException;

@Configuration
@EnableWebSecurity
@EnableGlobalMethodSecurity( securedEnabled = true )
public class UserDetailsSecurityConfig extends WebSecurityConfigurerAdapter {
    @Autowired
    public void configureGlobal( AuthenticationManagerBuilder auth ) throws Exception {
        auth.userDetailsService( userDetailsService() );
    }

    @Bean
    public UserDetailsService userDetailsService() {
        return new UserDetailsService() {
            @Override
            public UserDetails loadUserByUsername( final String username ) throws UsernameNotFoundException {
                if( username.equals( "admin" ) ) {
                    return new User( username, "password", true, true, true, true,
                        Arrays.asList(
                            new SimpleGrantedAuthority( "ROLE_USER" ),
                            new SimpleGrantedAuthority( "ROLE_ADMIN" )
                        )
                    );
                } else if( username.equals( "user" ) ) {
                    return new User( username, "password", true, true, true, true,
                        Arrays.asList( new SimpleGrantedAuthority( "ROLE_USER" ) ) );
                }

                return null;
            }
        };
    }

    @Override
    protected void configure( HttpSecurity http ) throws Exception {
        http
            .httpBasic().and()
            .sessionManagement().sessionCreationPolicy( SessionCreationPolicy.STATELESS ).and()
            .authorizeRequests().antMatchers( "/**" ).hasRole( "USER" );
    }
}

Replacing @Import( InMemorySecurityConfig.class ) with @Import( UserDetailsSecurityConfig.class ) in the AppConfig class leads to the same results, as both security configurations define identical sets of users and their roles.

I hope this blog post saves you some time and gives you a good starting point, as Apache CXF and Spring Security get along very well under the Jetty umbrella! The complete source code is available on GitHub.

Reference: Embedded Jetty and Apache CXF: secure REST services with Spring Security from our JCG partner Andrey Redko at the Andriy Redko {devmind} blog....
java-logo

Optional and Objects: Null Pointer Saviours!

No one loves NullPointerExceptions! Is there a way we can get rid of them? Maybe . . .

A couple of techniques are discussed in this post:

Optional type (new in Java 8)
Objects class (old Java 7 stuff!)

Optional type in Java 8

What is it?

A new type (class) introduced in Java 8
Meant to act as a 'wrapper' for an object of a specific type, or for scenarios where there is no object (null)

In plain words, it's a better substitute for handling nulls (warning: it might not be very obvious at first!)

Basic Usage

It's a type (a class) - so, how do I create an instance of it? Just use the three static factory methods in the Optional class.

public static Optional<String> stringOptional(String input) {
    return Optional.of(input);
}

Plain and simple: create an Optional wrapper containing the value. Beware - it will throw an NPE in case the value itself is null!

public static Optional<String> stringNullableOptional(String input) {
    if (!new Random().nextBoolean()) {
        input = null;
    }
    return Optional.ofNullable(input);
}

Slightly better, in my personal opinion. There is no risk of an NPE here - in case of a null input, an empty Optional is returned.

public static Optional<String> emptyOptional() {
    return Optional.empty();
}

In case you want to purposefully return an 'empty' value. 'Empty' does not imply null.

Alright - what about consuming/using an Optional?

public static void consumingOptional() {
    Optional<String> wrapped = Optional.of("aString");
    if (wrapped.isPresent()) {
        System.out.println("Got string - " + wrapped.get());
    } else {
        System.out.println("Gotcha !");
    }
}

A simple way is to check whether or not the Optional wrapper has an actual value (use the isPresent method) - this will make you wonder if it's any better than using if(myObj != null). Don't worry, I'll explain that as well.
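Beyond isPresent/get, Optional also supports functional-style chaining with map and filter, which often reads better than explicit checks. A small illustrative sketch (the class name and values are mine, not from the article):

```java
import java.util.Optional;

public class OptionalChaining {
    public static void main(String[] args) {
        Optional<String> wrapped = Optional.of("aString");

        // map transforms the value only if one is present; filter turns it
        // into an empty Optional when the predicate fails - no null checks
        String result = wrapped
                .map(String::toUpperCase)
                .filter(s -> s.startsWith("A"))
                .orElse("default");

        System.out.println(result); // ASTRING
    }
}
```

If the wrapped value were absent (or filtered out), the chain would simply fall through to the orElse default.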
public static void consumingNullableOptional() {
    String input = null;
    if (new Random().nextBoolean()) {
        input = "iCanBeNull";
    }
    Optional<String> wrapped = Optional.ofNullable(input);
    System.out.println(wrapped.orElse("default"));
}

One can use orElse to return a default value in case the wrapped value is null - the advantage is obvious. We get to avoid the verbosity of invoking isPresent before extracting the actual value.

public static void consumingEmptyOptional() {
    String input = null;
    if (new Random().nextBoolean()) {
        input = "iCanBeNull";
    }
    Optional<String> wrapped = Optional.ofNullable(input);
    System.out.println(wrapped.orElseGet(() -> {
        return "defaultBySupplier";
    }));
}

I was a little confused with this. Why two separate methods for similar goals? orElse and orElseGet could well have been overloaded (same name, different parameter). Anyway, the only obvious difference here is the parameter itself - you have the option of providing a lambda expression representing an instance of a Supplier<T> (a functional interface).

How is using Optional better than regular null checks?

By and large, the major benefit of using Optional is being able to express your intent clearly - simply returning null from a method leaves the consumer in a sea of doubt (when the actual NPE occurs) as to whether or not it was intentional, and requires further introspection into the javadocs (if any). With Optional, it's crystal clear!
There are ways in which you can completely avoid NPEs with Optional - as mentioned in the examples above, the use of Optional.ofNullable (during Optional creation) and orElse and orElseGet (during Optional consumption) shield us from NPEs altogether.

Another savior! (in case you can't use Java 8)

Look at this code snippet.

package com.abhirockzz.wordpress.npesaviors;

import java.util.Map;
import java.util.Objects;

public class UsingObjects {

    String getVal(Map<String, String> aMap, String key) {
        return aMap.containsKey(key) ? aMap.get(key) : null;
    }

    public static void main(String[] args) {
        UsingObjects obj = new UsingObjects();
        obj.getVal(null, "dummy");
    }
}

What can possibly be null?

The Map object
The key against which the search is being executed
The instance on which the method is being called

When an NPE is thrown in this case, we can never be sure as to what exactly is null.

Enter the Objects class

package com.abhirockzz.wordpress.npesaviors;

import java.util.Map;
import java.util.Objects;

public class UsingObjects {

    String getValSafe(Map<String, String> aMap, String key) {
        Map<String, String> safeMap = Objects.requireNonNull(aMap, "Map is null");
        String safeKey = Objects.requireNonNull(key, "Key is null");
        return safeMap.containsKey(safeKey) ? safeMap.get(safeKey) : null;
    }

    public static void main(String[] args) {
        UsingObjects obj = new UsingObjects();
        obj.getValSafe(null, "dummy");
    }
}

The requireNonNull method:

Simply returns the value in case it is not null
Throws an NPE with the specified message in case the value is null

Why is this better than if(myObj != null)? The stack trace you see will clearly contain the Objects.requireNonNull method call. This, along with your custom error message, will help you catch bugs faster . . . much faster IMO!

You can write your user-defined checks as well, e.g.
implementing a simple check which enforces non-emptiness:

import java.util.Collections;
import java.util.List;
import java.util.Objects;
import java.util.function.Predicate;

public class RandomGist {

    public static <T> T requireNonEmpty(T object, Predicate<T> predicate, String msgToCaller) {
        Objects.requireNonNull(object);
        Objects.requireNonNull(predicate);
        if (predicate.test(object)) {
            throw new IllegalArgumentException(msgToCaller);
        }
        return object;
    }

    public static void main(String[] args) {
        // Usage 1: an empty string (intentional)
        String s = "";
        System.out.println(requireNonEmpty(Objects.requireNonNull(s), (s1) -> s1.isEmpty(), "My String is Empty!"));

        // Usage 2: an empty List (intentional)
        List<String> list = Collections.emptyList();
        System.out.println(requireNonEmpty(Objects.requireNonNull(list), (l) -> l.isEmpty(), "List is Empty!").size());

        // Usage 3: an empty User (intentional)
        User user = new User("");
        System.out.println(requireNonEmpty(Objects.requireNonNull(user), (u) -> u.getName().isEmpty(), "User is Empty!"));
    }

    private static class User {
        private String name;

        public User(String name) {
            this.name = name;
        }

        public String getName() {
            return name;
        }
    }
}

Don't let NPEs be a pain in the wrong place. We have more than a decent set of tools at our disposal to better handle NPEs or eradicate them altogether! Cheers!

Reference: Optional and Objects: Null Pointer Saviours! from our JCG partner Abhishek Gupta at the Object Oriented.. blog....
junit-logo

JUnit in a Nutshell: Yet Another JUnit Tutorial

Why Another JUnit Tutorial? JUnit seems to be the most popular testing tool for developers within the Java world. So it is no wonder that some good books have been written about this topic. But I still quite often meet programmers who at most have a vague understanding of the tool and its proper usage. Hence I had the idea to write a couple of posts that introduce the essential techniques from my point of view. The intention is to provide a reasonable starting point, but avoid daunting information flooding à la xUnit Test Patterns1. Instead there are pointers to in-depth articles, book chapters or dissenting opinions for further reading whenever suitable. The chapters are complemented by a consistent example to clarify and deepen the topics covered by each post. So despite the existing books and articles about testing with the tool, maybe the hands-on approach of this mini-series might be appropriate to get one or two additional developers interested in unit testing - which would make the effort worthwhile. Let the games begin!

Table of Contents

Hello World: Introduction of the very basics of a test: how it is written, executed and evaluated.
Test Structure: Explanation of the four phases (setup, exercise, verify and teardown) commonly used to structure unit tests.
Test Isolation: Illustration of the isolation principle based on test doubles and indirect in- and outputs.
Test Runners: Explanation of JUnit's exchangeable test runner architecture and introduction of some of the available implementations.
JUnit Rules: While not originally written for this JUnit tutorial, the post gives an introduction to rules and explains how custom rules can be implemented.
Unit Test Assertions: Coverage of various unit test assertion techniques like the built-in mechanism, Hamcrest matchers and AssertJ.

In case you seek assistance with TDD or JUnit testing in general, note that we provide profound training courses on that topic.
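The four phases mentioned in the Test Structure entry above can be sketched even without the JUnit runner. A plain-Java illustration (class and method names are mine); in real JUnit code these phases map to @Before, the test method body, assertions, and @After:

```java
import java.util.ArrayList;
import java.util.List;

public class FourPhaseSketch {
    // setup: create the fixture the test works against
    static List<String> setUp() {
        return new ArrayList<>();
    }

    // one self-checking "test" walking through all four phases
    static boolean runTest() {
        List<String> fixture = setUp();            // setup
        fixture.add("element");                    // exercise
        boolean passed = fixture.size() == 1       // verify
                && "element".equals(fixture.get(0));
        fixture.clear();                           // teardown
        return passed;
    }

    public static void main(String[] args) {
        System.out.println(runTest() ? "test passed" : "test failed"); // test passed
    }
}
```

Keeping the phases visually separated like this, even in real JUnit tests, is what makes them readable at a glance.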
Conclusion Although JUnit comes with a manageable amount of API, writing unit tests is anything but trivial. This JUnit tutorial explains the basic techniques of writing well-structured, isolated unit tests. It elaborates on the tool's extensible features and introduces some useful third-party supplements. Overall, it outlines why unit tests should be developed with the highest possible coding standards one could think of. Hopefully the ongoing example is well-balanced enough to provide a comprehensible introduction without being trivial. Suggestions for improvements are of course highly appreciated. So thank you for reading this far! And if you happen to like this tutorial, don't be shy and spread the word on your preferred social media channel! 1. Do not get me wrong - I like the book very much, but the general-purpose approach is probably not the best way of getting started: xUnit Test Patterns, Gerard Meszaros, 2007Reference: JUnit in a Nutshell: Yet Another JUnit Tutorial from our JCG partner Rudiger Herrmann at the Code Affine blog....
java-interview-questions-answers

Apache Camel for Microservice Architectures

I’ve been using microservice architectures since before I knew they were called so. I used to work with pipeline applications made up of isolated modules that interact with each other through queues. Since then a number of (ex-)ThoughtWorks gurus have talked about microservices. First Fred George, then James Lewis and finally Martin Fowler blogged about microservices, making it the next buzzword, so every company wants to have a few microservices. Nowadays there are #hashtags, endorsements, likes, trainings, even a two-day conference about it. The more I read and listen about microservice architectures, the more I realize how well Apache Camel (and the accompanying projects around it) fits this style of application. In this post we will see how the Apache Camel framework can help us create microservice-style applications in Java without much hassle. Microservices Characteristics There is nothing new in microservices. Many applications have already been designed and implemented as such for a long time. Microservices is just a new term that describes a style of software system that has certain characteristics and follows certain principles. It is an architectural style where an application or software system is composed of individual standalone services communicating using lightweight protocols in an event-based manner. The same way TDD helps us create decoupled, single-responsibility classes, microservices principles guide us to create simple applications at the system level. Here we will not discuss the principles and characteristics of such architectures or argue whether it is a way of implementing SOA in practice or a totally new approach to application design, but rather look at the most common practices used for implementing microservices and how Apache Camel can help us accomplish that in practice.
There is no definitive list (yet), but if you read around or watch the videos posted above, you will notice that the following are quite common practices for creating microservices:

Small in size. The very fundamental principle of microservices says that each application is small in size and it only does one thing and does it well. It is debatable what is small or large - the number varies from 10 LOC to 1000 - but for me, I like the idea that it should be small enough to fit in your head. There are people with big heads, so even that is debatable, but I think as long as an application does one thing and does it well enough that it is not considered a nanoservice, that is a good size. Camel applications are inherently small in size. A Camel context with a couple of routes, error handling and helper beans is approximately 100 LOC. Thanks to the Camel DSLs and URIs for abstracting endpoints, receiving an event either through HTTP or JMS, unmarshaling it, persisting it and sending a response back is around 50 LOC. That is small enough to be tested end-to-end, rewritten and even thrown away without feeling any remorse.

Having transaction boundaries. An application consisting of multiple microservices forms an eventually consistent system of systems where the state of the whole system is not known at any given time. This on its own creates a barrier to understanding and adopting microservices with teams who are not used to working with this kind of distributed application. Even though the state of the whole system is not fixed, it is important to have transaction boundaries that define where a message currently belongs. Ensuring transactional behaviour across heterogeneous systems is not an easy task, but Camel has great transactional capabilities. Camel has endpoints that can participate in transactions, transacted routes and error handlers, idempotent consumers and compensating actions, all of which help developers easily create services with transactional behavior.

Self monitoring.
This is one of my favorite areas of microservices. Services should expose information that describes the state of the various resources they depend on and of the service itself: statistics such as average, min and max time to process a message, the number of successful and failed messages, being able to track a message, and so forth. This is something you get OOTB with Camel without any effort. Each Camel application gathers JMX statistics by default for the whole application, individual routes, endpoints, etc. It will tell you how many messages have completed successfully, how many failed, where they failed, etc. This is not a read-only API: JMX also allows updating and tuning the application at run time, so based on these statistics, using the same API, you can tune the application. The information can also be accessed with tools such as jConsole, VisualVM and Hyperic HQ, exposed over HTTP using Jolokia, or fed into a great web UI called hawtio. If the functionality that is available OOTB doesn't fit your custom requirements, there are multiple extension points, such as the nagios, jmx, amazon cloudwatch and the new metrics components, or you can use Event Notifiers for custom events. Logging in messaging applications is another challenge, but Camel's MDC logging combined with the Throughput logger makes it easy to track individual messages or get aggregated statistics as part of the logging output.

Designed for failure. Each of the microservices can be down or unresponsive for some time, but that should not bring the whole system down. Thus microservices should be fault tolerant and be able to recover when that is possible. Camel has lots of helpful tools and patterns to cope with these scenarios too. The Dead Letter Channel can make sure messages are not lost in case of failure; the retry policy can retry sending a message a couple of times for certain error conditions, using a custom backoff method and collision avoidance.
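The retry-with-backoff idea behind Camel's redelivery policy boils down to a loop like the following. This is a plain-Java sketch of the concept, not Camel's actual implementation (names and parameters are mine):

```java
import java.util.concurrent.Callable;

public class RetryWithBackoff {
    // Retries the task up to maxAttempts times, doubling the delay after
    // each failure (exponential backoff); rethrows the last failure
    static <T> T retry(Callable<T> task, int maxAttempts, long initialDelayMs)
            throws Exception {
        long delay = initialDelayMs;
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return task.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(delay);
                    delay *= 2;
                }
            }
        }
        throw last; // all attempts exhausted
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // A task that fails twice with a transient error before succeeding
        String result = retry(() -> {
            if (++calls[0] < 3) throw new IllegalStateException("transient");
            return "ok";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts"); // ok after 3 attempts
    }
}
```

With Camel you get this behavior declaratively (maximumRedeliveries, backoff multiplier, collision avoidance) instead of hand-rolling such a loop in every service.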
Patterns such as the Load Balancer (which supports Circuit Breaker, Failover and other policies), the Throttler to make sure certain endpoints do not get overloaded, the Detour and the Sampler are all needed in various failure scenarios. So why not use them rather than reinventing the wheel in each service?

Highly configurable. It should be easy to configure the same application for high availability, scale it for reliability or throughput - or, said another way, to have different degrees of freedom through configuration. When creating a Camel application using the DSLs, all we do is define the message flow and configure the various endpoints and other characteristics of the application. So Camel applications are highly configurable by design. When all the various options are externalized using the properties component, it is possible to configure an application for different expectations and redeploy it without touching the actual source code at all. Camel is so configurable that you can replace one endpoint with another (for example, replace an HTTP endpoint with JMS) without changing the application code, which we will cover next.

With smart endpoints. Microservices favour RESTish protocols and lightweight messaging rather than Web Services. Camel favors anything. It has HTTP support like no other framework. It has components for the Asynchronous Http Client, the GAE URL fetch service, Apache HTTP Client, Jetty, Netty, Servlet, Restlet and CXF, and multiple data formats for serializing/deserializing messages. In addition, the recent addition of the Rest DSL makes REST a first-class citizen in the Camel world and simplifies creating such services a lot. As for queuing support, OOTB there are connectors for JMS, ActiveMQ, ZeroMQ, Amazon SQS, Amazon SNS, AMQP, Kestrel, Kafka, Stomp - you name it.

Testable. There is no common view on this characteristic. Some favor no testing at all and rely on business metrics. Some cannot afford to have bad business metrics at all.
I like TDD, and for me, having the ability to test my business POJOs in isolation from the actual message flow, and then test the flow separately by mocking some of the external endpoints, is invaluable. Camel's testing support can intercept and mock endpoints, simulate events and verify expectations with ease. Having a well-tested microservice for the expected behavior is the only guarantee that the whole system works as expected.

Provisioned individually. The most important characteristic of microservices is that they run in isolation from other services, most commonly as standalone Java applications. Camel can be embedded in Spring, OSGI or web containers. Camel can also run as a standalone Java application with embedded Jetty endpoints easily. But managing multiple processes, all running in isolation, without a centralized tool is a hard job. This is what Fabric8 is made for. Fabric8 is developed by the same guys who developed Camel and is supported by Red Hat JBoss. It is a polyglot Java application provisioning and management tool that can deploy and manage a variety of Java containers and standalone processes. To find out more about Fabric8, here is a nice post by Christian Posta.

Language neutral. Having small and independently deployed applications allows developers to choose the best-suited language for the given task. Camel has XML, Java, Scala, Groovy and a few other DSLs with similar syntax and capabilities. But if you don't want to use Camel at all for a specific microservice, you can still use Fabric8 to deploy and manage applications written in other languages and run them as native processes.

In summary: microservices are not strictly defined, and that's the beauty. It is a lightweight style of implementing SOA that works. So is Apache Camel. It is not a full-featured ESB, but it can be one as part of JBoss Fuse. It is not a strictly defined, specification-driven project, but a lightweight tool that works, and developers love it.
References

Micro-Service Architecture by Fred George (video)
Micro-Services - Java, the UNIX way by James Lewis
Microservices by Martin Fowler
µServices by Peter Kriens
Micro Services the easy way with Fabric8 by James Strachan (with video)
Fabric8 by Red Hat
Meet Fabric8: An open-source integration platform by Christian Posta

Reference: Apache Camel for Microservice Architectures from our JCG partner Bilgin Ibryam at the OFBIZian blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.