Programming Language Job Trends Part 2 – August 2014

In part 1 of the programming language job trends, we reviewed Java, C++, C#, Objective-C, and Visual Basic. In today's installment, we review trends for PHP, Python, JavaScript, Ruby, and Perl. Watch for part 3 in the next few days, where we will look at some emerging languages and others gaining steam.

First, let's look at the trends from Indeed.com. Much like the languages in part 1, there is a general downward trend over roughly the past two years. JavaScript still leads comfortably, with Python demand staying almost flat during the past two years. Perl has been in a long decline since 2010, but still stays above PHP and Ruby. PHP had stayed flat for a while, but the past year has not been kind, with a more significant downward trend. Ruby trails but, like Python, has been almost flat for close to three years and is closing the gap with PHP and Perl. The stability of the Python and Ruby trends is probably due to their growth in non-startup environments.

Much like part 1 of the job trends, the SimplyHired trends are mostly unusable. The data is definitely close to current, but wild swings in demand show me that I cannot trust it. I will review SimplyHired again in the next installment.

Lastly, we look at the relative growth trends from Indeed.com. Given what the job demand graph shows, it seems surprising that Ruby growth would outpace all of the others so dramatically. However, because these charts reach back to 2006, when Ruby did not have much demand, the growth figure is a little misleading. Among the remaining languages, Python has a clear lead on the others, hovering at 500% for the past three years. PHP and JavaScript come next, but still below 100% growth. Perl lags the group, near -50% growth.

The overall demand trend is similar to the trends in part 1, though these languages show a little more stability. The stability of Ruby and Python is a bright spot in some otherwise dismal trends in part 1 and this part 2. For people looking to learn new languages for web development or even some scripting, Perl seems to be losing relevance; I think Python and Ruby have taken over in both of those cases. Due to the popularity of web CMS systems like WordPress, PHP demand may decline but will stick around for a long time. Please visit the blog in a few days when we look at some relatively newer languages to see if the trends remain the same.

Reference: Programming Language Job Trends Part 2 – August 2014 from our JCG partner Rob Diana at the Regular Geek blog.

Pluggable Knowledge with Custom Assemblers, Weavers and Runtimes

As part of the Bayesian work I've refactored much of Kie to have clean extension points. I wanted to make sure that all the working parts for a Bayesian system could be built without adding any code to the existing core. So now each knowledge type can have its own package, assembler, weaver and runtime. Knowledge is no longer added directly into KiePackage, but instead into an encapsulated knowledge package for that domain, which is then added to KiePackage. The assembler stage is used when parsing and assembling the knowledge definitions. The weaving stage is when those knowledge definitions are woven into an existing KieBase. Finally the runtime encapsulates and provides the runtime for the knowledge.

drools-beliefs contains the Bayesian integration and is a good starting point to see how this works: https://github.com/droolsjbpm/drools/tree/beliefs/drools-beliefs/

For this to work you add a META-INF/kie.conf file and it will be discovered and made available: https://github.com/droolsjbpm/drools/blob/beliefs/drools-beliefs/src/main/resources/META-INF/kie.conf

The file uses the MVEL syntax and specifies one or more services:

[ 'assemblers' : [ new org.drools.beliefs.bayes.assembler.BayesAssemblerService() ],
  'weavers'    : [ new org.drools.beliefs.bayes.weaver.BayesWeaverService() ],
  'runtimes'   : [ new org.drools.beliefs.bayes.runtime.BayesRuntimeService() ] ]

GitHub links to the package and service implementations: Bayes Package, Assembler Service, Weaver Service, Runtime Service.

Here is a quick unit test showing things working end to end; notice how the runtime can be looked up and accessed. It uses the old API in the test, but will work fine with the declarative kmodule.xml approach too. The only bit that is still hard coded is ResourceType.BAYES, because ResourceTypes is an enum. We will probably refactor that to be a standard class instead, so that it is not hard coded.

The code to look up the runtime:

StatefulKnowledgeSessionImpl ksession = (StatefulKnowledgeSessionImpl) kbase.newStatefulKnowledgeSession();
BayesRuntime bayesRuntime = ksession.getKieRuntime(BayesRuntime.class);

The unit test:

KnowledgeBuilder kbuilder = new KnowledgeBuilderImpl();
kbuilder.add( ResourceFactory.newClassPathResource("Garden.xmlbif", AssemblerTest.class),
              ResourceType.BAYES );

KnowledgeBase kbase = getKnowledgeBase();
kbase.addKnowledgePackages( kbuilder.getKnowledgePackages() );

StatefulKnowledgeSessionImpl ksession = (StatefulKnowledgeSessionImpl) kbase.newStatefulKnowledgeSession();

BayesRuntime bayesRuntime = ksession.getKieRuntime(BayesRuntime.class);
BayesInstance instance = bayesRuntime.getInstance( Garden.class );
assertNotNull( instance );

jBPM is already refactored out from core and compiler, although it uses its own interfaces for this. We plan to port the existing jBPM way to this, and eventually all the Drools stuff will be done this way too. This will create a clean KIE core and compiler in which rules, processes, Bayes or any other user knowledge type are all added as plugins.

A community person is also already working on a new type declaration system that will utilise these extensions. Here is an example of what this new type system will look like: https://github.com/sotty/metaprocessor/blob/master/deklare/src/test/resources/test1.ktd

Reference: Pluggable Knowledge with Custom Assemblers, Weavers and Runtimes from our JCG partner Geoffrey De Smet at the Drools & jBPM blog.

Bootstrapping Apache Camel in Java EE7 with WildFly 8

Since Camel version 2.10 there is support for CDI (JSR-299) and DI (JSR-330). This offers new opportunities to develop and deploy Apache Camel projects in Java EE containers, but also in standalone Java SE or CDI containers. Time to try it out and get familiar with it.

What exactly is Camel?

Camel is an integration framework. Some like to call it ESB-lite. But in the end, it is a very developer- and component-focused way of being successful at integration projects. You have more than 80 pre-built components to pick from, and with that it basically provides complete coverage of the Enterprise Integration Patterns, which are well known and state of the art to use. With all that in mind, it is not easy to come up with a single answer. If you need one, it could be something like this: it is messaging technology glue with routing. It joins together messaging start and end points, allowing the transfer of messages from different sources to different destinations.

Why Do I Care?

I'm obviously excited about enterprise-grade software, but I have always been a fan of more pragmatic solutions. There have been some good blog posts about when to use Apache Camel, and with the growing need to integrate different systems over very heterogeneous platforms it is always handy to have a mature solution at hand. Most of the samples out there start by bootstrapping the complete Camel magic, including the XML-based Spring DSL and with it the mandatory dependencies. That blows everything up to an extent I don't want to accept. Knowing that there has to be a lightweight way of doing it (camel-core is 2.5 MB at version 2.13.2), I looked into how to bootstrap it myself and use some of its CDI magic.

The Place to Look for Ideas First

That is obviously the Java EE samples project on GitHub. Some restless community members have collected an awesome amount of examples for you to get started with. The ultimate goal there is to be a reference for how to use the different specifications within the Java EE umbrella. But some first extra bits have also been included, showcasing examples from different areas like NoSQL, Twitter, Quartz scheduling and, last but not least, Camel integration. If you run it as-is on the latest WildFly 8.1, it does not work. The CDI extension of Camel makes this a bit tricky, but as mentioned in the corresponding issue, there is a way to get rid of the ambiguous CDI dependency by just creating a custom veto extension. The issue is filed with Camel, and I hear they are looking into improving the situation. If you want to try out the example, go to my GitHub repository and look for the CamelEE7 project.

How Did I Do It?

The Bootstrap.java is a @Singleton EJB which is loaded on application startup (remember, there are different ways to start things up in Java EE), and by @Injecting an org.apache.camel.cdi.CdiCamelContext you get access to Camel. The tiny example uses another HelloCamel bean to show how to work with payload in the CDI integration. Make sure to look at the CamelCdiVetoExtension.java and how it is configured in the META-INF folder. Now you're ready to go. Happy coding.
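If you cannot browse the repository right now, here is a minimal sketch of what such a bootstrap singleton can look like. Treat it as an illustration of the approach rather than the exact CamelEE7 code: the route, the endpoint URIs and the log category below are made up for this example.

import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.inject.Inject;

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.cdi.CdiCamelContext;

@Singleton
@Startup // eagerly instantiated on application startup
public class Bootstrap {

    @Inject
    private CdiCamelContext context;

    @PostConstruct
    public void init() {
        try {
            // Register a trivial route; a real application would delegate
            // the payload handling to a CDI bean such as HelloCamel.
            context.addRoutes(new RouteBuilder() {
                @Override
                public void configure() {
                    from("timer:greeting?period=5000")
                        .setBody().constant("Hello Camel from Java EE 7!")
                        .to("log:bootstrap?level=INFO");
                }
            });
            context.start();
        } catch (Exception e) {
            throw new IllegalStateException("Could not start Camel", e);
        }
    }

    @PreDestroy
    public void shutdown() {
        try {
            context.stop();
        } catch (Exception e) {
            // nothing sensible to do on shutdown
        }
    }
}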
And The Best For Last

Camel 2.14 is already on the horizon, scheduled to be released in September. If you have issues or wishes you want to see in it, now is the time to speak up! An excerpt of the awesome upcoming features: the Metrics component, a DSL for REST services, and the Swagger component. Time to get excited!

Reference: Bootstrapping Apache Camel in Java EE7 with WildFly 8 from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog.

Introduction to MongoDB Geospatial feature

This post is a quick and simple introduction to the geospatial features of MongoDB 2.6, using a simple dataset and queries.

Storing Geospatial Information

You can store any type of data in MongoDB, but if you want to query it geospatially you need to store coordinates and create an index on them. MongoDB supports three types of indexes for geospatial queries:

- 2d Index: uses simple coordinate pairs (longitude, latitude). As stated in the documentation, the 2d index is intended for legacy coordinate pairs used in MongoDB 2.2 and earlier. For this reason, I won't detail it in this post; just for the record, 2d indexes are used to query data stored as points on a two-dimensional plane.
- 2dsphere Index: supports queries of any geometries on an earth-like sphere; the data can be stored as GeoJSON or legacy coordinate pairs (longitude, latitude). For the rest of the article I will use this type of index, focusing on GeoJSON.
- Geo Haystack Index: used to query over very small areas. It is less used by applications today, and I will not describe it in this post.

So this article will focus on the 2dsphere index with the GeoJSON format to store and query documents.

So what is GeoJSON? You can look at the http://geojson.org/ site for the full specification; here is a very short explanation. GeoJSON is a format for encoding, in JSON, a variety of geographic data structures, and it supports the following types: Point, LineString, Polygon, MultiPoint, MultiLineString, MultiPolygon and GeometryCollection. For the simple geometries, the GeoJSON format is quite straightforward, based on two attributes: type and coordinates.

Let's take some examples. The city where I spent all my childhood, Pleneuf Val-André, France, has the following coordinates (from Wikipedia): 48° 35′ 30.12″ N, 2° 32′ 48.84″ W. This notation is a point expressed as latitude & longitude in the WGS 84 (degrees, minutes, seconds) system. Not very easy to use in application code, which is why it is also possible to represent the exact same point using the following latitude & longitude values in the WGS 84 (decimal degrees) system: 48.5917, -2.5469. These are the coordinates you see used in most of the applications/APIs you work with as a developer (Google Maps/Earth, for example). GeoJSON and MongoDB use these decimal values by default, but the coordinates must be stored in longitude, latitude order, so this point in GeoJSON looks like:

{ "type": "Point", "coordinates": [ -2.5469, 48.5917 ] }

This is a simple Point; let's now look at a line, a very nice walk on the beach:

{ "type": "LineString",
  "coordinates": [
    [ -2.551082, 48.5955632 ],
    [ -2.551229, 48.594312 ],
    [ -2.551550, 48.593312 ],
    [ -2.552400, 48.592312 ],
    [ -2.553677, 48.590898 ]
  ] }

Using the same approach you can create MultiPoint, MultiLineString, Polygon and MultiPolygon documents. It is also possible to mix all of these in a single document using a GeometryCollection.
The following example is a GeometryCollection of a MultiLineString and a Polygon over Central Park:

{ "type" : "GeometryCollection",
  "geometries" : [
    { "type" : "Polygon",
      "coordinates" : [ [ [ -73.9580, 40.8003 ], [ -73.9498, 40.7968 ],
                          [ -73.9737, 40.7648 ], [ -73.9814, 40.7681 ],
                          [ -73.9580, 40.8003 ] ] ] },
    { "type" : "MultiLineString",
      "coordinates" : [ [ [ -73.96943, 40.78519 ], [ -73.96082, 40.78095 ] ],
                        [ [ -73.96415, 40.79229 ], [ -73.95544, 40.78854 ] ],
                        [ [ -73.97162, 40.78205 ], [ -73.96374, 40.77715 ] ],
                        [ [ -73.97880, 40.77247 ], [ -73.97036, 40.76811 ] ] ] } ] }

Note: if you want, you can test/visualize these JSON documents using the http://geojsonlint.com/ service.

Now what? Let's store data! Once you have a GeoJSON document you just need to store it in your document. For example, if you want to store a document about JFK airport with its location, you can run the following command:

db.airports.insert(
  { "name" : "John F Kennedy Intl",
    "type" : "International",
    "code" : "JFK",
    "loc" : { "type" : "Point", "coordinates" : [ -73.778889, 40.639722 ] } } );

Yes, it is that simple! You just save the GeoJSON as one of the attributes of the document (loc in this example).

Querying Geospatial Information

Now that we have the data stored in MongoDB, it is possible to use the geospatial information to do some interesting queries. For this we need a sample dataset. I have created one using some open data found in various places. This dataset contains the following information:

- airports collection with the list of US airports (Point)
- states collection with the list of US states (MultiPolygon)

I created this dataset from various open data sources (http://geocommons.com/, http://catalog.data.gov/dataset) and used toGeoJSON to convert them into the proper format.

Let's install the dataset:

- Download it from here
- Unzip the geo.zip file
- Restore the data into your MongoDB instance using the following command: mongorestore geo.zip

MongoDB allows applications to do the following types of queries on geospatial data: inclusion, intersection and proximity. Obviously, you can use all the other operators in addition to the geospatial ones. Let's now look at some concrete examples.

Inclusion

Find all the airports in California. For this you need to get the California location (Polygon) and use the $geoWithin operator in the query. From the shell it looks like:

use geo
var cal = db.states.findOne( { code : "CA" } );
db.airports.find(
  { loc : { $geoWithin : { $geometry : cal.loc } } },
  { name : 1, type : 1, code : 1, _id : 0 }
);

Result:

{ "name" : "Modesto City - County", "type" : "", "code" : "MOD" }
...
{ "name" : "San Francisco Intl", "type" : "International", "code" : "SFO" }
{ "name" : "San Jose International", "type" : "International", "code" : "SJC" }
...

So the query takes the "California MultiPolygon" and looks in the airports collection to find all the airports that lie within these polygons. On a map, this result looks like the airports plotted inside the California polygons.

You can use any other query features or criteria; for example, you can limit the query to international airports only, sorted by name:

db.airports.find(
  { loc : { $geoWithin : { $geometry : cal.loc } }, type : "International" },
  { name : 1, type : 1, code : 1, _id : 0 }
).sort({ name : 1 });

Result:

{ "name" : "Los Angeles Intl", "type" : "International", "code" : "LAX" }
{ "name" : "Metropolitan Oakland Intl", "type" : "International", "code" : "OAK" }
{ "name" : "Ontario Intl", "type" : "International", "code" : "ONT" }
{ "name" : "San Diego Intl", "type" : "International", "code" : "SAN" }
{ "name" : "San Francisco Intl", "type" : "International", "code" : "SFO" }
{ "name" : "San Jose International", "type" : "International", "code" : "SJC" }
{ "name" : "Southern California International", "type" : "International", "code" : "VCV" }

I do not know if you have looked in detail, but we are querying these documents with no index. You can run the query with explain() to see what's going on. The $geoWithin operator does not need an index, but your queries will be more efficient with one, so let's create it:

db.airports.ensureIndex( { "loc" : "2dsphere" } );

Run the explain again and you will see the difference.

Intersection

Suppose you want to know all the states adjacent to California. For this we just need to search for all the states that have coordinates that "intersect" with California. This is done with the following query:

var cal = db.states.findOne( { code : "CA" } );
db.states.find(
  { loc : { $geoIntersects : { $geometry : cal.loc } }, code : { $ne : "CA" } },
  { name : 1, code : 1, _id : 0 }
);

Result:

{ "name" : "Oregon", "code" : "OR" }
{ "name" : "Nevada", "code" : "NV" }
{ "name" : "Arizona", "code" : "AZ" }

As before, the $geoIntersects operator does not need an index to work, but it will be more efficient with the following index:

db.states.ensureIndex( { loc : "2dsphere" } );

Proximity

The last feature that I want to highlight in this post is querying with proximity criteria. Let's find all the international airports that are located less than 20 km from the reservoir in NYC's Central Park. For this you use the $near operator:

db.airports.find(
  { loc : { $near : { $geometry : { type : "Point", coordinates : [ -73.965355, 40.782865 ] },
                      $maxDistance : 20000 } },
    type : "International" },
  { name : 1, code : 1, _id : 0 }
);

Result:

{ "name" : "La Guardia", "code" : "LGA" }
{ "name" : "Newark Intl", "code" : "EWR" }

So this query returns two airports, the closest being La Guardia, since the $near operator sorts the results by distance. It is also important to note that the $near operator requires an index.
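If you are accessing MongoDB from Java rather than the shell, the same proximity query translates almost one-to-one to the Java driver. Here is a minimal sketch assuming the 2.x driver API and the geo dataset restored above on a local instance (the class and variable names are just illustrative):

import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.DBObject;
import com.mongodb.MongoClient;

public class NearbyAirports {

    public static void main(String[] args) throws Exception {
        MongoClient mongo = new MongoClient("localhost", 27017);
        try {
            DBCollection airports = mongo.getDB("geo").getCollection("airports");

            // GeoJSON point for the Central Park reservoir (longitude first!)
            DBObject point = new BasicDBObject("type", "Point")
                    .append("coordinates", new double[]{-73.965355, 40.782865});

            // $near with $maxDistance in meters, restricted to international airports
            DBObject query = new BasicDBObject("loc",
                    new BasicDBObject("$near", new BasicDBObject("$geometry", point)
                            .append("$maxDistance", 20000)))
                    .append("type", "International");

            DBObject projection = new BasicDBObject("name", 1).append("code", 1).append("_id", 0);

            DBCursor cursor = airports.find(query, projection);
            try {
                while (cursor.hasNext()) {
                    System.out.println(cursor.next());
                }
            } finally {
                cursor.close();
            }
        } finally {
            mongo.close();
        }
    }
}

The BasicDBObject chain mirrors the shell document one-to-one, so the other shell examples above translate just as mechanically.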
Conclusion

In this first post about geospatial features you have learned the basics of GeoJSON, and how to query documents with inclusion, intersection and proximity criteria. You can now play more with this, for example by integrating it into an application that exposes data in a UI, or by looking at how you can use the geospatial operators in an aggregation pipeline.

Reference: Introduction to MongoDB Geospatial feature from our JCG partner Tugdual Grall at the Tug's Blog blog.

Big Java News in Late Summer 2014

As is typical when JavaOne is imminent, there has been a lot of big news in the Java community recently. This post briefly references three of these items (Java SE 8 updates, Java SE 9, and Java EE 8) and a "bonus" reference to a post I found to be one of the clearer ones I have seen on classpath/classloader issues.

String Deduplication in the Oracle Java 8 JVM

In String Deduplication – A new feature in Java 8 Update 20, Fabian Lange introduces string deduplication for the G1 garbage collector using the JVM option -XX:+UseStringDeduplication that was introduced with JDK 8 Update 20. The tools page for the Java launcher has been updated to mention the JVM options -XX:+UseStringDeduplication, -XX:+PrintStringDeduplicationStatistics, and -XX:StringDeduplicationAgeThreshold. More details on JDK 8 Update 20 are available in the blog post Release: Oracle Java Development Kit 8, Update 20. The Lange post has also sparked discussion on this and related JVM options on the Java subreddit.

Java 9 Features

Java 9 has been the hot topic of discussion in the Java community since the OpenJDK JDK 9 Project was announced. Long-awaited Java modularity (Project Jigsaw, which was booted from JDK 8) is probably the largest new feature anticipated for Java 9. Paul Krill writes in Why developers should get excited about Java 9 that "Jigsaw isn't the only new addition slated for Java 9. Support for the popular JSON (JavaScript Object Notation) data interchange format is key feature as well, along with process API, code cache, and locking improvements." The six JEPs currently proposed on the OpenJDK JDK 9 page are 102 (Process API Updates), 143 (Improve Contended Locking), 197 (Segmented Code Cache), 198 (Light-Weight JSON API), 199 (Smart Java Compilation, Phase 2), and 201 (Modular Source Code). In the blog post Java 9 is coming with money api, otaviojava introduces JSR 354 ("Money and Currency API"), describes why it is needed, covers how it might be implemented, and concludes that "this API is expected to [be in] Java 9."

Java EE 8

Reza Rahman's post Java EE 8 Takes Off! talks about JSR 366 (Java EE 8 Specification) being kicked off. The post lists some of the anticipated high-level content for Java EE 8 along with links to related JSRs.

Demystifying the Java Classpath

Java classpath issues are definitely among the more difficult challenges that Java developers can face. The post Jar Hell made Easy – Demystifying the classpath with jHades provides a nice overview of some of the most common issues related to classpaths and classloaders, with concise and simple explanations of why they occur. I have not used jHades, but the quality of this post has definitely sparked my interest in that tool.

Conclusion

"Java" (SE, EE, JVM, etc.) keeps advancing and bringing us new language features, libraries, and tools. This post has referenced posts that highlight recent developments in JDK 8, JDK 9, and Java EE 8.

Reference: Big Java News in Late Summer 2014 from our JCG partner Dustin Marx at the Inspired by Actual Events blog.
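As a practical footnote to the string deduplication item above: enabling it is only a matter of launcher flags, and it works solely with the G1 collector, which has to be switched on explicitly in JDK 8 (the jar name below is just a placeholder):

java -XX:+UseG1GC -XX:+UseStringDeduplication -XX:+PrintStringDeduplicationStatistics -jar myapp.jar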

Everything Developers Need To Know About xPaaS

I've been reading a lot about Red Hat products lately, and having been interested in cloud and such for some years now, it is pretty obvious for me to look into the cloud offerings from Red Hat in more detail. Arun did a great overview of JBoss xPaaS back in April this year, and I thought it might be time to not only give you an overview but also point you to all the relevant information that interested developers need to know about. If I missed something, or you're stuck somewhere, don't forget to reach out to me and let me know!

xPaaS = aPaaS, iPaaS, bpmPaaS, dvPaaS, mPaaS + OpenShift

A tiny overview to get you up to speed. To make it simple, JBoss xPaaS services is another name for having all the powerful capabilities of JBoss Middleware available as cloud-based services, ready for use on OpenShift. A main differentiator from others is that it is not just a bunch of services with little to no integration; it is a complete set of pre-built, ready-to-use, integrated services. For those interested in why it is called xPaaS: Gartner uses the term xPaaS to describe the whole spectrum of specialized middleware services that can be offered as PaaS. Red Hat has the complete implementation.

More basic information:
- JBoss xPaaS Services at OpenShift (openshift.com/xpaas)
- Official Landing Page (red.ht/xpaas)
- Red Hat Summit JBoss Middleware Keynote (youtube.com)
- Mark Little about xPaaS (community.jboss.org)
- Gartner's Magic Quadrant for On-Premise Application Platforms (Press Release, Gartner Report)

Time to dig deeper into the individual pieces. The idea here is to break up the streamlined names a bit and map them to the individual products and upstream projects used in them. Note: some features on OpenShift are in Alpha release state, designed and provided for developers to experiment with and explore. For the iPaaS and bpmPaaS offerings, which can be deployed in the free OpenShift Online gears, it is recommended to use medium or large gears for optimum performance.

aPaaS = JBoss Application Hosting + OpenShift

The app-container services of OpenShift for Java EE 6 with Red Hat JBoss EAP/JBoss AS and Java EE 7 with WildFly have been there for more than two years already. This is the foundation of everything in the xPaaS family. To keep it DRY, I put everything which is OpenShift related into this section.

More basic information:
- JBoss Application Hosting on OpenShift
- OpenShift Getting Started Guide

OpenShift Quickstarts and Cartridges:
- OpenShift WildFly 8 Quickstart
- OpenShift EAP 6.1/6.2 Cartridge

Blogs to follow:
- Arun Gupta's Blog
- Thomas Qvarnström
- JBoss Tech Blog

Various Developer Links:
- WildFly Website
- Java EE Samples on GitHub
- OpenShift Accelerator Program
- OpenShift GitHub
- Community Cartridges for OpenShift
- EAP Product Documentation

iPaaS = JBoss Fuse && JBoss Data Virtualization + OpenShift

The integration services consist of two separate offerings at the moment: one is the JBoss Fuse enterprise service bus, and the other is JBoss Data Virtualization.
More basic information:
- Integration Services on OpenShift
- JBoss Fuse on OpenShift
- JBoss Data Virtualization on OpenShift

OpenShift Quickstarts and Cartridges:
- Fuse Getting Started Guide
- Fuse Quickstart
- Data Virtualization Getting Started Guide
- Data Virtualization Quickstart

Blogs to follow:
- The Open Universe
- Christina
- James Strachan's Blog

Various Developer Links:
- Samples and Demos by Kenny Peeples on GitHub
- Demo of Fuse 6.1 with Apache Camel and hawtio on OpenShift
- JBoss Fuse on GitHub
- JBoss Data Virtualization on GitHub
- Data Virtualization Product Documentation
- Fuse Product Documentation

bpmPaaS = JBoss BPM Suite + OpenShift

Business Process Management (BPM) and Business Rules Management (BRM) are the most important parts of this offering.

More basic information:
- JBoss BPM Suite Product Overview (jboss.org/products/bpmsuite/overview/)
- Frequently Asked Questions

OpenShift Quickstarts and Cartridges:
- BPM Suite on OpenShift Getting Started Guide
- BPM Suite Quickstart

Blogs to follow:
- Eric D. Schabell

Various Developer Links:
- How to Use Rules and Events to Drive JBoss BRMS Cool Store
- Cool Store for xPaaS
- Developer Materials on jboss.org
- Feedback and Support
- Official Product Documentation

mPaaS = AeroGear UnifiedPush Server + OpenShift

The AeroGear UnifiedPush Server allows for sending native push messages to different mobile operating systems. This initial community version of the server supports Apple's Push Notification Service (APNs), Google Cloud Messaging (GCM) and Mozilla's SimplePush.

More basic information:
- AeroGear Push 0.X on OpenShift

OpenShift Quickstarts and Cartridges:
- AeroGear Quickstart on OpenShift

Blogs to follow:
- chat & code by Corinne
- Matthias Wessendorf's Weblog
- Bruno Oliviera's Blog

Various Developer Links:
- AeroGear Project Website
- Mobile Push Simplified With The AeroGear Push Server On OpenShift
- AeroGear Documentation
- AeroDoc push notification application, step by step
- How to use the UnifiedPush Server

That's it for a first overview. Let me know if you're missing something. I am committed to closing the gap and making working and developing with xPaaS a fun and productive experience.

Reference: Everything Developers Need To Know About xPaaS from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog.

Secure REST services using Spring Security

Overview

Recently, I was working on a project which uses a REST services layer to communicate with the client application (a GWT application). I spent a lot of time figuring out how to secure the REST services with Spring Security, so this article describes the solution I found and implemented. I hope it will be helpful to someone and will save much valuable time.

The solution

In a normal web application, whenever a secured resource is accessed, Spring Security checks the security context for the current user and decides either to forward him to the login page (if the user is not authenticated) or to forward him to the resource-not-authorised page (if he doesn't have the required permissions). In our scenario this is different: we don't have pages to forward to, so we need to adapt and override Spring Security to communicate using HTTP status codes only. Below I list the things to do to make Spring Security work best:

- The authentication is managed by the normal form login; the only difference is that the response will be JSON along with an HTTP status code, which can be either 200 (if the authentication passed) or 401 (if the authentication failed).
- Override the AuthenticationFailureHandler to return the code 401 UNAUTHORIZED.
- Override the AuthenticationSuccessHandler to return the code 200 OK; the body of the HTTP response contains the JSON data of the currently authenticated user.
- Override the AuthenticationEntryPoint to always return the code 401 UNAUTHORIZED. This overrides the default behavior of Spring Security, which forwards the user to the login page if he doesn't meet the security requirements, because in REST we don't have any login page.
- Override the LogoutSuccessHandler to return the code 200 OK.

As in a normal web application secured by Spring Security, before accessing a protected service it is mandatory to first authenticate by submitting the username and password to the login URL.

Note: the following solution requires Spring Security version 3.2 at minimum.

Overriding the AuthenticationEntryPoint

The class implements org.springframework.security.web.AuthenticationEntryPoint with only one method, which sends a response error (with the 401 status code) in case of an unauthorized attempt:

@Component
public class HttpAuthenticationEntryPoint implements AuthenticationEntryPoint {
    @Override
    public void commence(HttpServletRequest request, HttpServletResponse response,
                         AuthenticationException authException) throws IOException {
        response.sendError(HttpServletResponse.SC_UNAUTHORIZED, authException.getMessage());
    }
}

Overriding the AuthenticationSuccessHandler

The AuthenticationSuccessHandler is responsible for what to do after a successful authentication; by default it redirects to a URL, but in our case we want it to send an HTTP response with data.
@Component
public class AuthSuccessHandler extends SavedRequestAwareAuthenticationSuccessHandler {

    private static final Logger LOGGER = LoggerFactory.getLogger(AuthSuccessHandler.class);

    private final ObjectMapper mapper;

    @Autowired
    AuthSuccessHandler(MappingJackson2HttpMessageConverter messageConverter) {
        this.mapper = messageConverter.getObjectMapper();
    }

    @Override
    public void onAuthenticationSuccess(HttpServletRequest request, HttpServletResponse response,
                                        Authentication authentication) throws IOException, ServletException {
        response.setStatus(HttpServletResponse.SC_OK);

        NuvolaUserDetails userDetails = (NuvolaUserDetails) authentication.getPrincipal();
        User user = userDetails.getUser();
        userDetails.setUser(user);

        LOGGER.info(userDetails.getUsername() + " is connected");

        PrintWriter writer = response.getWriter();
        mapper.writeValue(writer, user);
        writer.flush();
    }
}

Overriding the AuthenticationFailureHandler

The AuthenticationFailureHandler is responsible for what to do after a failed authentication; by default it redirects to the login page URL, but in our case we just want it to send an HTTP response with the 401 UNAUTHORIZED code.

@Component
public class AuthFailureHandler extends SimpleUrlAuthenticationFailureHandler {
    @Override
    public void onAuthenticationFailure(HttpServletRequest request, HttpServletResponse response,
                                        AuthenticationException exception) throws IOException, ServletException {
        response.setStatus(HttpServletResponse.SC_UNAUTHORIZED);

        PrintWriter writer = response.getWriter();
        writer.write(exception.getMessage());
        writer.flush();
    }
}

Overriding the LogoutSuccessHandler

The LogoutSuccessHandler decides what to do if the user logged out successfully; by default it redirects to the login page URL. Because we don't have one, I overrode it to return an HTTP response with the 200 OK code.

@Component
public class HttpLogoutSuccessHandler implements LogoutSuccessHandler {
    @Override
    public void onLogoutSuccess(HttpServletRequest request, HttpServletResponse response,
                                Authentication authentication) throws IOException {
        response.setStatus(HttpServletResponse.SC_OK);
        response.getWriter().flush();
    }
}

Spring Security configuration

This is the final step: putting everything we did together. I prefer the new way to configure Spring Security, which is Java-based instead of XML, but you can easily adapt this configuration to XML.

@Configuration
@EnableWebSecurity
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {

    private static final String LOGIN_PATH = ApiPaths.ROOT + ApiPaths.User.ROOT + ApiPaths.User.LOGIN;

    @Autowired
    private NuvolaUserDetailsService userDetailsService;
    @Autowired
    private HttpAuthenticationEntryPoint authenticationEntryPoint;
    @Autowired
    private AuthSuccessHandler authSuccessHandler;
    @Autowired
    private AuthFailureHandler authFailureHandler;
    @Autowired
    private HttpLogoutSuccessHandler logoutSuccessHandler;

    @Bean
    @Override
    public AuthenticationManager authenticationManagerBean() throws Exception {
        return super.authenticationManagerBean();
    }

    @Bean
    @Override
    public UserDetailsService userDetailsServiceBean() throws Exception {
        return super.userDetailsServiceBean();
    }

    @Bean
    public AuthenticationProvider authenticationProvider() {
        DaoAuthenticationProvider authenticationProvider = new DaoAuthenticationProvider();
        authenticationProvider.setUserDetailsService(userDetailsService);
        authenticationProvider.setPasswordEncoder(new ShaPasswordEncoder());

        return authenticationProvider;
    }

    @Override
    protected void configure(AuthenticationManagerBuilder auth) throws Exception {
        auth.authenticationProvider(authenticationProvider());
    }

    @Override
    protected AuthenticationManager authenticationManager() throws Exception {
        return super.authenticationManager();
    }

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.csrf().disable()
            .authenticationProvider(authenticationProvider())
            .exceptionHandling()
                .authenticationEntryPoint(authenticationEntryPoint)
            .and()
            .formLogin()
                .permitAll()
                .loginProcessingUrl(LOGIN_PATH)
                .usernameParameter(USERNAME)
                .passwordParameter(PASSWORD)
                .successHandler(authSuccessHandler)
                .failureHandler(authFailureHandler)
            .and()
            .logout()
                .permitAll()
                .logoutRequestMatcher(new AntPathRequestMatcher(LOGIN_PATH, "DELETE"))
                .logoutSuccessHandler(logoutSuccessHandler)
            .and()
            .sessionManagement()
                .maximumSessions(1);

        http.authorizeRequests().anyRequest().authenticated();
    }
}
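With this configuration in place, any plain HTTP client can exercise the login endpoint and observe the status codes produced by the handlers above. Here is a minimal, JDK-only sketch; the URL and the credentials are placeholders, since the real LOGIN_PATH is assembled from the application's ApiPaths constants:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class LoginClient {

    public static void main(String[] args) throws Exception {
        // Hypothetical login URL; adjust it to your application's LOGIN_PATH
        URL url = new URL("http://localhost:8080/api/users/login");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

        byte[] body = "username=john&password=secret".getBytes(StandardCharsets.UTF_8);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body);
        }

        // 200 means AuthSuccessHandler wrote the user as JSON,
        // 401 means AuthFailureHandler (or the entry point) was triggered
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}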
This was a sneak peek at the overall configuration. I have attached a GitHub repository containing a sample project: https://github.com/imrabti/gwtp-spring-security. I hope this will help some of you developers struggling to figure out a solution; please feel free to ask any questions, or post any enhancements that can make this solution better.

Reference: Secure REST services using Spring Security from our JCG partner Idriss Mrabti at the Fancy UI blog.

Analysing the performance degradation/improvements of a Java EE application with interceptors

When you are developing a Java EE application with certain performance requirements, you have to verify that these requirements are fulfilled before each release. A Hudson job that nightly executes a bunch of test measurements on some specific hardware platform is what you may think of. You can check the achieved timings and compare them with the given requirements. If the measured values deviate from the requirements too much, you can either break the build or at least send an email to the team.

But how do you measure the execution time of your code? The very first thought might be to add thousands of insertions of time-measuring code into your code base. But this is not only a lot of work; it also has an impact on the performance of your code, as the time measurements would now also be executed in production. To get rid of the many insertions you might want to leverage an aspect-oriented programming (AOP) framework that introduces the code for time measurements at compile time. This way you have at least two versions of your application: the one with and the one without the additional overhead. Measuring performance at some production site still requires a redeployment of your code, and you have to decide at compile time which methods you want to observe.

Java EE therefore comes with an easy-to-use alternative: interceptors. This is where the inversion-of-control pattern plays out its advantages. As the application server invokes your bean methods/web service calls, it is easy for it to intercept these invocations and provide you a way of adding code before and after each invocation.

Using interceptors is then fairly easy. You can either add an annotation to your target method or class that references your interceptor implementation, or you can add the interceptor using the deployment descriptor:

@Interceptors(PerformanceInterceptor.class)
public class CustomerService {
    ...
}

The same information supplied in the deployment descriptor looks like this:

<interceptor-binding>
    <target-name>myapp.CustomerService</target-name>
    <interceptor-class>myapp.PerformanceInterceptor.class</interceptor-class>
</interceptor-binding>

The interceptor itself can be a simple POJO class with a method that is annotated with @AroundInvoke and takes one argument:

@AroundInvoke
public Object measureExecutionTime(InvocationContext ctx) throws Exception {
    long start = System.currentTimeMillis();
    try {
        return ctx.proceed();
    } finally {
        long time = System.currentTimeMillis() - start;
        Method method = ctx.getMethod();
        RingStorage ringStorage = RingStorageFactory.getRingStorage(method);
        ringStorage.addMeasurement(time);
    }
}

Before the try block and in the finally block we add our code for the time measurement. As can be seen from the code above, we also need some in-memory location where we can store the last measurement values in order to compute, for example, a mean value and the deviation from it. In this example we have a simple ring storage implementation that overrides old values after some time.

But how do we expose these values to the outside world? As many other values of the application server are exposed over the JMX interface, we can implement a simple MXBean interface as shown in the following code snippet:

public interface PerformanceResourceMXBean {
    long getMeanValue();
}

public class RingStorage implements PerformanceResourceMXBean {
    private String id;

    public RingStorage(String id) {
        this.id = id;
        registerMBean();
        ...
    }

    private void registerMBean() {
        try {
            ObjectName objectName = new ObjectName("performance" + id + ":type=" + this.getClass().getName());
            MBeanServer platformMBeanServer = ManagementFactory.getPlatformMBeanServer();
            try {
                platformMBeanServer.unregisterMBean(objectName);
            } catch (Exception e) {
            }
            platformMBeanServer.registerMBean(this, objectName);
        } catch (Exception e) {
            throw new IllegalStateException("Problem during registration:" + e);
        }
    }

    @Override
    public long getMeanValue() {
        ...
    }
    ...
}

Now we can start jconsole and query the exposed MXBean for the mean value. Writing a small JMX client application that writes the sampled values into, for example, a CSV file enables you to process these values later on and compare them with later measurements. This gives you an overview of the evolution of your application's performance.
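Such a client only needs the standard javax.management remote API. Here is a minimal sketch, assuming the application server was started with remote JMX enabled on port 9999 and using the object-name pattern from the RingStorage above (the id CustomerService and the class name are hypothetical):

import java.io.FileWriter;
import java.io.PrintWriter;

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class PerformanceCsvDumper {

    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection connection = connector.getMBeanServerConnection();

            // Matches the "performance" + id + ":type=" + class pattern used above
            ObjectName name = new ObjectName(
                    "performanceCustomerService:type=myapp.RingStorage");
            Long meanValue = (Long) connection.getAttribute(name, "MeanValue");

            // Append one timestamped sample per run
            PrintWriter csv = new PrintWriter(new FileWriter("measurements.csv", true));
            try {
                csv.println(System.currentTimeMillis() + ";" + meanValue);
            } finally {
                csv.close();
            }
        } finally {
            connector.close();
        }
    }
}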
Conclusion

With interceptors, it is easy to dynamically add performance measurement capabilities to an existing Java EE application through the deployment descriptor. If you expose the measured values over JMX, you can apply further processing of the values afterwards.

Reference: Analysing the performance degradation/improvements of a Java EE application with interceptors from our JCG partner Martin Mois at the Martin's Developer World blog.

5 Things I Do to Stay Relevant

I have noticed that some Finnish IT professionals are complaining that being just a good employee isn't good enough anymore. These people argue that they cannot get a job because:

- Their work experience isn't worth anything because they have no experience with technology X that is hot right now.
- They are too old (over 40).
- They have a life outside work, and that is why they don't have time to learn new technologies.

I could argue that these reasons are just excuses and these people just aren't good enough. I am not going to do this because:

- I don't want to be a dick.
- I am getting older (I am 36 at the moment), and if age discrimination is a real problem, I should definitely be worried about it.

On the other hand, I think that it is stupid to worry about something and not do anything about it. That is why I decided to take my destiny into my own hands and ensure that I am still relevant when I am over 40 years old. Here are five things I do to stay relevant:

1. I Learn at Work

I spend 8 hours of every business day at work. That is a lot of time, and I want to take advantage of it. Does this mean that I spend all this time learning new things and ignore my work? No. It means that I learn new things while I am doing my work. My main priority is to keep my customers happy, and learning new things at work helps me achieve this goal. This might sound a bit weird because learning new things takes time. Shouldn't I spend this time working for my customer?

I claim that I can learn new things, work for my customer, and save my customer's money (or provide more value) at the same time. I can do this because I am constantly looking for ways to work smarter. If I see something that helps me achieve this, I will start using it. However, this doesn't mean that I make this decision lightly. I evaluate the pros and cons of each new technology and use it only if its pros outweigh its cons. Luckily, I don't have to do this alone. We have a lot of great developers, and I can always ask their opinion when I need it. I don't always like their answers, but that is a good thing because it helps me see things from another perspective.

Here are some examples of libraries/frameworks/programming languages that I have learned at work during the last three years:

- Frontend: JavaScript, Bower, Gulp, NPM, jQuery, Backbone.js, Marionette.js, Angular.js, Twitter Bootstrap, and a lot of other libraries that have weird names.
- Backend: Spring Batch, Spring Data JPA, Spring Data Solr, and Spring Social.
- Testing: AssertJ, Hamcrest, Spring MVC Test, and Spring-Test-DbUnit.
- Software development: software design, automated testing techniques, agile, and using common sense.

I am sorry that I turned this example into buzzword bingo, but that was necessary to demonstrate how much a developer can learn at work.

2. I Read (a Lot)

I think that if I want to stay relevant, I need to be able to identify "hot" technologies. Also, I need to improve my technical, business, and human skills. One way to do this is to read, and since I love reading, I read a lot. At the moment my reading looks like this:

- I follow relevant "news" sites such as DZone, Reddit, and Hacker News. I won't read every popular article or discussion, but these sites help me identify trends and see what technologies are "hot" right now. Also, sometimes I find an article or discussion that teaches me something new.
- I read interesting blogs.
When I feel like learning something new, I open my feed reader and pick one or two blog posts that I read right away. When I am done, I mark all other blog posts as read. The reason I do this is that at the moment I have about 100 blogs in my feed reader, and it would take too much time to read every blog post. Thus, I prioritize.

- I read 5-10 software development books a year. I love blogs, but a good software development book fulfils a totally different need. If I want to get as much information about X as possible, I read a book (or books), because it is a lot easier than trying to find all this information on the internet. Also, I know that this is a bit old-fashioned, but when I buy a book published by a respected publisher, I can trust that the book contains correct information.
- I read 5-10 other non-fiction books a year. Although software development is my passion, I am interested in other things as well. Typically I read books about entrepreneurship, marketing, psychology, product development, and agile "processes". Also, I think that reading these books makes me a better software developer, because writing code is only a small part of my job. If I want to add value to my customers, I need to understand a lot of other things as well, and reading non-fiction books helps me achieve that goal.

3. I Write a Blog

I started writing a blog because it felt like a fun thing to do. I was right. It is fun, but writing a blog has other benefits as well:

- It helps me learn new things. There are three ways writing a blog helps me learn something new. First, the truth is that I write some of my tutorials because I want to learn a new library/framework/tool, and writing a tutorial is a good way to ensure that I actually do it. Second, writing helps me clarify my thoughts, and often I notice something I haven't thought of before. Third, I reply to the comments left on my blog posts, and since I don't usually know the answer right away, I have to do some investigation before I can write a helpful answer. In other words, I learn new things by answering my readers' questions.
- It helps me get feedback from other developers. I know that I don't know everything and that I can be wrong. When I publish my thoughts on my blog, everyone who reads them can give an opinion. Sometimes these comments help me understand that I am not right, and this is very valuable to me because my goal is not to be right. My goal is to make people think, and I hope that they return the favor by leaving a comment on my blog posts.
- It helps me build an online presence and a "brand". Let's assume I am applying for a new job or trying to find a new business partner. What happens when these persons google me and find nothing? This might not be a deal breaker, but I think that my blog gives me an edge over persons who are otherwise "as good as I am" but don't have a blog. I think this way because I believe that my blog "proves" that I can learn new things (if a person takes the time to read some of my older blog posts and compare them with my newer ones, he/she will see that my thinking has evolved) and that I am an expert in my field. The latter sounds a bit narcissistic, but I think that my blog posts give the impression that I know what I am talking about. If I didn't write a blog, these persons would just have to take my word for it.

4. I Am Active on Social Media

I use social media for sharing content created by other people, sharing my own content, and having fun.
The social media "gurus" state that this should help me brand myself as an expert, but I have to admit that I haven't really paid any attention to this. In other words, I don't have a social media strategy. At the moment I am using the following social media services:

- GitHub is kind of a no-brainer if you are a developer. At the moment I publish the example applications of my blog posts on GitHub, and I use it to follow interesting projects created by other developers.
- Google+ is a bit of a mystery to me, but I decided to try it out because having civilized discussions is so much easier when I can use more than 140 characters. Also, I really like Google+ communities because they provide me an easy way to find interesting content and have civilized discussions. I am also the owner of the Google+ community called Java Testing Society.
- LinkedIn is the place to be if you want to connect with other professionals. Although recruitment spam has made LinkedIn a bit less interesting to me, I think that I can still benefit from sharing my blog posts on LinkedIn. Also, I haven't used LinkedIn groups yet, and I am going to pay more attention to this in the future.
- Twitter is a great place to find and share interesting content. I use it mostly because it is fun and it doesn't really take that much time. The downside of Twitter is that it is "impossible" to have civilized discussions because you cannot use more than 140 characters.
- YouTube is the place to be if you want to publish video tutorials (or watch them). I have published a few video tutorials, but I have to admit that at the moment I want to concentrate on other things. However, I will record more video tutorials some day. I promise.

So, how does this help me stay relevant? I think that social media helps me discover "hot" technologies and learn new things. Also, it helps me grow my network, and having a large network is useful if you are looking for a job or a business partner.

5. I Work Out

This is the last thing on my list, but it isn't the least important one. I have noticed that working out helps me reduce stress and avoid the physical problems caused by sitting at work. I go to the gym three times a week and do aerobic exercise twice a week (I don't do any aerobic exercise when I am on holiday, though). I know that this sounds a bit excessive, but it works for me, and that is all that matters.

By the way, there was a time when I hated physical exercise. At that time I was stressed out, I had very low energy levels, and I had a weird pain between my ribs. In other words, I was a wreck. Then I decided to start working out. It was one of the best decisions I have ever made. Now I am stress-free, my energy levels have skyrocketed, and the pain is gone. I feel great, and this helps me concentrate on the other things that will help me stay relevant.

Is This Good Enough?

Who knows? I don't know what will happen in the future. However, I do know that doing something is a lot better than doing nothing. I admit that I am lucky because I don't have to do these things. I can do them because I enjoy them, and that is why I think that no matter what happens in the future, I can feel proud of myself.

Reference: 5 Things I Do to Stay Relevant from our JCG partner Petri Kainulainen at the Petri Kainulainen blog.

The next IT revolution: micro-servers and local cloud

Have you ever counted the number of Linux devices at home or work that haven't been updated since they came out of the factory? Your cable/fibre/ADSL modem, your WiFi access point, television sets, NAS storage, routers/bridges, media centres, etc. Typically this class of devices hosts a proprietary hardware platform, an embedded proprietary Linux and a proprietary application. If you are lucky, you are able to log into a web GUI, often using the admin/admin credentials, and upload a new firmware blob. This firmware blob is frequently hard to locate on the hardware supplier's website. No wonder the NSA and others love to look into potential firmware bugs. They are an ideal source of undetected wiretapping.

The next IT revolution: micro-servers

The next IT revolution is about to happen, however. Those proprietary hardware platforms will soon give way to commodity multi-core processors from ARM, Intel, etc. General-purpose operating systems will replace legacy proprietary and embedded predecessors. Proprietary and static single-purpose apps will be replaced by marketplaces and multiple apps running on one device. Security updates will be sent regularly. Devices and apps will be easy to manage remotely. The next revolution will be about managing millions of micro-servers and the apps on top of them. These micro-servers will behave like a mix of phone apps, Docker containers, and cloud servers. Managing them will be like managing a "local cloud", sometimes also called fog computing.

Micro-servers and IoT?

Are micro-servers some form of Internet of Things? Yes, they can be, but not all the time. If you have a smart hub that controls your home or office, then it is pure IoT. However, if you have a router, firewall, fibre modem, micro-antenna station, etc., then the micro-server will just be an improved version of its predecessor.

Why should you care about micro-servers?

If you are a mobile app developer, then the micro-server revolution will be your next battlefield. Local clouds need "Angry Birds"-like successes. If you are a telecom or network developer, then the next generation of micro-servers will give you unseen potential to combine traffic shaping with parental control with QoS with security with... If you are a VC, then micro-server solution providers are the type of startup you want to invest in. If you are a hardware vendor, then this is the type of device or SoC you want to build. If you are a Big Data expert, then imagine the new data tsunami these devices will generate. If you are a machine learning expert, then you might want to look at algorithms and models that are easy to execute on constrained devices once they have been trained on potentially thousands of cloud servers and petabytes of data. If you are a DevOps engineer, then your next challenge will be managing and operating millions of constrained servers. If you are a cloud innovator, then you are likely to want to look into SaaS and PaaS management solutions for micro-servers. If you are a service provider, then this is the type of solution you want the capabilities to manage at scale and integrate with easily. If you are a security expert, then you should start to think about micro-firewalls, anti-micro-viruses, etc. If you are a business manager, then you should think about how new "mega micro-revenue" streams can be obtained, or how disruptive "micro-innovations" can give you a competitive advantage. If you are an analyst or consultant, then you can start predicting the next IT revolution and the billions the market will be worth in 2020.
The next steps…

It is still early days, but expect some major announcements around micro-servers in the coming months.

Reference: The next IT revolution: micro-servers and local cloud from our JCG partner Maarten Ectors at the Telruptive blog.