
Spring Boot: Fast MVC start

I was planning to write an article about Spring Boot more than a year ago. Finally I have the time and inspiration for it, so prepare yourself for 10-15 minutes of a high-quality Spring tutorial. I'm going to demonstrate Spring Boot basics with Gradle and an embedded Tomcat. I use IntelliJ IDEA instead of Eclipse, but this shouldn't be a problem for those of you who are used to Eclipse.

Introduction to Spring Boot

What's my goal? I want to develop something very similar to one of my previous tutorials about Spring and Java configurations. It's a good exercise to compare two different approaches to Spring development. No doubt most of you know the main aim of Spring Boot; for the rest of the readers I want to say that Spring Boot makes developers happier because it takes care of configuration while developers can focus on producing code. For more details, read the official reference.

Gradle build file

For managing dependencies and building the project I use Gradle. Here is how the build.gradle file looks:

buildscript {
    repositories {
        // Required repos
        mavenCentral()
        maven { url "http://repo.spring.io/snapshot" }
        maven { url "http://repo.spring.io/milestone" }
    }
    dependencies {
        // Required dependency for the spring-boot plugin
        classpath 'org.springframework.boot:spring-boot-gradle-plugin:1.1.2.BUILD-SNAPSHOT'
    }
}

apply plugin: 'java'
apply plugin: 'war'
apply plugin: 'spring-boot'

war {
    baseName = 'companies'
    version = '0.1'
}

repositories {
    mavenCentral()
    maven { url "http://repo.spring.io/snapshot" }
    maven { url "http://repo.spring.io/milestone" }
}

dependencies {
    compile 'org.springframework.boot:spring-boot-starter-web'
    // Required dependency for JSP
    providedRuntime 'org.apache.tomcat.embed:tomcat-embed-jasper'
}

If you are new to Gradle, I recommend you read about it somewhere else, e.g. on the official site. It's a really nice and practical tool: it can do everything Maven does, but without the XML!

Spring Boot initialization

Now we can set up Spring Boot at the Java code level.
package com.companies;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;

@Configuration
@ComponentScan
@EnableAutoConfiguration
public class CompanyApplication {

    public static void main(String[] args) {
        SpringApplication.run(CompanyApplication.class, args);
    }
}

That's it; now you can start developing your business logic. Just kidding, we still need to add some extra configuration related to view resolving.

package com.companies.config;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.DefaultServletHandlerConfigurer;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;
import org.springframework.web.servlet.view.InternalResourceViewResolver;

@Configuration
@EnableWebMvc
public class WebMvcConfig extends WebMvcConfigurerAdapter {

    @Override
    public void configureDefaultServletHandling(DefaultServletHandlerConfigurer configurer) {
        configurer.enable();
    }

    @Bean
    public InternalResourceViewResolver viewResolver() {
        InternalResourceViewResolver resolver = new InternalResourceViewResolver();
        resolver.setPrefix("WEB-INF/pages/");
        resolver.setSuffix(".jsp");
        return resolver;
    }
}

After you have created the class shown above, you can go ahead with controller development.
Controller & View

package com.companies.controller;

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.servlet.ModelAndView;

@Controller
public class HelloWorldController {

    @RequestMapping(value = "/hello", method = RequestMethod.GET)
    public ModelAndView hello() {
        ModelAndView mav = new ModelAndView();
        mav.setViewName("hello");
        String str = "Hello World!";
        mav.addObject("message", str);
        return mav;
    }
}

And the corresponding view, hello.jsp, for the controller:

<html>
<head>
    <title>Hello world page</title>
</head>
<body>
    <h1>${message}</h1>
</body>
</html>

I hope it wasn't hard to repeat all these steps.

Run Spring Boot application

The last thing we have to do in this tutorial is launch the application. Since I use Gradle and specified in the build.gradle file that the application needs to be packaged as a WAR file, I need to run the build and then run the WAR file (the original post shows how this looks in IDEA). You can see the result at localhost:8080/hello

Reference: Spring Boot: Fast MVC start from our JCG partner Alexey Zvolinskiy at the Fruzenshtein's notes blog....

The hi/lo algorithm

Introduction

In my previous post I talked about the various database identifier strategies you need to be aware of when designing the database model. We concluded that database sequences are very convenient, because they are both flexible and efficient for most use cases. But even with cached sequences, the application requires a database round-trip for every new sequence value. If your application demands a high number of insert operations per transaction, the sequence allocation may be optimized with a hi/lo algorithm.

The hi/lo algorithm

The hi/lo algorithm splits the sequence domain into "hi" groups. A "hi" value is assigned synchronously, and every "hi" group is given a maximum number of "lo" entries that can be assigned off-line without worrying about concurrent duplicate entries. The "hi" token is assigned by the database, and two concurrent calls are guaranteed to see unique consecutive values. Once a "hi" token is retrieved, we only need the "incrementSize" (the number of "lo" entries). The original post shows the ranges as formula images, which are not reproduced here; in plain notation, the identifier range for a given "hi" is [(hi - 1) * incrementSize + 1, hi * incrementSize], the "lo" value is taken from [0, incrementSize), and the first identifier of a group is therefore (hi - 1) * incrementSize + 1. When all "lo" values are used, a new "hi" value is fetched and the cycle continues. The original post also includes a diagram of two concurrent transactions, each one inserting multiple entities.

Testing the theory

If we have the following entity:

@Entity
public class Hilo {

    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "hilo_sequence_generator")
    @GenericGenerator(
        name = "hilo_sequence_generator",
        strategy = "org.hibernate.id.enhanced.SequenceStyleGenerator",
        parameters = {
            @Parameter(name = "sequence_name", value = "hilo_sequence"),
            @Parameter(name = "initial_value", value = "1"),
            @Parameter(name = "increment_size", value = "3"),
            @Parameter(name = "optimizer", value = "hilo")
        })
    @Id
    private Long id;
}

We can check how many database sequence round-trips are issued when inserting multiple entities:

@Test
public void testHiloIdentifierGenerator() {
    doInTransaction(new TransactionCallable<Void>() {
        @Override
        public Void execute(Session session) {
            for (int i = 0; i < 8; i++) {
                Hilo hilo = new Hilo();
                session.persist(hilo);
                session.flush();
            }
            return null;
        }
    });
}

This ends up generating the following SQL queries:

Query:{[call next value for hilo_sequence][]}
Query:{[insert into Hilo (id) values (?)][1]}
Query:{[insert into Hilo (id) values (?)][2]}
Query:{[insert into Hilo (id) values (?)][3]}
Query:{[call next value for hilo_sequence][]}
Query:{[insert into Hilo (id) values (?)][4]}
Query:{[insert into Hilo (id) values (?)][5]}
Query:{[insert into Hilo (id) values (?)][6]}
Query:{[call next value for hilo_sequence][]}
Query:{[insert into Hilo (id) values (?)][7]}
Query:{[insert into Hilo (id) values (?)][8]}

As you can see, we have only 3 sequence calls for 8 inserted entities. The more entities a transaction inserts, the greater the performance gain we obtain from reducing the database sequence round-trips.

Reference: The hi/lo algorithm from our JCG partner Vlad Mihalcea at the Vlad Mihalcea's Blog blog....
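The allocation logic described above can be sketched in plain Java. This is a toy illustration of the arithmetic, not Hibernate's actual optimizer code: the class name, the AtomicLong standing in for the database sequence, and the synchronization strategy are all assumptions made for the example.

```java
import java.util.concurrent.atomic.AtomicLong;

public class HiLoGenerator {

    private final int incrementSize;
    private final AtomicLong sequence = new AtomicLong(0); // stands in for the DB sequence
    private long hi;
    private int lo;

    public HiLoGenerator(int incrementSize) {
        this.incrementSize = incrementSize;
        this.lo = incrementSize; // force a "hi" fetch on the first call
    }

    // One simulated database round-trip per "hi" group.
    private long fetchHi() {
        return sequence.incrementAndGet();
    }

    public synchronized long nextId() {
        if (lo == incrementSize) { // current "hi" group is exhausted
            hi = fetchHi();
            lo = 0;
        }
        // first id of the group is (hi - 1) * incrementSize + 1
        return (hi - 1) * incrementSize + (lo++) + 1;
    }
}
```

With incrementSize = 3 the sketch reproduces the behavior of the test above: identifiers 1 through 8 are handed out while fetchHi() runs only three times.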

I Don’t Think That Software Development Word Means What You Think It Means

There are several terms used inappropriately or incorrectly in software development. In this post, I look at some of these terms and the negative consequences of their misuse.

“agile”

The Agile Manifesto started a movement that resonated with many software developers frustrated with the inefficiencies and inadequacies of prevalent software development methodologies. Unfortunately, the relatively simple concepts of the Agile Manifesto were interpreted, changed, evangelized, commercialized, and sold in so many different ways that it became difficult to uniquely describe agile. To some, agile became synonymous with “no documentation” and to others agile meant going straight to coding without any process. So many disparate methodologies and practices are now sold as agile that it’s become increasingly difficult to describe what makes something agile or not. There have been several negative consequences of the multiple interpretations of what agile means. Implementing so-called agile practices without an understanding of agile can lead to failures that are blamed on agile, even though those failures may have little or nothing to do with agile. Unrealistic expectations of agile and what it can do for development can lead to inevitable disappointment, as there still is no silver bullet. It is difficult to help new developers, developers new to agile, managers, customers, and other stakeholders understand what agile is and how it may or may not be appropriate for them with so many different interpretations. I was at a presentation by an agile enthusiast several years ago when he suggested that agile was anything that was successful and was not anything that is not successful. For me, “agile” means processes and approaches that closely match the values outlined in the Agile Manifesto (individuals and interactions, working software, customer collaboration, and responding to change).
There are other approaches and methodologies out there that may be useful and positive, but if they aren’t inspired by these values, it is difficult for me to hear them called “agile.”

“REST”

Roy Fielding‘s dissertation Architectural Styles and the Design of Network-based Software Architectures popularized the term Representational State Transfer (REST). Unfortunately, many have used REST and HTTP interchangeably and in the process have muddled the conversations about both the REST architectural style and the Hypertext Transfer Protocol (HTTP). It is easy to see from a historical perspective why REST and HTTP are often treated interchangeably. REST embraced the functionality already provided by HTTP as a significant part of its architectural style, at a time when other popular architectural styles and frameworks were doing everything they could to hide or abstract away HTTP-specific details. REST leverages HTTP’s stateless nature while others were trying to wrap HTTP with state. Although REST certainly played a major role in raising awareness of HTTP, REST is more than HTTP. I have found that many who think of REST and HTTP as one and the same don’t appreciate the HATEOAS concept in REST. HATEOAS stands for Hypermedia as the Engine of Application State and refers to the concept of application state being embodied within the hypermedia exchanged between server and client rather than in the client.

“refactoring”

I’ve known of clients and managers filled with dread when they hear a developer state that he or she is going to “refactor” something. The reason is that “refactoring” too often means the developer plans to change the code structure and “improve” or “fix” behavior as part of this. Refactoring is supposed to consist of code improvements that do not affect the results of the software but lead to more maintainable code. Too many developers are lured into making other changes “while they are in the code” that change results.
Even when these changes are for the better, they are not in the spirit of refactoring, and when they have broken existing functionality they have led to “refactoring” being seen in a bad light. Comprehensive unit tests and other tests can help ensure that refactoring does not change any expected behavior, but developers should also clearly understand whether the goal is to maintain current functionality with improved code structure (refactoring) or to actually change/improve functionality, and only use the term “refactoring” when appropriate to avoid confusion.

“premature optimization”

I generally agree with the principle behind the now famous quotation, “Premature optimization is the root of all evil.” However, my interpretation of this is that one should not write less maintainable or less readable code in an attempt to achieve small expected performance gains. As I posted in When Premature Optimization Isn’t, this term occasionally gets used as justification for not making good architecture and high-level design decisions just because they have a performance benefit associated with them. Some architectural decisions are difficult to change at a later point, and performance does need to be accounted for. Similarly, even at the implementation level, there are times when better-performing code is as readable and as easy to write as worse-performing code, and so there is no good reason not to write the better-performing code.

NoSQL

The term NoSQL was an unfortunate one for a class of databases that probably would have been better labeled “not relational.” As numerous “NoSQL databases” have adopted SQL (without adopting the relational model), alternative terms have been tried, such as “Not Just SQL.”

“open source”

The term “open source” has often led to confusion about whether the software in question is “free” in terms of “freedom” (libre/free speech) and/or “free” in terms of no monetary price (gratuit/free beer).
There can even be confusion about the minor differences between “open source” and “free software.” For me, “open source” means source code that I can look at and modify as necessary.

JavaScript

With JavaScript’s increased popularity, its poorly chosen name does not seem to confuse as many people as it used to. However, I still do occasionally hear from people who think that JavaScript must have some relationship to Java because “Java” is in both languages’ names.

SLOC

I generally despise everything about the idea of source lines of code (SLOC). The appeal of SLOC is the pretense that somehow lines of code can be counted the same as beans and widgets. All lines of code are not created equal, and there are differences in lines of code across different languages, across different developers, and across different functionality. Some have even gone so far as to think that more SLOC is always a good thing, whereas I’ve found that more concise code with fewer lines of code can often be preferable. I have blogged before on lines of code and unintended consequences.

SOAP

This one is not a big deal in terms of negative consequences from its misuse, but it is worth noting that SOAP no longer stands for Simple Object Access Protocol.

JDBC

This is another one that doesn’t really lead to any problems, even though it technically has never stood for Java Database Connectivity and is not even an acronym. The fact that it does indeed relate to connecting to databases, that it is Java-related, and that it is so widely said to stand for Java Database Connectivity means that this misuse of the term JDBC has no significant negative side effects. In fact, I suspect that the Sun Microsystems folks intentionally wanted people to think of it as an acronym for Java Database Connectivity, while explicitly stating that it was not an acronym, because it allowed people to quickly understand what JDBC is via their awareness of ODBC.
Conclusion The incorrect use of many of the terms discussed in this post could be described as largely pedantic, but misuse of a few of them can lead to miscommunication and general confusion. In some cases (such as “agile” and “refactoring”), the misuse of terms has led to negative experiences and soiled reputations for those terms. In other cases (such as using JDBC and SOAP as acronyms when they really are not acronyms), the confusion seems small and harmless as everyone discussing the falsely advertised “acronym” seems to understand what it implies.Reference: I Don’t Think That Software Development Word Means What You Think It Means from our JCG partner Dustin Marx at the Inspired by Actual Events blog....

A little bit on the JVM and JIT

As you might be aware, the JVM (Java Virtual Machine) is what makes it possible for Java to adhere to the write-once-run-anywhere paradigm. At its core, the JVM consists of the following components:

- Heap
- Stack
- PermGen and method area
- JIT compiler
- Code cache

The heap is where memory is allocated for every new operator you use during the application code development stage. The stack stores the local variables that you assign within the scope of a method. One thing to note is that variables defined within the scope of a method are removed after the completion of the method. If, for example, a String is assigned within the scope of a method and its scope is guaranteed to be local, it will be stored on the stack, whereas it would otherwise be allocated on the heap. The PermGen space stores class- and method-level data as well as the static variables defined in your application. The method area is actually an area within the PermGen space that stores all method, field and constant pool level details of your application. The JIT compiler and the code cache go hand in hand. At its core, the JVM interprets Java byte code into assembly code at runtime. Interpreting can be a slow process because the code needs to be converted from byte code to machine code at runtime every time a portion of your application code is executed. This is where the JIT compiler comes into action, with its super awesome compilation of methods, which it then stores in the code cache. The JIT compiler analyzes the application code at runtime to understand which methods can be categorized as hot methods; "hot" in this context means code fragments that are accessed more frequently. At a very high level, what the JIT compiler does is keep a counter for each method executed in order to understand the frequency of its usage.
When the counter reaches a defined threshold value, the method becomes eligible to be compiled by the JIT compiler to its respective assembly code, which is then stored within the code cache. From then on, whenever the JIT compiler comes across calls to those methods which were compiled and stored within the code cache, it will not try to interpret them yet again but will use the already compiled assembly code available within the code cache. This gives your application a performance boost, because using the compiled code is much faster than interpreting it during runtime. When talking about the JIT compiler, there are mainly two flavors of it, which we are mostly oblivious to due to the lack of documentation around them. The two types are:

- Client
- Server

The default compiler used will differ according to the machine architecture and the JVM version (32-bit or 64-bit) that you are running on. Let us briefly see what each one does. The client compiler starts compiling your byte code to assembly code at application startup time. What this indirectly means is that your application will have a much improved startup time. But the main disadvantage this brings along with it is that your code cache will run out of memory faster. Most optimizations can be made only after your application has run for a brief period of time, but since the client compiler already took up the code cache space, you will not have space to store the assembly code for these optimizations. This is where the server compiler excels. Unlike the client compiler, the server compiler will not start compiling at the start of your application. It will allow the application code to run for some time (which is often referred to as the warm-up period), after which it will start compiling the byte code to assembly code, which it will then store within the code cache.
In my next post I will discuss how we can actually mix and match client and server compilation, and also introduce a few more JVM flags that we seldom come across but that are vital for increasing the performance of your application.

Reference: A little bit on the JVM and JIT from our JCG partner Dinuka Arseculeratne at the My Journey Through IT blog....
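To watch the hot-method detection described above in action, you can run a small program under the (real) HotSpot flag -XX:+PrintCompilation and see methods appear in the compilation log once their invocation counters cross the threshold. The class and method names below are invented for the demonstration:

```java
public class HotMethodDemo {

    // A tiny method that will be invoked often enough to be flagged "hot".
    static long square(long x) {
        return x * x;
    }

    public static void main(String[] args) {
        // Run with: java -XX:+PrintCompilation HotMethodDemo
        // After enough iterations, square(...) should show up in the JIT log,
        // meaning its compiled code now lives in the code cache.
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += square(i);
        }
        System.out.println(sum);
    }
}
```

The exact thresholds and log format depend on the JVM version and on whether the client or server compiler is in use.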

Using Markdown syntax in Javadoc comments

In this post we will see how we can write Javadoc comments using Markdown instead of the typical Javadoc syntax. So what is Markdown?

Markdown is a plain text formatting syntax designed so that it optionally can be converted to HTML using a tool by the same name. Markdown is popularly used to format readme files, for writing messages in online discussion forums or in text editors for the quick creation of rich text documents. (Wikipedia: Markdown)

Markdown is a very easy-to-read formatting syntax. Different variations of Markdown can be used on Stack Overflow or GitHub to format user generated content.

Setup

By default the Javadoc tool uses Javadoc comments to generate API documentation in HTML form. This process can be customized using doclets. Doclets are Java programs that specify the content and format of the output of the Javadoc tool. The markdown-doclet is a replacement for the standard Java doclet which gives developers the option to use Markdown syntax in their Javadoc comments. We can set up this doclet in Maven using the maven-javadoc-plugin:

<build>
  <plugins>
    <plugin>
      <artifactId>maven-javadoc-plugin</artifactId>
      <version>2.9</version>
      <configuration>
        <doclet>ch.raffael.doclets.pegdown.PegdownDoclet</doclet>
        <docletArtifact>
          <groupId>ch.raffael.pegdown-doclet</groupId>
          <artifactId>pegdown-doclet</artifactId>
          <version>1.1</version>
        </docletArtifact>
        <useStandardDocletOptions>true</useStandardDocletOptions>
      </configuration>
    </plugin>
  </plugins>
</build>

Writing comments in Markdown

Now we can use Markdown syntax in Javadoc comments:

/**
 * ## Large headline
 * ### Smaller headline
 *
 * This is a comment that contains `code` parts.
 *
 * Code blocks:
 *
 * ```java
 * int foo = 42;
 * System.out.println(foo);
 * ```
 *
 * Quote blocks:
 *
 * > This is a block quote
 *
 * lists:
 *
 *  - first item
 *  - second item
 *  - third item
 *
 * This is a text that contains an [external link][link].
 *
 * [link]: http://external-link.com/
 *
 * @param id the user id
 * @return the user object with the passed `id` or `null` if no user with this `id` is found
 */
public User findUser(long id) {
  ...
}

After running mvn javadoc:javadoc we can find the generated HTML API documentation in target/site/apidocs. In the generated documentation for the method shown above (shown as a screenshot in the original post), the Javadoc comments are nicely converted to HTML.

Conclusion

Markdown has the clear advantage over standard Javadoc syntax that the source is far easier to read. Just have a look at some of the method comments of java.util.Map: many Javadoc comments are full of formatting tags and are barely readable without any tool. But be aware that Markdown can cause problems with tools and IDEs that expect standard Javadoc syntax.

Reference: Using Markdown syntax in Javadoc comments from our JCG partner Michael Scharhag at the mscharhag, Programming and Stuff blog....
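For comparison, here is roughly the same comment written with standard Javadoc HTML tags. The class and stub method below are invented for illustration, but the contrast shows why the Markdown source above is easier to read:

```java
public class UserRepository {

    public static class User {
        public final long id;
        public User(long id) { this.id = id; }
    }

    /**
     * <h2>Large headline</h2>
     * <h3>Smaller headline</h3>
     *
     * <p>This is a comment that contains <code>code</code> parts.</p>
     *
     * <p>Code blocks:</p>
     * <pre>{@code
     * int foo = 42;
     * System.out.println(foo);
     * }</pre>
     *
     * <p>This is a text that contains an
     * <a href="http://external-link.com/">external link</a>.</p>
     *
     * @param id the user id
     * @return the user object with the passed {@code id} or {@code null}
     *         if no user with this {@code id} is found
     */
    public User findUser(long id) {
        // stub: pretend users with positive ids exist
        return id > 0 ? new User(id) : null;
    }
}
```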

NoSQL – A Quick Guide

NoSQL is a buzzword nowadays among developers and software professionals.

1. What is NoSQL?

NoSQL, also called "Not Only SQL", is an approach to data management and database design that's useful for very large sets of distributed data.

2. Where to use NoSQL?

Use NoSQL when a project has unstructured big data that requires real-time or offline analysis, or for web/mobile applications, e.g. a social network app or an analytics app.

3. Advantages and disadvantages of NoSQL databases

Advantages of NoSQL:
- Elastic scaling
- Big data
- Economics
- Flexible data models

Disadvantages of NoSQL:
- Maturity
- Support
- Analytics and business intelligence
- Administration
- Expertise

4. Categories of NoSQL
- Column
- Document
- Key-value
- Graph

5. How many NoSQL databases are available in the market?

More than 110 different (open source and proprietary) NoSQL databases are available in the market.

6. If all NoSQL databases fall under the above categories, then what is the purpose of having so many NoSQL databases?

Every NoSQL database has some special features and functionality which make it different. Based on the project requirements, one can choose a suitable NoSQL database.

7. Can I use multiple NoSQL databases in my project / application?

Yes.

8. List of popular NoSQL databases with usage

Redis: For rapidly changing data (which should fit mostly in memory), e.g. to store real-time stock prices, analytics, leaderboards and communication; also a replacement for memcached.

MongoDB: When you need dynamic queries, defined indexes, map/reduce and good performance on a big DB, i.e. for most things that you would do with MySQL but where having predefined columns really holds you back.

Cassandra: When you need to store data so huge that it doesn't fit on one server, but still want a friendly, familiar interface to it, and you don't need real-time analysis or similar operations, e.g. web analytics, transaction logging, data collection from huge sensor arrays.

Riak: If you need very good single-site scalability, availability and fault-tolerance, and you're ready to pay for multi-site replication, e.g. point-of-sale data collection, factory control systems, places where even seconds of downtime hurt. Could also be used as a well-updateable web server.

CouchDB: For accumulating, occasionally changing data on which pre-defined queries are to be run, and places where versioning is important, e.g. CRM and CMS systems. Master-master replication is an especially interesting feature, allowing easy multi-site deployments.

HBase: Hadoop is probably still the best way to run Map/Reduce jobs on huge datasets; best if you use the Hadoop/HDFS stack already, e.g. search engines, analysing log data, any place where scanning huge, two-dimensional, join-less tables is a requirement.

Accumulo: If you need to restrict access at the cell level. Otherwise the same use cases as HBase, since it's basically a replacement: search engines.

Hypertable: If you need a better HBase. The same use cases as HBase, since it's basically a replacement: search engines.

Neo4j: For graph-style, rich or complex, interconnected data; Neo4j is quite different from the others in this sense, e.g. for searching routes in social relations, public transport links, road maps, or network topologies.

ElasticSearch: When you have objects with (flexible) fields and you need "advanced search" functionality, e.g. a dating service that handles age difference, geographic location, tastes and dislikes, etc., or a leaderboard system that depends on many variables. You can replace your Solr with ElasticSearch.

Couchbase: Any application where low-latency data access, high concurrency support and high availability are requirements, e.g. low-latency use cases like ad targeting or highly concurrent web apps like online gaming (e.g. Zynga).

Reference: NoSQL – A Quick Guide from our JCG partner Ketan Parmar at the KP Bird blog....
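The four categories from question 4 can be illustrated with plain Java collections. This is a toy sketch: the data and names are invented only to show the shape of each data model, which real stores extend with distribution, persistence and query features.

```java
import java.util.*;

public class NoSqlModels {

    // Key-value (Redis-style): an opaque value looked up by key.
    static Map<String, String> keyValue() {
        Map<String, String> kv = new HashMap<>();
        kv.put("session:42", "{\"user\":\"alice\"}");
        return kv;
    }

    // Document (MongoDB/CouchDB-style): nested, schema-less records.
    static Map<String, Object> document() {
        Map<String, Object> doc = new HashMap<>();
        doc.put("_id", "u1");
        doc.put("name", "Alice");
        doc.put("tags", Arrays.asList("admin", "beta"));
        return doc;
    }

    // Column family (Cassandra/HBase-style): row key -> (column -> value).
    static Map<String, Map<String, String>> columnFamily() {
        Map<String, Map<String, String>> rows = new HashMap<>();
        Map<String, String> row = new HashMap<>();
        row.put("name", "Alice");
        row.put("city", "Oslo");
        rows.put("row1", row);
        return rows;
    }

    // Graph (Neo4j-style): nodes plus labelled edges, here a "follows" relation.
    static Map<String, List<String>> graph() {
        Map<String, List<String>> follows = new HashMap<>();
        follows.put("alice", Arrays.asList("bob", "carol"));
        return follows;
    }

    public static void main(String[] args) {
        System.out.println(keyValue().get("session:42"));
        System.out.println(columnFamily().get("row1").get("city"));
    }
}
```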

So You Want to Use A Recruiter Part I – Recruit Your Recruiter

This is the first in a three-part series to inform job seekers about working with a recruiter. Part II is “Establishing Boundaries” and Part III is “Warnings”.

This week I read an unusually high number of articles (and the comments!) about recruiting. Although most of the discussion quickly turns to harsh criticism, there are always a few people wondering about the best ways to find a decent recruiter to work with and what to do once they have established contact. Some recruiter demand stems from candidates looking to relocate to areas where they have no network, while others just want to maximize their options and feel they may benefit from the services provided by an agency recruiter. Regardless of your reasons for seeking out an agency recruiter, first you have to find one.

Finding a Recruiter

There are three reliable methods of getting introduced to a recruiter.

Referral

This is the best method for most, as being introduced by a contact can have unexpected benefits. To maximize those benefits, you must consider the source of your referral. If the recruiter has a great deal of respect for the person introducing you, you are likely to be given some immediate credibility and favorable treatment due to that association. Unfortunately, the opposite is also true: if you are referred by someone the recruiter does not respect, it may be assumed that you are not a strong talent. When asking for recruiter referrals, it is wise to start with the most talented people in your network. Your network does not have to be the only source of referrals, particularly if you are looking for a recruiter in an area where you have no network. User group and meetup leaders are frequently contacted by recruiters, and one should expect group leaders to be knowledgeable about the local market. Even a random email to an engineer in another location could result in a solid lead.
Let the recruiter find you

If I had a nickel for every time I heard technologists complain that they aren’t hearing from enough recruiters, I’d be poor, though some voice frustration that they don’t hear from the ‘right ones’. Increasing your visibility will attract recruiters who may or may not be the ones you’d want, but it helps establish a pool from which to choose who is worthy of a response. To maximize your chances of being found and contacted, you need to consider how recruiters will find you. The obvious place is LinkedIn, and spending a few minutes fixing up your profile will help. Keywords, SEO concepts, and profile ‘completeness’ should be your focus (further reading on this). Recruiters are likely to be searching for combinations of keywords from their requirements, usually with some advanced search filters based on location, education, or experience. Completeness matters. Some recruiters search Twitter and the other standard social sites as well. If you have a profile anywhere, just assume that a recruiter might find it, and optimize keywords similarly. Keep in mind how easy or difficult it is for people to contact you once you’ve been found. Just because I see your LinkedIn profile or Google+ account doesn’t mean I can contact you. Many professionals create an email address (maybe currentemail-jobs@domain) strictly for recruiter correspondence and include it in their LinkedIn profile and other social pages. Another option is to get discovered on job search sites like Indeed, Monster, and Dice. These are frequented by active job seekers, and some recruiters may view your posting there as a somewhat negative signal. Be warned that posting personal information on these sites means that those phone numbers and email addresses will live forever in the databases of recruiters everywhere.

PROTIP: Those that complain about recruiters often cite laziness in the initial contact.
This may be evidenced by an obvious cut-and-paste or clear signs that the recruiter didn’t read the bio. If you want to screen out recruiters that don’t do the work, put up a barrier to weed out the lazy. This page that uses scripts in Python and Haskell to hide an email address is perhaps my favorite, but there are other, less clever ways if you want to set the bar lower than the ability to cut/paste code into a compiler.

Search

Recruiters search for you, and you can search for them. Most recruiters are going to be easiest to find on LinkedIn due to the amount of time they spend there.

1. Click on Advanced at the top of the main LinkedIn screen (just to the right of the search bar).

2. On the upper left side of your screen you will see several fields. Make sure you are doing a People search (and not a Jobs search).

3. Type ‘Recruiter’ and other terms specific to you in the Keywords field. Try ‘developer’ or ‘programmer’ and a term that a recruiter might use to brand you, such as a language. Recruiters often populate their LinkedIn profiles with the technologies they seek, not unlike job seekers trying to catch the automated eye of a résumé scanning system.

4. Enter the zip code of the area where you wish to find work and consider setting a mile limit. Some recruiters work nationally, but local knowledge goes a long way if you are seeking to work in one area. Once you start entering the code, a menu appears. Depending on where you live, you may want to select 25 or 50 miles (probably good for the northeast or mid-Atlantic US), or up to 100 miles (for the midwest).

5. On the right, make sure you have 3rd + Everyone Else checked under Relationship. This will maximize your results, particularly if your LinkedIn network is small.

6. Click Search. Repeat, and vary the words you used in Step 3. You should see a few different faces as you adjust the keywords, and you’ll also see whether you have connections in common with those in your search results.

Twitter is another decent option.
Make sure you are searching People (and not Everything), and pair up the word recruiter with some keywords and/or geographic locations. You'll get numerous hits in most cases, and should only have to do a bit of legwork to find their bios. In addition to being able to find recruiters on social sites, you can use job boards as well. If you search for a Ruby job in New York City, you may quickly find that several of the listings are posted by one or two recruiting companies. Look into those firms to see if they have a specialty practice. Search engines might be a bit less useful and are likely to turn up the same listings found on job boards.

Evaluating a Recruiter

Once you have found a pool of potential recruiters, you need to decide which ones to contact. Most job seekers want a recruiter that can provide quality opportunities, has deep market knowledge, can leverage industry relationships, and will navigate issues in the hiring process. What criteria should we use in the evaluation?

Experience

As in most disciplines, in recruiting there is no substitute for experience. It takes time to develop contacts and to learn how to uncover potential land mines. Extensive education, recruitment certifications, and training programs don't get you a network or prepare you for handling unique situations. Early in my career I know I made many of the mistakes that technologists complain about, and I didn't have a solid network or steady clients for at least five years. At a certain point in your recruiting career you may not have seen it all, but it's rare that you are surprised by an outcome.

Focus and Expertise

Recruiters that have spent little time in your industry may be good for general job search advice or negotiation, but can't provide full value. Look for a consistent track record of years in your field and the geography of your search. Talking to a few generalists will make the specialists stand out.
Relationships/Clients

Since recruiters aren't paid by you (unlike a placement agency), it's important that the firm has client relationships. Most firms do not advertise their client names, which can make it difficult to discover the strength of an agency's opportunities. The job descriptions themselves could be enough to convince you that the agency has attractive clients. Agencies with solid relationships may reach out to past clients and contacts even when they don't have a position that is a clear fit for your background.

Personality fit

Since an agency recruiter is going to be representing you to companies, and even advocating and negotiating on your behalf, it's important that you get along. You don't need to be best friends, but someone who dislikes you is unlikely to fight for your best interests. A ten minute call should give you the insight you need to make the decision. Ask questions about their experience and pay attention to the types of questions they ask you. If they don't dig into your goals and objectives, they probably aren't concerned with finding a good fit for you.

Reference: So You Want to Use A Recruiter Part I – Recruit Your Recruiter from our JCG partner Dave Fecak at the Job Tips For Geeks blog....

Javascript for Java Developers

This post will go over the Javascript language from the point of view of a Java developer, focusing on the differences between the two languages and the frequent pain points. We will go over the following:

- Objects Only, No Classes
- Functions are just Values
- The 'this' Keyword
- Classic vs Prototypal Inheritance
- Constructors vs Constructor Functions
- Closures vs Lambdas
- Encapsulation and Modules
- Block Scope and Hoisting

Why Javascript in the Java World?

A lot of Java frontend development work is done using Java/XML based frameworks like JSF or GWT. The framework developers themselves need to know Javascript, but in principle the application developers don't. However, the reality is that:

- For doing custom component development in, for example, Primefaces (JSF), it's important to know Javascript and jQuery.
- In GWT, integrating at least some third-party Javascript widgets is common and cost effective.

The end result is that Javascript is usually needed to do at least the last 5 to 10% of frontend work, even when using Java frameworks. It's also starting to be used more and more for polyglot enterprise development, alongside Angular for example. The good news is that, besides a few gotchas that we will get into, Javascript is a very learnable language for a Java developer.

Objects Only – No Classes

One of the most surprising things about Javascript is that although it's an object oriented language, there are no classes (although the new Ecmascript 6 version will have them). Take for example this program, which initializes an empty object and sets two properties:

// create an empty object - no class was needed !!
var superhero = {};

superhero.name = 'Superman';
superhero.strength = 100;

Javascript objects are just like a Java HashMap of related properties, where the keys are Strings only.
The following would be the 'equivalent' Java code:

Map<String,Object> superhero = new HashMap<>();

superhero.put("name", "Superman");
superhero.put("strength", 100);

This means that a Javascript object is just a multi-level 'hash map' of key/value pairs, with no class definition needed.

Functions Are Just Values

Functions in Javascript are just values of type Function, it's as simple as that! Take for example:

var flyFunction = function() {
    console.log('Flying like a bird!');
};

superhero.fly = flyFunction;

This creates a function (a value of type Function) and assigns it to a variable flyFunction. A new property named fly is then created in the superhero object, and it can be invoked like this:

// prints 'Flying like a bird!' to the console
superhero.fly();

Java does not have the equivalent of the Javascript Function type, but almost. Take for example the SuperHero class that takes a Power function:

public interface Power {
    void use();
}

public class SuperHero {

    private Power flyPower;

    public void setFly(Power flyPower) {
        this.flyPower = flyPower;
    }

    public void fly() {
        flyPower.use();
    }
}

This is how to pass SuperHero a function in Java 7 and 8:

// Java 7 equivalent
Power flyFunction = new Power() {
    @Override
    public void use() {
        System.out.println("Flying like a bird ...");
    }
};

// Java 8 equivalent
superman.setFly(() -> System.out.println("Flying like a bird ..."));

superman.fly();

So although a Function type does not exist in Java 8, this ends up not preventing a 'Javascript-like' functional programming style. But if we pass functions around, what happens to the meaning of the this keyword?

The 'this' Keyword

What Javascript allows you to do with this is quite surprising compared to the Java world. Let's start with an example:

var superman = {

    heroName: 'Superman',

    sayHello: function() {
        console.log("Hello, I'm " + this.heroName);
    }
};

superman.sayHello();

This program creates an object superman with two properties: a String heroName and a Function named sayHello.
Running this program outputs as expected Hello, I'm Superman. What if we pass the function around? By passing around sayHello, we can easily end up in a context where there is no heroName property:

var failThis = superman.sayHello;

failThis();

Running this snippet would give as output: Hello, I'm undefined. Why does this not work anymore? This is because the variable failThis belongs to the global scope, which contains no member variable named heroName. To solve this, bear in mind that in Javascript the value of the this keyword is completely overridable to be anything that we want!

// overrides 'this' with superman
failThis.call(superman);

The snippet above would print again Hello, I'm Superman. This means that the value of this depends on both the context in which the function is called, and on how the function is called.

Classic vs Prototypal Inheritance

In Javascript there is no class inheritance; instead, objects can inherit directly from other objects. The way this works is that each object has an implicit property that points to a 'parent' object. That property is called __proto__, and the parent object is called the object's prototype, hence the name Prototypal Inheritance. How does the prototype work? When looking up a property, Javascript will first try to find the property in the object itself. If it does not find it, it tries the object's prototype, and so on. For example:

var avengersHero = {
    editor: 'Marvel'
};

var ironMan = {};

ironMan.__proto__ = avengersHero;

console.log('Iron Man is copyrighted by ' + ironMan.editor);

This snippet will output Iron Man is copyrighted by Marvel. As we can see, although the ironMan object is empty, its prototype does contain the property editor, which gets found. How does this compare with Java inheritance? Let's now say that the rights for the Avengers were bought by DC Comics:

avengersHero.editor = 'DC Comics';

If we call ironMan.editor again, we now get Iron Man is copyrighted by DC Comics.
All the existing object instances with the avengersHero prototype now see DC Comics without having to be recreated. This mechanism is very simple and very powerful. Anything that can be done with class inheritance can be done with prototypal inheritance. But what about constructors?

Constructors vs Constructor Functions

In Javascript an attempt was made to make object creation similar to languages like Java. Let's take for example:

function SuperHero(name, strength) {
    this.name = name;
    this.strength = strength;
}

Notice the capitalized name, indicating that it's a constructor function. Let's see how it can be used:

var superman = new SuperHero('Superman', 100);

console.log('Hello, my name is ' + superman.name);

This code snippet outputs Hello, my name is Superman. You might think that this looks just like Java, and that is exactly the point! What this new syntax really does is create a new empty object and then call the constructor function, forcing this to be the newly created object. Why is this syntax not recommended then? Let's say that we want to specify that all super heroes have a sayHello method. This could be done by putting the sayHello function in a common prototype object:

function SuperHero(name, strength) {
    this.name = name;
    this.strength = strength;
}

SuperHero.prototype.sayHello = function() {
    console.log('Hello, my name is ' + this.name);
}

var superman = new SuperHero('Superman', 100);

superman.sayHello();

This would output Hello, my name is Superman. But the syntax SuperHero.prototype.sayHello looks anything but Java-like! The new operator mechanism sort of half looks like Java but at the same time is completely different. Is there a recommended alternative to new?
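The mechanics of new described above can be sketched by hand. This is my own rough illustration, not how engines literally implement it (doNew is a hypothetical helper, and the real new operator also handles the case where a constructor returns an object):

```javascript
function SuperHero(name, strength) {
    this.name = name;
    this.strength = strength;
}

// roughly what 'new SuperHero(...)' does behind the scenes
function doNew(constructorFn) {
    var args = Array.prototype.slice.call(arguments, 1);
    // step 1: create a new empty object linked to the constructor's prototype
    var obj = Object.create(constructorFn.prototype);
    // step 2: run the constructor with 'this' forced to be the new object
    constructorFn.apply(obj, args);
    return obj;
}

var superman = doNew(SuperHero, 'Superman', 100);

console.log(superman.name); // Superman
```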
The recommended way to go is to ignore the Javascript new operator altogether and use Object.create:

var superHeroPrototype = {
    sayHello: function() {
        console.log('Hello, my name is ' + this.name);
    }
};

var superman = Object.create(superHeroPrototype);
superman.name = 'Superman';

Unlike the new operator, one thing that Javascript absolutely got right were Closures.

Closures vs Lambdas

Javascript Closures are not that different from Java anonymous inner classes used in a certain way. Take for example the FlyingHero class:

public interface FlyCommand {
    public void fly();
}

public class FlyingHero {

    private String name;

    public FlyingHero(String name) {
        this.name = name;
    }

    public void fly(FlyCommand flyCommand) {
        flyCommand.fly();
    }
}

We can pass it a fly command like this in Java 8:

String destination = "Mars";

superMan.fly(() -> System.out.println("Flying to " + destination));

The output of this snippet is Flying to Mars. Notice that the FlyCommand lambda had to 'remember' the variable destination, because it needs it for executing the fly method later. This notion of a function that remembers the variables outside its block scope for later use is called a Closure in Javascript. For further details, have a look at the blog post Really Understanding Javascript Closures. What is the main difference between Lambdas and Closures? In Javascript a closure looks like this:

var destination = 'Mars';

var fly = function() {
    console.log('Fly to ' + destination);
}

fly();

The Javascript closure, unlike the Java Lambda, does not have the constraint that the destination variable must be immutable (or effectively immutable since Java 8). This seemingly innocuous difference is actually a 'killer' feature of Javascript closures, because it allows them to be used for creating encapsulated modules.
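The mutability difference can be made concrete with a counter (a small sketch of my own; the equivalent Java lambda would not compile, because the captured variable would not be effectively final):

```javascript
function makeCounter() {
    var count = 0; // freely mutable, unlike a variable captured by a Java lambda
    return function() {
        count = count + 1; // the closure updates its captured variable
        return count;
    };
}

var next = makeCounter();

console.log(next()); // 1
console.log(next()); // 2
```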
Modules and Encapsulation

There are no classes in Javascript and no public/private modifiers, but then again take a look at this:

function createHero(heroName) {

    var name = heroName;

    return {
        fly: function(destination) {
            console.log(name + ' flying to ' + destination);
        }
    };
}

Here a function createHero is being defined, which returns an object that has a function fly. The fly function 'remembers' name when needed. How do Closures relate to Encapsulation? When the createHero function returns, no one else will ever be able to directly access name, except via fly. Let's try this out:

var superman = createHero('SuperMan');

superman.fly('The Moon');

The output of this snippet is SuperMan flying to The Moon. But what happens if we try to access name directly?

console.log('Hero name = ' + superman.name);

The result is Hero name = undefined. The function createHero is said to be a Javascript encapsulated module, with closed 'private' member variables and a 'public' interface returned as an object with functions.

Block Scope and Hoisting

Understanding block scope in Javascript is simple: there is no block scope! Take a look at this example:

function counterLoop() {

    console.log('counter before declaration = ' + i);

    for (var i = 0; i < 3; i++) {
        console.log('counter = ' + i);
    }

    console.log('counter after loop = ' + i);
}

counterLoop();

Coming to this from Java, you might expect:

- error at line 3: 'variable i does not exist'
- values 0, 1, 2 are printed
- error at line 9: 'variable i does not exist'

It turns out that only one of these three things is true, and the output is actually this:

counter before declaration = undefined
counter = 0
counter = 1
counter = 2
counter after loop = 3

Because there is no block scope, the loop variable i is visible for the whole function.
This means:

- line 3 sees the variable declared but not initialized
- line 9 sees i after the loop has terminated

What might be the most puzzling is that line 3 actually sees the variable declared but undefined, instead of throwing i is not defined. This is because the Javascript interpreter first scans the function for a list of variables, and then goes back to interpret the function code lines one by one. The end result is that it's as if the variable i was hoisted to the top, and this is what the Javascript runtime actually 'sees':

function counterLoop() {

    var i; // i is 'seen' as if declared here!

    console.log('counter before declaration = ' + i);

    for (i = 0; i < 3; i++) {
        console.log('counter = ' + i);
    }

    console.log('counter after loop: ' + i);
}

To prevent surprises caused by hoisting and the lack of block scoping, it's a recommended practice to always declare variables at the top of functions. This makes hoisting explicit and visible to the developer, and helps to avoid bugs. The next version of Javascript (Ecmascript 6) will include a new keyword 'let' to allow block scoping.

Conclusion

The Javascript language shares a lot of similarities with Java, but also some huge differences. Some of the differences, like inheritance and constructor functions, are important, but much less so than one would expect for day-to-day programming. Such features are needed mostly by library developers, and not necessarily for day-to-day application programming. This is unlike some of their Java counterparts, which are needed every day. So if you are hesitant to give it a try, don't let some of these features prevent you from going further into the language. One thing is for sure: at least some Javascript is more or less inevitable when doing Java frontend development, so it's really worth giving it a try.

Reference: Javascript for Java Developers from our JCG partner Aleksey Novik at The JHades Blog....

MineCraft and off heap memory

Overview

MineCraft is a really good example of where off heap memory can really help. The key requirements are:

- The bulk of the retained data is a simple data structure (in MineCraft's case it's lots of byte[])
- The usage of off heap memory can be hidden in an abstraction.

The test

I used the following test for starting a MineCraft server from a seed from scratch, which is a particularly expensive operation for the server.

- Preset level-seed=114 in server.properties
- Delete the world* directories
- Start the server with these options to see what the GC is doing: -Xloggc:gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution -Xmx6g
- Connect with one client
- Perform /worldgen village
- Perform /save-all
- Exit.

To analyse the logs I am using jClarity's Censum.

Standard MineCraft

There are two particularly expensive things it does:

- It caches block state in many byte[]s
- It attempts to cache the int[] used for processing, without limit.

A Censum report for the above test looks like this: the high pause times are partly due to having to manage the large objects.

Off heap MineCraft

Two changes were made to address this:

- Use an off heap ByteBuffer for long term caching. Unsafe would be more efficient but not as portable.
- Put a cap on the number of int[] cached.

Note: the problem with the temporary int[] only became visible to me after moving the bulk of the data off heap. Addressing the biggest problem often reveals more quick-fix problems. A Censum report for the same test shows that there is still some premature promotion, i.e. further improvements can be made, but you can see that the application is spending far less time in GC pauses.

Conclusion

Using off heap memory can help you tame your GC pause times, especially if the bulk of your data is in simple data structures which can be easily abstracted. Doing so can also help reveal other simple optimisations you can do to improve the consistency of your performance.
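The off heap caching idea can be sketched with a direct ByteBuffer. This is my own minimal illustration, not MineCraft's actual code: the block data lives outside the Java heap, so the GC never scans or copies it, only the small wrapper object.

```java
import java.nio.ByteBuffer;

public class OffHeapChunkCache {

    private final ByteBuffer store;

    public OffHeapChunkCache(int capacityBytes) {
        // allocateDirect reserves memory outside the heap; the GC tracks only
        // the ByteBuffer wrapper object, not the data it points to
        this.store = ByteBuffer.allocateDirect(capacityBytes);
    }

    public void putBlock(int index, byte value) {
        store.put(index, value); // absolute put, no position bookkeeping
    }

    public byte getBlock(int index) {
        return store.get(index);
    }

    public static void main(String[] args) {
        // e.g. a 16 x 16 x 256 chunk of block ids, held off heap
        OffHeapChunkCache cache = new OffHeapChunkCache(16 * 16 * 256);
        cache.putBlock(0, (byte) 42);
        System.out.println(cache.getBlock(0)); // prints 42
    }
}
```

The abstraction point is visible here: callers use putBlock/getBlock and never see whether the bytes live in a byte[] or off heap.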
Footnote

Many organisations treat performance as an enhancement and optional; however, if you instil a culture where reasonable performance is a requirement, and failing to meet this requirement is a bug, performance issues are more likely to be fixed. cf. https://bugs.mojang.com/browse/MC-56447

The source used

The source used for the test is available here: https://github.com/peter-lawrey/MineOffHeap

The logs produced are available here: https://github.com/peter-lawrey/MineOffHeap/tree/master/logs

Reference: MineCraft and off heap memory from our JCG partner Peter Lawrey at the Vanilla Java blog....

Java Build Tools: Ant vs Maven vs Gradle

In the beginning there was Make as the only build tool available. Later on it was improved with GNU Make. However, since then our needs have increased and, as a result, build tools evolved. The JVM ecosystem is dominated by three build tools:

- Apache Ant with Ivy
- Maven
- Gradle

Ant with Ivy

Ant was the first among "modern" build tools. In many aspects it is similar to Make. It was released in 2000 and in a short period of time became the most popular build tool for Java projects. It has a very low learning curve, allowing anyone to start using it without any special preparation. It is based on the procedural programming idea. After its initial release, it was improved with the ability to accept plug-ins. Its major drawback is XML as the format for writing build scripts. XML, being hierarchical in nature, is not a good fit for the procedural programming approach Ant uses. Another problem with Ant is that its XML tends to become unmanageably big when used with all but very small projects. Later on, as dependency management over the network became a must, Ant adopted Apache Ivy. The main benefit of Ant is its control of the build process.

Maven

Maven was released in 2004. Its goal was to improve upon some of the problems developers were facing when using Ant. Maven continues using XML as the format to write build specifications. However, its structure is diametrically different. While Ant requires developers to write all the commands that lead to the successful execution of some task, Maven relies on conventions and provides the available targets (goals) that can be invoked. As an additional, and probably most important, improvement, Maven introduced the ability to download dependencies over the network (later on adopted by Ant through Ivy). That in itself revolutionized the way we deliver software. However, Maven has its own problems. Dependency management does not handle conflicts between different versions of the same library well (something Ivy is much better at).
XML as the build configuration format is strictly structured and highly standardized. Customization of targets (goals) is hard. Since Maven is focused mostly on dependency management, complex, customized build scripts are actually harder to write in Maven than in Ant. Maven configuration written in XML continues to be big and cumbersome. On bigger projects it can have hundreds of lines of code without actually doing anything "extraordinary". The main benefit of Maven is its life-cycle. As long as the project is based on certain standards, with Maven one can pass through the whole life cycle with relative ease. This comes at the cost of flexibility. In the meantime, interest in DSLs (Domain Specific Languages) continued to increase. The idea is to have languages designed to solve problems belonging to a specific domain. In the case of builds, one of the results of applying DSLs is Gradle.

Gradle

Gradle combines the good parts of both tools and builds on top of them with a DSL and other improvements. It has Ant's power and flexibility with Maven's life-cycle and ease of use. The end result is a tool that was released in 2012 and gained a lot of attention in a short period of time. For example, Google adopted Gradle as the default build tool for the Android OS. Gradle does not use XML. Instead, it has its own DSL based on Groovy (one of the JVM languages). As a result, Gradle build scripts tend to be much shorter and clearer than those written for Ant or Maven. The amount of boilerplate code is much smaller with Gradle since its DSL is designed to solve a specific problem: move software through its life cycle, from compilation through static analysis and testing until packaging and deployment. It uses Apache Ivy for JAR dependencies. The Gradle effort can be summed up as "convention is good and so is flexibility".

Code examples

We'll create build scripts that will compile, perform static analysis, run unit tests and, finally, create JAR files.
We'll do those operations in all three frameworks (Ant, Maven and Gradle) and compare the syntax. By comparing the code for each task we'll be able to get a better understanding of the differences and make an informed decision regarding the choice of the build tool. First things first. If you want to follow the examples from this article yourself, you'll need Ant, Ivy, Maven and Gradle installed. Please follow the installation instructions provided by the makers of those tools. You can choose not to run the examples yourself and skip the installation altogether. The code snippets should be enough to give you a basic idea of how each of the tools works. The code repository https://github.com/vfarcic/JavaBuildTools contains the Java code (two simple classes with corresponding tests), the checkstyle configuration and the Ant, Ivy, Maven and Gradle configuration files. Let's start with Ant and Ivy.

Ant with Ivy

Ivy dependencies need to be specified in the ivy.xml file. Our example is fairly simple and requires only the JUnit and Hamcrest dependencies.

[ivy.xml]

<ivy-module version="2.0">
    <info organisation="org.apache" module="java-build-tools"/>
    <dependencies>
        <dependency org="junit" name="junit" rev="4.11"/>
        <dependency org="org.hamcrest" name="hamcrest-all" rev="1.3"/>
    </dependencies>
</ivy-module>

Now we'll create our Ant build script. Its only task will be to compile a JAR file. The end result is the following build.xml.
[build.xml]

<project xmlns:ivy="antlib:org.apache.ivy.ant" name="java-build-tools" default="jar">

    <property name="src.dir" value="src"/>
    <property name="build.dir" value="build"/>
    <property name="classes.dir" value="${build.dir}/classes"/>
    <property name="jar.dir" value="${build.dir}/jar"/>
    <property name="lib.dir" value="lib" />

    <path id="lib.path.id">
        <fileset dir="${lib.dir}" />
    </path>

    <target name="resolve">
        <ivy:retrieve />
    </target>

    <target name="clean">
        <delete dir="${build.dir}"/>
    </target>

    <target name="compile" depends="resolve">
        <mkdir dir="${classes.dir}"/>
        <javac srcdir="${src.dir}" destdir="${classes.dir}" classpathref="lib.path.id"/>
    </target>

    <target name="jar" depends="compile">
        <mkdir dir="${jar.dir}"/>
        <jar destfile="${jar.dir}/${ant.project.name}.jar" basedir="${classes.dir}"/>
    </target>

</project>

First we specify several properties. From there on, it is one task after another. We use Ivy to resolve dependencies, then clean, compile and, finally, create the JAR file. That is quite a lot of configuration for a task that almost every Java project needs to perform. To run the Ant task that creates the JAR file, execute the following:

ant jar

Let's see how Maven does the same set of tasks.
Maven

[pom.xml]

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">

    <modelVersion>4.0.0</modelVersion>
    <groupId>com.technologyconversations</groupId>
    <artifactId>java-build-tools</artifactId>
    <packaging>jar</packaging>
    <version>1.0</version>

    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.11</version>
        </dependency>
        <dependency>
            <groupId>org.hamcrest</groupId>
            <artifactId>hamcrest-all</artifactId>
            <version>1.3</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>2.3.2</version>
            </plugin>
        </plugins>
    </build>

</project>

To run the Maven goal that creates the JAR file, execute the following:

mvn package

The major difference is that with Maven we don't need to specify what should be done. We're not creating tasks but setting the parameters (what the dependencies are, what plugins to use…). This shows the major difference between Ant and Maven. The latter promotes the usage of conventions and provides goals (targets) out-of-the-box. Both Ant and Maven XML files tend to grow big with time. To illustrate that, we'll add the Maven CheckStyle, FindBugs and PMD plugins that will take care of static analysis. All three are fairly standard tools used, in one form or another, in many Java projects. We want all static analysis to be executed as part of a single target verify, together with unit tests. Moreover, we should specify the path to the custom checkstyle configuration and make sure that the build fails on error.
The additional Maven code is the following:

[pom.xml]

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-checkstyle-plugin</artifactId>
    <version>2.12.1</version>
    <executions>
        <execution>
            <configuration>
                <configLocation>config/checkstyle/checkstyle.xml</configLocation>
                <consoleOutput>true</consoleOutput>
                <failsOnError>true</failsOnError>
            </configuration>
            <goals>
                <goal>check</goal>
            </goals>
        </execution>
    </executions>
</plugin>
<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>findbugs-maven-plugin</artifactId>
    <version>2.5.4</version>
    <executions>
        <execution>
            <goals>
                <goal>check</goal>
            </goals>
        </execution>
    </executions>
</plugin>
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-pmd-plugin</artifactId>
    <version>3.1</version>
    <executions>
        <execution>
            <goals>
                <goal>check</goal>
            </goals>
        </execution>
    </executions>
</plugin>

To run the Maven goal that runs both unit tests and static analysis with CheckStyle, FindBugs and PMD, execute the following:

mvn verify

We had to write a lot of XML for a very basic and commonly used set of tasks. On real projects with a lot more dependencies and tasks, Maven pom.xml files can easily reach hundreds or even thousands of lines of XML. Here's how the same looks in Gradle.

Gradle

[build.gradle]

apply plugin: 'java'
apply plugin: 'checkstyle'
apply plugin: 'findbugs'
apply plugin: 'pmd'

version = '1.0'

repositories {
    mavenCentral()
}

dependencies {
    testCompile group: 'junit', name: 'junit', version: '4.11'
    testCompile group: 'org.hamcrest', name: 'hamcrest-all', version: '1.3'
}

Not only is the Gradle code much shorter and, to those familiar with Gradle, easier to understand than the Maven equivalent, it actually introduces many useful tasks not covered by the Maven code we just wrote. To get the list of all tasks that Gradle can run with the current configuration, execute the following:
gradle tasks --all

Clarity, complexity and the learning curve

For newcomers, Ant is the clearest tool of all. Just by reading the configuration XML one can understand what it does. However, writing Ant tasks easily gets very complex. Maven and, especially, Gradle have a lot of tasks already available out-of-the-box or through plugins. For example, by seeing the following line it is probably not clear to those not initiated into the mysteries of Gradle what tasks will be unlocked for us to use:

[build.gradle]

apply plugin: 'java'

This simple line of code adds 20+ tasks waiting for us to use. Ant's readability and Maven's simplicity are, in my opinion, false arguments that apply only during the short initial Gradle learning curve. Once one is used to the Gradle DSL, its syntax is shorter and easier to understand than that employed by Ant or Maven. Moreover, only Gradle offers both conventions and the creation of commands. While Maven can be extended with Ant tasks, doing so is tedious and not very productive. Gradle with Groovy brings it to the next level. The next article will go deeper into Gradle and explain in more detail its integration with Groovy.

Reference: Java Build Tools: Ant vs Maven vs Gradle from our JCG partner Viktor Farcic at the Technology conversations blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.