Legacy Code To Testable Code #2: Extract Method

This post is part of the “Legacy Code to Testable Code” series. In the series we talk about refactoring steps to take before writing tests for legacy code, and how they make our life easier.

As with renaming, extracting a method helps us understand the code better. If you find it easy to name the extracted method, the extraction makes sense. Otherwise, you have just enclosed code that does many different things. That can still be useful at times, although not as useful as extracting small methods that make sense on their own.

Extracting a method also introduces a seam. The new method can now be mocked, and can therefore affect the code under test. One of the tricks when not using power-tools is wrapping a static method with an instance method. In our Person class, we have the getZipCode method:

public class Person {
    String street;

    public String getZipCode() {
        Directory directory = Directory.getInstance();
        return directory.getZipCodeFromStreet(street);
    }
}

The Directory.getInstance() method is static. If we extract it into a getDirectory method (in the Person class) and make this method accessible, we can now mock it:

public class Person {
    String street;

    public String getZipCode() {
        Directory directory = getDirectory();
        return directory.getZipCodeFromStreet(street);
    }

    protected Directory getDirectory() {
        return Directory.getInstance();
    }
}

While it is now very easy to mock the getDirectory method using Mockito, it was also easy to mock Directory.getInstance() with PowerMockito. So is there an additional reason to introduce a new method? If it is just for the sake of testing, there is no need for the extraction. Sometimes, however, mocking things with power-tools is not easy. Problems in static constructors, for example, may require extra handling on the test side, and it may be easier to wrap the call in a separate method.

There are also times when extracting helps us regardless of the mocking tool. We can use method extraction to simplify the test, even before we have written it. It is simpler and safer to mock one method than three calls.
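To see why the extracted getDirectory method is a seam, here is a minimal, self-contained sketch. The Directory lookup is a hypothetical stand-in for the one in the article; the point is that a test can override the seam in a subclass, with no mocking library required:

```java
// Hypothetical stand-ins for the article's Directory and Person classes
class Directory {
    private static final Directory INSTANCE = new Directory();
    static Directory getInstance() { return INSTANCE; }
    String getZipCodeFromStreet(String street) { return "lookup:" + street; }
}

class Person {
    String street = "Main St";

    public String getZipCode() {
        Directory directory = getDirectory();
        return directory.getZipCodeFromStreet(street);
    }

    // The seam: a test can override this instead of mocking the static call
    protected Directory getDirectory() { return Directory.getInstance(); }
}

public class SeamDemo {
    public static void main(String[] args) {
        Person person = new Person() {
            @Override protected Directory getDirectory() {
                return new Directory() {
                    @Override String getZipCodeFromStreet(String street) {
                        return "12345"; // canned answer, no real lookup
                    }
                };
            }
        };
        System.out.println(person.getZipCode()); // prints 12345
    }
}
```

Mockito's spy/doReturn would achieve the same effect with less ceremony, but the subclass-and-override version shows that the seam itself, not the tool, is what makes the code testable.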
If our getZipCode method looked like this:

public String getZipCode() {
    Address address = new Address();
    address.setStreet(street);
    address.setCountry(country);
    address.setState(state);
    address.setCity(city);
    Directory directory = Directory.getInstance(address);
    return directory.getZipCode();
}

then even with power-tools, faking the Address instance and setting up the rest of the behavior just to retrieve the directory is quite a lot of work, which means a longer test with a longer setup. If we extract a getDirectoryFromAddress method:

public String getZipCode() {
    Directory directory = getDirectoryFromAddress();
    return directory.getZipCode();
}

we get more readable code, and we need to mock only one call.

While extracting has its upside, turning a method into a seam comes with baggage. If the method is private and we use power-tools to mock it, coupling between test and code increases. If we make it public, anyone can call it. If it is protected, a derived class can call it. Changing code for testability is a change of design, for better or worse.

Reference: Legacy Code To Testable Code #2: Extract Method from our JCG partner Gil Zilberfeld at the Geek Out of Water blog.

A Java conversion puzzler, not suitable for work (or interviews)

A really hard interview question would be something like this:

int i = Integer.MAX_VALUE;
i += 0.0f;
int j = i;
System.out.println(j == Integer.MAX_VALUE); // true

Why does this print true? At first glance the answer seems obvious, until you realise that if you change int i to long i, things get weird:

long i = Integer.MAX_VALUE;
i += 0.0f;
int j = (int) i;
System.out.println(j == Integer.MAX_VALUE); // false
System.out.println(j == Integer.MIN_VALUE); // true

What is going on, you might wonder? When did Java become JavaScript? Let me start by explaining why long gives such a strange result.

An important detail about += is that it does an implicit cast. You might think that

a += b;

is the same as

a = a + b;

and basically it is, except with a subtle difference which most of the time doesn't matter:

a = (typeOf(a)) (a + b);

Another subtle feature of addition is that the result has the "wider" of the two operand types. This means that

i += 0.0f;

is actually

i = (long) ((float) i + 0.0f);

When you cast Integer.MAX_VALUE to a float you get a rounding error (as float has only a 24-bit mantissa), resulting in a value one more than what you started with. In other words, it is the same as:

i = Integer.MAX_VALUE + 1; // for long i

When you cast Integer.MAX_VALUE + 1 back to an int, you get an overflow and end up with Integer.MIN_VALUE:

j = Integer.MIN_VALUE;

So why does a long get the unexpected value, while an int happens to get the expected one? The reason is that when a floating-point value is narrowed to an integer type and is out of range, it is clamped to the nearest representable value of that type. Thus:

int k = (int) Float.MAX_VALUE;            // k = Integer.MAX_VALUE
int x = (int) (Integer.MAX_VALUE + 1.0f); // x = Integer.MAX_VALUE

Note: Float.MAX_VALUE / Integer.MAX_VALUE is 1.5845632E29, which is a hell of an error, but the best an int can do.
In short, for an int value of Integer.MAX_VALUE, the statement i += 0.0f; causes the value to jump up by one (when it is widened to float) and then back down by one (when the now out-of-range float is clamped on the cast back to int), so you end up with the value you started with.

Reference: A Java conversion puzzler, not suitable for work (or interviews) from our JCG partner Peter Lawrey at the Vanilla Java blog.
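The whole puzzle can be verified in a few lines; this is a quick sanity check of the desugaring described above:

```java
public class ConversionPuzzler {
    public static void main(String[] args) {
        long i = Integer.MAX_VALUE;
        i += 0.0f;                            // compiled as i = (long) ((float) i + 0.0f)
        System.out.println(i);                // 2147483648: float rounding added one
        System.out.println((int) i == Integer.MIN_VALUE); // true: int narrowing overflows

        long k = Integer.MAX_VALUE;
        k = (long) ((float) k + 0.0f);        // the explicit desugaring
        System.out.println(k == i);           // true: identical result

        // Narrowing an out-of-range float directly to int clamps at the bound instead:
        System.out.println((int) Float.MAX_VALUE == Integer.MAX_VALUE); // true
    }
}
```

Running it confirms both halves of the explanation: the compound assignment silently widens to float (gaining one), and the float-to-int narrowing saturates at Integer.MAX_VALUE while the long-to-int narrowing wraps around.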

Integration testing done right with Embedded MongoDB

Introduction

Unit testing requires isolating individual components from their dependencies. Dependencies are replaced with mocks, which simulate certain use cases. This way, we can validate the in-test component behavior across various external context scenarios. Web components can be unit tested using mock business logic services. Services can be tested against mock data access repositories. But the data access layer is not a good candidate for unit testing, because database statements need to be validated against an actual running database system.

Integration testing database options

Ideally, our tests should run against a production-like database. But using a dedicated database server is not feasible, as we most likely have more than one developer running such integration test-suites. To isolate concurrent test runs, each developer would require a dedicated database catalog. Adding a continuous integration tool makes matters worse, since more tests would have to run in parallel.

Lesson 1: We need a forked, test-suite-bound database

When a test suite runs, a database must be started and made available only to that particular test-suite instance. Basically we have the following options:

- An in-memory embedded database
- A temporarily spawned database process

The fallacy of in-memory database testing

Java offers multiple in-memory relational database options to choose from:

- HSQLDB
- H2
- Apache Derby

Embedding an in-memory database is fast, and each JVM can run its own isolated instance. But we no longer test against the actual production-like database engine, because our integration tests will validate the application behavior against a non-production database system. Using an ORM tool may give the false impression that all databases are equal, especially when all generated SQL code is SQL-92 compliant. What is good for the ORM tool's database portability may deprive you of database-specific querying features (window functions, common table expressions, PIVOT).
So the in-memory integration-testing database might not support such advanced queries. This can lead to reduced code coverage, or push developers to use only the common-yet-limited SQL querying features. Even if your production database engine provides an in-memory variant, there may still be operational differences between the actual and the lightweight database versions.

Lesson 2: In-memory databases may give you the false impression that your code will also run on a production database

Spawning a production-like temporary database

Testing against the actual production database is much more valuable, and that is why I grew to appreciate this alternative. When using MongoDB, we can choose the embedded mongo plugin. This open-source project creates an external database process that can be bound to the current test-suite life-cycle. If you are using Maven, you can take advantage of the embedmongo-maven-plugin:

<plugin>
    <groupId>com.github.joelittlejohn.embedmongo</groupId>
    <artifactId>embedmongo-maven-plugin</artifactId>
    <version>${embedmongo.plugin.version}</version>
    <executions>
        <execution>
            <id>start</id>
            <goals>
                <goal>start</goal>
            </goals>
            <configuration>
                <port>${embedmongo.port}</port>
                <version>${mongo.test.version}</version>
                <databaseDirectory>${project.build.directory}/mongotest</databaseDirectory>
                <bindIp></bindIp>
            </configuration>
        </execution>
        <execution>
            <id>stop</id>
            <goals>
                <goal>stop</goal>
            </goals>
        </execution>
    </executions>
</plugin>

When running the plugin, the following actions are taken:

1. A MongoDB pack is downloaded:

[INFO] --- embedmongo-maven-plugin:0.1.12:start (start) @ mongodb-facts ---
Download Version{2.6.1}:Windows:B64 START
Download Version{2.6.1}:Windows:B64 DownloadSize: 135999092
Download Version{2.6.1}:Windows:B64 0% 1% 2% ... 99% 100%
Download Version{2.6.1}:Windows:B64 downloaded with 3320kb/s
Download Version{2.6.1}:Windows:B64 DONE

2. Upon starting a new test suite, the MongoDB pack is unzipped under a unique location in the OS temp folder:

Extract C:\Users\vlad\.embedmongo\win32\mongodb-win32-x86_64-2008plus-2.6.1.zip START
Extract C:\Users\vlad\.embedmongo\win32\mongodb-win32-x86_64-2008plus-2.6.1.zip DONE

3. The embedded MongoDB instance is started:

[mongod output] note: noprealloc may hurt performance in many applications
[mongod output] 2014-10-09T23:25:16.889+0300 [DataFileSync] warning: --syncdelay 0 is not recommended and can have strange performance
[mongod output] 2014-10-09T23:25:16.891+0300 [initandlisten] MongoDB starting : pid=2384 port=51567 dbpath=D:\wrk\vladmihalcea\vladmihalcea.wordpress.com\mongodb-facts\target\mongotest 64-bit host=VLAD
[mongod output] 2014-10-09T23:25:16.891+0300 [initandlisten] targetMinOS: Windows 7/Windows Server 2008 R2
[mongod output] 2014-10-09T23:25:16.891+0300 [initandlisten] db version v2.6.1
[mongod output] 2014-10-09T23:25:16.891+0300 [initandlisten] git version: 4b95b086d2374bdcfcdf2249272fb552c9c726e8
[mongod output] 2014-10-09T23:25:16.891+0300 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
[mongod output] 2014-10-09T23:25:16.891+0300 [initandlisten] allocator: system
[mongod output] 2014-10-09T23:25:16.891+0300 [initandlisten] options: { net: { bindIp: "", http: { enabled: false }, port: 51567 }, security: { authorization: "disabled" }, storage: { dbPath: "D:\wrk\vladmihalcea\vladmihalcea.wordpress.com\mongodb-facts\target\mongotest", journal: { enabled: false }, preallocDataFiles: false, smallFiles: true, syncPeriodSecs: 0.0 } }
[mongod output] 2014-10-09T23:25:17.179+0300 [FileAllocator] allocating new datafile D:\wrk\vladmihalcea\vladmihalcea.wordpress.com\mongodb-facts\target\mongotest\local.ns, filling with zeroes...
[mongod output] 2014-10-09T23:25:17.179+0300 [FileAllocator] creating directory D:\wrk\vladmihalcea\vladmihalcea.wordpress.com\mongodb-facts\target\mongotest\_tmp
[mongod output] 2014-10-09T23:25:17.240+0300 [FileAllocator] done allocating datafile D:\wrk\vladmihalcea\vladmihalcea.wordpress.com\mongodb-facts\target\mongotest\local.ns, size: 16MB, took 0.059 secs
[mongod output] 2014-10-09T23:25:17.240+0300 [FileAllocator] allocating new datafile D:\wrk\vladmihalcea\vladmihalcea.wordpress.com\mongodb-facts\target\mongotest\local.0, filling with zeroes...
[mongod output] 2014-10-09T23:25:17.262+0300 [FileAllocator] done allocating datafile D:\wrk\vladmihalcea\vladmihalcea.wordpress.com\mongodb-facts\target\mongotest\local.0, size: 16MB, took 0.021 secs
[mongod output] 2014-10-09T23:25:17.262+0300 [initandlisten] build index on: local.startup_log properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }
[mongod output] 2014-10-09T23:25:17.262+0300 [initandlisten] added index to empty collection
[mongod output] 2014-10-09T23:25:17.263+0300 [initandlisten] waiting for connections on port 51567
Oct 09, 2014 11:25:17 PM MongodExecutable start
INFO: de.flapdoodle.embed.mongo.config.MongodConfigBuilder$ImmutableMongodConfig@26b3719c

4. For the lifetime of the current test suite, you can see the embedded-mongo process:

C:\Users\vlad>netstat -ano | findstr 51567
TCP    LISTENING    8500

C:\Users\vlad>TASKLIST /FI "PID eq 8500"
Image Name                 PID      Session Name     Session#    Mem Usage
========================= ======== ================ =========== ============
extract-0eecee01-117b-4d2 8500     RDP-Tcp#0        1           44,532 K

5. When the test suite has finished, the embedded-mongo instance is stopped:

[INFO] --- embedmongo-maven-plugin:0.1.12:stop (stop) @ mongodb-facts ---
2014-10-09T23:25:21.187+0300 [initandlisten] connection accepted from #11 (1 connection now open)
[mongod output] 2014-10-09T23:25:21.189+0300 [conn11] terminating, shutdown command received
[mongod output] 2014-10-09T23:25:21.189+0300 [conn11] dbexit: shutdown called
[mongod output] 2014-10-09T23:25:21.189+0300 [conn11] shutdown: going to close listening sockets...
[mongod output] 2014-10-09T23:25:21.189+0300 [conn11] closing listening socket: 520
[mongod output] 2014-10-09T23:25:21.189+0300 [conn11] shutdown: going to flush diaglog...
[mongod output] 2014-10-09T23:25:21.189+0300 [conn11] shutdown: going to close sockets...
[mongod output] 2014-10-09T23:25:21.190+0300 [conn11] shutdown: waiting for fs preallocator...
[mongod output] 2014-10-09T23:25:21.190+0300 [conn11] shutdown: closing all files...
[mongod output] 2014-10-09T23:25:21.191+0300 [conn11] closeAllFiles() finished
[mongod output] 2014-10-09T23:25:21.191+0300 [conn11] shutdown: removing fs lock...
[mongod output] 2014-10-09T23:25:21.191+0300 [conn11] dbexit: really exiting now
[mongod output] Oct 09, 2014 11:25:21 PM de.flapdoodle.embed.process.runtime.ProcessControl stopOrDestroyProcess

Conclusion

The embedded-mongo plugin is in no way slower than any in-memory relational database system. It makes me wonder why there isn't such an option for open-source RDBMS (e.g. PostgreSQL). This is a great open-source project idea, and maybe Flapdoodle OSS will offer support for relational databases too.

Code available on GitHub.

Reference: Integration testing done right with Embedded MongoDB from our JCG partner Vlad Mihalcea at the Vlad Mihalcea's Blog blog.

Injecting domain objects instead of infrastructure components

Dependency Injection is a widely used software design pattern in Java (and many other programming languages) that is used to achieve Inversion of Control. It promotes reusability, testability and maintainability, and helps build loosely coupled components. Dependency Injection is the de facto standard for wiring Java objects together these days. Various Java frameworks like Spring or Guice can help implement Dependency Injection. Since Java EE 6 there is also an official Java EE API for Dependency Injection available: Contexts and Dependency Injection (CDI).

We use Dependency Injection to inject services, repositories, domain-related components, resources or configuration values. However, in my experience, it is often overlooked that Dependency Injection can also be used to inject domain objects. A typical example of this is the way the currently logged-in user is obtained in many Java applications. Usually we end up asking some component or service for the logged-in user. The code for this might look somehow like the following snippet:

public class SomeComponent {

  @Inject
  private AuthService authService;

  public void workWithUser() {
    User loggedInUser = authService.getLoggedInUser();
    // do something with loggedInUser
  }
}

Here an AuthService instance is injected into SomeComponent. Methods of SomeComponent now use the AuthService object to obtain an instance of the logged-in user. However, instead of injecting AuthService, we could inject the logged-in user directly into SomeComponent. This could look like this:

public class SomeComponent {

  @Inject
  @LoggedInUser
  private User loggedInUser;

  public void workWithUser() {
    // do something with loggedInUser
  }
}

Here the User object is directly injected into SomeComponent and no instance of AuthService is required. The custom annotation @LoggedInUser is used to avoid conflicts if more than one (managed) bean of type User exists.
Both Spring and CDI are capable of this type of injection (and the configuration is actually very similar). In the following section we will see how domain objects can be injected using Spring. After this, I will describe what changes are necessary to do the same with CDI.

Domain object injection with Spring

To inject domain objects as shown in the example above, we only have to take two little steps. First we have to create the @LoggedInUser annotation:

import java.lang.annotation.*;
import org.springframework.beans.factory.annotation.Qualifier;

@Target({ElementType.FIELD, ElementType.PARAMETER, ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
@Qualifier
public @interface LoggedInUser {}

Please note the @Qualifier annotation, which turns @LoggedInUser into a custom qualifier. Qualifiers are used by Spring to avoid conflicts if multiple beans of the same type are available. Next we have to add a bean definition to our Spring configuration. We use Spring's Java configuration here; the same can be done with XML configuration.

@Configuration
public class Application {

  @Bean
  @LoggedInUser
  @Scope(value = WebApplicationContext.SCOPE_SESSION, proxyMode = ScopedProxyMode.TARGET_CLASS)
  public User getLoggedInUser() {
    // retrieve and return user object from server/database/session
  }
}

Inside getLoggedInUser() we have to retrieve and return an instance of the currently logged-in user (e.g. by asking the AuthService from the first snippet). With @Scope we can control the scope of the returned object. The best scope depends on the domain object and might differ among different domain objects. For a User object representing the logged-in user, request or session scope would be valid choices. By annotating getLoggedInUser() with @LoggedInUser, we tell Spring to use this bean definition whenever a bean of type User annotated with @LoggedInUser should be injected.
Now we can inject the logged-in user into other components:

@Component
public class SomeComponent {

  @Autowired
  @LoggedInUser
  private User loggedInUser;

  ...
}

In this simple example the qualifier annotation is actually not necessary. As long as there is only one bean definition of type User available, Spring could inject the logged-in user by type. However, when injecting domain objects it can easily happen that you have multiple bean definitions of the same type. So using an additional qualifier annotation is a good idea. With their descriptive names, qualifiers can also act as documentation (if named properly).

Simplify Spring bean definitions

When injecting many domain objects, there is a chance that you end up repeating the scope and proxy configuration over and over again in your bean configuration. In such a situation it comes in handy that Spring annotations can be used on custom annotations. So we can simply create our own @SessionScopedBean annotation that can be used instead of @Bean and @Scope:

@Target({ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
@Bean
@Scope(value = WebApplicationContext.SCOPE_SESSION, proxyMode = ScopedProxyMode.TARGET_CLASS)
public @interface SessionScopedBean {}

Now we can simplify the bean definition to this:

@Configuration
public class Application {

  @LoggedInUser
  @SessionScopedBean
  public User getLoggedInUser() {
    ...
  }
}

Java EE and CDI

The configuration with CDI is nearly the same. The only difference is that we have to replace the Spring annotations with javax.inject and CDI annotations. So @LoggedInUser should be annotated with javax.inject.Qualifier instead of org.springframework.beans.factory.annotation.Qualifier (see: Using Qualifiers). The Spring bean definition can be replaced with a CDI producer method. Instead of @Scope, the appropriate CDI scope annotation can be used. At the injection point, Spring's @Autowired can be replaced with @Inject.
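The CDI variant just described can be sketched as a producer method. This is a hedged wiring sketch, not code from the original post: the class name UserProducer is invented for illustration, and it assumes the AuthService, User and @LoggedInUser types from the earlier snippets (with @LoggedInUser now meta-annotated with javax.inject.Qualifier, and User serializable so it can live in session scope):

```java
import javax.enterprise.context.SessionScoped;
import javax.enterprise.inject.Produces;
import javax.inject.Inject;

// Hypothetical producer class; the CDI container calls the producer method
// whenever a bean of type User qualified with @LoggedInUser is injected.
public class UserProducer {

  @Inject
  private AuthService authService;

  @Produces
  @LoggedInUser
  @SessionScoped
  public User getLoggedInUser() {
    return authService.getLoggedInUser();
  }
}
```

This plays the same role as the Spring @Bean method above: it is a declarative wiring fragment that only runs inside a CDI container, not standalone code.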
Note that Spring also supports javax.inject annotations. If you add the javax.inject dependency to your Spring project, you can also use @Inject and @javax.inject.Qualifier. It is actually a good idea to do this, because it reduces the Spring dependencies in your Java code.

Conclusion

We can use custom annotations and scoped beans to inject domain objects into other components. Injecting domain objects can make your code easier to read and can lead to cleaner dependencies. If you only inject AuthService to obtain the logged-in user, you actually depend on the logged-in user and not on AuthService. On the downside, it couples your code more strongly to the Dependency Injection framework, which has to manage bean scopes for you. If you want to keep the ability to use your classes outside a Dependency Injection container, this can be a problem. Which types of domain objects are suitable for injection highly depends on the application you are working on. Good candidates are domain objects you use often and which do not depend on any method or request parameters. The currently logged-in user is an object that is often suitable for injection.

You can find the source of the shown example on GitHub.

Reference: Injecting domain objects instead of infrastructure components from our JCG partner Michael Scharhag at the mscharhag, Programming and Stuff blog.

Spring @Configuration and injecting bean dependencies as method parameters

One of the ways Spring recommends injecting inter-dependencies between beans is shown in the following sample, copied from the Spring reference guide:

@Configuration
public class AppConfig {

  @Bean
  public Foo foo() {
    return new Foo(bar());
  }

  @Bean
  public Bar bar() {
    return new Bar("bar1");
  }
}

So here, bean `foo` is being injected with a `bar` dependency. However, there is an alternative way to inject the dependency that is not documented well: simply take the dependency as a `@Bean` method parameter, this way:

@Configuration
public class AppConfig {

  @Bean
  public Foo foo(Bar bar) {
    return new Foo(bar);
  }

  @Bean
  public Bar bar() {
    return new Bar("bar1");
  }
}

There is a catch here, though: the injection is now by type. The `bar` dependency is resolved by type first and, if duplicates are found, then by name:

@Configuration
public static class AppConfig {

  @Bean
  public Foo foo(Bar bar1) {
    return new Foo(bar1);
  }

  @Bean
  public Bar bar1() {
    return new Bar("bar1");
  }

  @Bean
  public Bar bar2() {
    return new Bar("bar2");
  }
}

In the above sample the `bar1` dependency will be correctly injected. If you want to be more explicit about it, a @Qualifier annotation can be added:

@Configuration
public class AppConfig {

  @Bean
  public Foo foo(@Qualifier("bar1") Bar bar1) {
    return new Foo(bar1);
  }

  @Bean
  public Bar bar1() {
    return new Bar("bar1");
  }

  @Bean
  public Bar bar2() {
    return new Bar("bar2");
  }
}

So now, the question is whether this is recommended at all. I would say yes, for certain cases.
For example, had the `bar` bean been defined in a different @Configuration class, one way to inject the dependency would be along these lines:

@Configuration
public class AppConfig {

  @Autowired
  @Qualifier("bar1")
  private Bar bar1;

  @Bean
  public Foo foo() {
    return new Foo(bar1);
  }
}

I find the method parameter approach simpler here:

@Configuration
public class AppConfig {

  @Bean
  public Foo foo(@Qualifier("bar1") Bar bar1) {
    return new Foo(bar1);
  }
}

Thoughts?

Reference: Spring @Configuration and injecting bean dependencies as method parameters from our JCG partner Biju Kunjummen at the all and sundry blog.

10 Hosted Continuous Integration Services for a Private Repository

Every project I'm working with starts with the setup of a continuous integration pipeline. I'm a big fan of cloud services, which is why I was always using travis-ci.org. A few of my clients questioned this choice recently, mostly because of the price. So I decided to make a brief analysis of the market. I configured rultor, an open source project, in every CI service I managed to find. All of them are free for open source projects. All of them are hosted and do not require any server installation. Here they are, in order of my personal preference:

                     Price       Linux   Windows   MacOS
travis-ci.com        $129/mo     YES     NO        YES
snap-ci.com          $80/mo      YES     NO        NO
semaphoreapp.com     $29/mo      YES     NO        NO
appveyor.com         $39/mo      NO      YES       NO
shippable.com        $1/mo       YES     NO        NO
wercker.com          free!       YES     NO        NO
codeship.io          $49/mo      YES     NO        NO
magnum-ci.com        ?           YES     NO        NO
drone.io             $25/mo      YES     NO        NO
circleci.io          $19/mo      YES     NO        NO
solanolabs.com       $15/mo      YES     NO        NO
hosted-ci.com        $49/mo      NO      NO        YES
ship.io              free!       YES     NO        YES

If you know any other good continuous integration services, email me; I'll review and add them to this list.

travis-ci.org is the best platform I've seen so far. Mostly because it is the most popular. It integrates perfectly with GitHub and has proper documentation. One important downside is the price of $129 per month. "With this money you can get a dedicated EC2 instance and install Jenkins there" — some of my clients say. I strongly disagree, since Jenkins will require 24×7 administration, which costs way more than $129, but it's always difficult to explain.

snap-ci.com is a product of ThoughtWorks, the author of Go, an open source continuous integration server. It looks a bit more complicated than the others, giving you the ability to define "stages" and combine them into pipelines. I'm not sure yet how these mechanisms may help in the small and medium size projects we're mostly working with, but they look "cool".

semaphoreapp.com is easy to configure and work with. It makes the impression of a lightweight system, which I generally appreciate.
As a downside, they don't have Maven pre-installed, but this was solved easily with a short custom script that downloads and unpacks Maven. Another downside is that they are not configurable through a file (like .travis.yml) — you have to do everything through the UI.

appveyor.com is the only one that runs Windows builds. Even though I'm working mostly with Java and Ruby, which are expected to be platform independent, they very often turn out to be exactly the opposite. When your build succeeds on Linux, there is almost no guarantee it will pass on Windows or Mac. I'm planning to use AppVeyor in every project, in combination with some other CI service. I'm still testing it, though.

shippable.com was easy to configure, since it understands .travis.yml out of the box. Besides that, nothing fancy.

wercker.com is a European product from Amsterdam, which is still in beta and therefore free for all projects. The platform looks very promising. It is still free for private repositories and is backed by investments. I'm still testing it.

codeship.io works fine, but their web UI looks a bit outdated. Anyway, I'm using them now; we'll see.

magnum-ci.com is a very lightweight and young system. It doesn't connect automatically to GitHub, so you have to do some manual work to add a web hook. Besides that, it works just fine.

drone.io works fine, but their support didn't reply to me when I asked for a Maven version update. Besides that, their badge is not updated correctly in the GitHub README.md.

circleci.io — I still don't know why my build fails there. Really difficult to configure and understand what's going on. Trying to figure it out.

solanolabs.com — testing now.

hosted-ci.com — testing now.

ship.io — testing now.

Keep in mind that no matter how good and expensive your continuous integration service is, your quality won't grow unless you make your master branch read-only.
Related Posts

You may also find these posts interesting:

- Deploying to Heroku, in One Click
- Deployment Script vs. Rultor
- How to Publish to Rubygems, in One Click
- How to Deploy to CloudBees, in One Click
- How to Release to Maven Central, in One Click

Reference: 10 Hosted Continuous Integration Services for a Private Repository from our JCG partner Yegor Bugayenko at the About Programming blog.

Exploding Job Offers and Multiple Offer Synchronization

A recent post by Y Combinator's Sam Altman, Exploding Offers Suck, detailed his distaste for accelerators and venture capitalists who pressure entrepreneurs into critical decisions on investment offers before they have a chance to shop around. The article outlines Y Combinator's policy of allowing offer acceptance up until the beginning of the program. An exploding offer is any offer that lists a date for the offer to expire, with the allotted time being minimal. Altman's article is about venture funding, but most in the industry encounter this situation via job offers. The practice is fairly standard for college internships, where acceptance is required months before the start date. Exploding offers may be less common for experienced professionals, but they are hardly rare.

Many companies use templates for job offers where deadlines are arbitrary or listed only to encourage quick responses, which gives the false appearance of an exploding offer. Other firms have strict policies on enforcement, although strong candidates in a seller's market will cause exceptions.

Why do exploding offers exist?

The employer's justification for exploding job offers may focus on planning, multiple candidates, and finite needs. If a company has three vacancies and three candidates, how likely is it that all three receive offers? What is the likelihood they all accept? Companies develop a pipeline of perhaps twenty candidates for those three jobs. If six are found qualified, the company has a dilemma. The numbers and odds become ominous for firms evaluating thousands of college students for 100 internships. The exploding offer is one method for companies to mitigate the risk of accepted offers outnumbering vacancies. They are also used to ensure that the second or third best candidate will still be available while the hirer awaits a response from the first choice.
Fast acceptance of exploding offers may be viewed as a measure of a candidate's interest in the position and company, particularly at smaller and riskier firms. Job seekers may feel that exploding offers serve to limit their employment options, with a potential side effect being lower salaries due to reduced competition. These offers may also help level the playing field for non-elite companies, as risk-averse candidates may subscribe to the bird-in-the-hand theory. Since they are not uncommon, it is important to have a strategy for handling exploding offers and multiple-offer scenarios, and for preventing these problems altogether.

Avoid the situation entirely

The issue with exploding and multiple offers is time constraint, and job seekers need to be proactive about these scenarios. The best strategy is to maximize the possibility that all offers arrive at the same time. If all offers are received simultaneously, there is no problem. Those applying to several firms should anticipate the possibility that offers may arrive over days or weeks. When researching companies of interest, investigate their standard interview process, the number of steps involved, and how long it takes. Current or former employees and recruiters representing the company will have answers, and when in doubt the general rule is that larger companies tend to move slower. Initiate contact with the slower companies first, and apply to the fastest hirers once interviews start with the first group.

Strategies to control the timing of multiple offers

Unfortunately, job searches are unpredictable, and candidates feel they have little influence on the speed or duration of a hiring process. Stellar candidates have much more control than they might expect, but even average applicants can affect timelines. If Company A has scheduled a third face-to-face meeting while Company B is just getting around to setting up a phone screen with HR, the candidate needs to slow the process with A while expediting B.
What tactics can hasten or extend hiring processes in order to synchronize offers?
Speeding up
Ask – Ask about the anticipated duration of the interview process and about any ways it can be expedited. This is a relatively benign request so long as it is made respectfully and tactfully.
Flexibility and availability – Provide prompt interview availability details regularly even if they aren’t requested, and (if possible) offer flexibility to meet off-hours or on weekends.
Pressure – As somewhat of a last resort, some candidates may choose to disclose the existence of other offers and the need for a decision by the employer. This can backfire and should be approached delicately.
Slowing down
Delay interviews – This is the easiest and most effective method to employ, with the risk being that the position may be offered to someone else in the interim. When multiple rounds are likely, adding a couple of days between rounds can extend the process significantly.
Ask questions – There are many details about a company that influence decisions to accept or reject offers, and the process of gathering that information takes time. At the offer stage, questions about benefits or policies can usually buy a day or two.
Negotiate – Negotiating an offer will require the company to get approvals and to incorporate new terms into a letter.
Request additional interviews or meetings – Once an offer is made, candidates feel pressure to accept or reject. Another option is to request additional dialogue and meetings to address any concerns and finalize decisions.
Specifics for exploding offers
The issue with exploding offers is typically the need to respond before other interviews are completed, so the goal is to buy time. Some candidates choose to accept the exploding offer as a backup in case a better offer isn’t made. This tactic isn’t optimal for either party, as the company may be left without a hire and the candidate has burned a bridge.
In an exploding offer situation, first discover whether the offer is truly exploding. As was mentioned earlier, many companies want a timely answer but don’t need one. The offer letter may give the appearance of being an exploding offer without actual intent. One response to test the waters is “The offer letter says I have x days to decide. Is that deadline firm, or could it be extended a day or two if I am not prepared to make a decision at that point?” The company’s answer will be telling. If it turns out to be a truly exploding offer, resorting to the tactics listed above could help. HR reps may be uncomfortable pressing for a decision if they feel a candidate’s legitimate questions are unanswered. As the deadline approaches, negotiating terms and asking for more detail will provide time. The request for another meeting will require scheduling, and the parties involved might not be available until after the deadline. As a last resort, simply asking for an extension is always an option. Reference: Exploding Job Offers and Multiple Offer Synchronization from our JCG partner Dave Fecak at the Job Tips For Geeks blog....

Factory Without IF-ELSE

Object-oriented languages have a very powerful feature, polymorphism, which can be used to remove if/else or switch-case constructs from code. Code without conditions is easier to read. Still, there are some places where you have to use them, and the Factory/ServiceProvider class is one such example. I am sure you have seen a factory class with an IF-ELSEIF chain that keeps on growing. In this blog I will share some techniques that can be used to remove conditions from a factory class.   I will use the code snippet below as an example: public static Validator newInstance(String validatorType) { if ("INT".equals(validatorType)) return new IntValidator(); else if ("DATE".equals(validatorType)) return new DateValidator(); else if ("LOOKUPVALUE".equals(validatorType)) return new LookupValueValidator(); else if ("STRINGPATTERN".equals(validatorType)) return new StringPatternValidator(); return null; } Reflection This is the first thing that comes to mind when you want to remove conditions – you get the feeling of being a framework developer! public static Validator newInstance(String validatorClass) throws Exception { return (Validator) Class.forName(validatorClass).newInstance(); } This looks very simple, but there are problems: the caller has to remember the fully qualified class name, the result must be cast, and reflective instantiation throws checked exceptions. Map A Map can be used to map a user-friendly name to an actual class instance: Map<String, Validator> validators = new HashMap<String, Validator>() { { put("INT", new IntValidator()); put("LOOKUPVALUE", new LookupValueValidator()); put("DATE", new DateValidator()); put("STRINGPATTERN", new StringPatternValidator()); } }; public Validator newInstance(String validatorType) { return validators.get(validatorType); } This also looks neat, without the overhead of reflection. Note that this version hands out shared instances, so the validators must be stateless.
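If shared instances are a concern, a variant of the map approach stores factories instead of instances. The following is a sketch assuming Java 8; the IntValidator and DateValidator bodies here are simplified stand-ins for the article's validators, not their actual implementations:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Simplified stand-ins for the validators in the article.
interface Validator { boolean isValid(String value); }

class IntValidator implements Validator {
    public boolean isValid(String value) {
        try { Integer.parseInt(value); return true; }
        catch (NumberFormatException e) { return false; }
    }
}

class DateValidator implements Validator {
    // Toy check: accepts ISO-style yyyy-MM-dd strings only.
    public boolean isValid(String value) {
        return value.matches("\\d{4}-\\d{2}-\\d{2}");
    }
}

public class ValidatorFactory {
    // Map names to factories, not instances: each lookup builds a fresh validator.
    private static final Map<String, Supplier<Validator>> FACTORIES = new HashMap<>();
    static {
        FACTORIES.put("INT", IntValidator::new);
        FACTORIES.put("DATE", DateValidator::new);
    }

    public static Validator newInstance(String validatorType) {
        Supplier<Validator> factory = FACTORIES.get(validatorType);
        if (factory == null) {
            throw new IllegalArgumentException("Unknown validator type: " + validatorType);
        }
        return factory.get();
    }

    public static void main(String[] args) {
        System.out.println(ValidatorFactory.newInstance("INT").isValid("42"));   // true
        System.out.println(ValidatorFactory.newInstance("DATE").isValid("42"));  // false
    }
}
```

Throwing on an unknown type also fixes the null return of the original snippet, which would otherwise surface as a NullPointerException far away from the factory.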
Enum This is an interesting one: enum ValidatorType { INT { public Validator create() { return new IntValidator(); } }, LOOKUPVALUE { public Validator create() { return new LookupValueValidator(); } }, DATE { public Validator create() { return new DateValidator(); } }; public Validator create() { return null; } } public Validator newInstance(ValidatorType validatorType) { return validatorType.create(); } This approach uses constant-specific enum methods to remove the condition; one issue is that you need an enum constant for each type, and you don’t want to create tons of them! I personally like this method. Conclusion If-else and switch-case make code difficult to understand, and we should try to avoid them as much as possible. Language constructs can be used to avoid many switch cases. Trying to code without IF-ELSE forces us to come up with better solutions. Reference: Factory Without IF-ELSE from our JCG partner Ashkrit Sharma at the Are you ready blog....
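As a footnote to the enum approach above: with Java 8, the per-constant bodies (and the null-returning default create method) can be replaced by passing a constructor reference to each constant. This is a sketch, again using simplified stand-in validators rather than the article's actual implementations:

```java
import java.util.function.Supplier;

// Simplified stand-ins for the validators in the article.
interface Validator { boolean isValid(String value); }

class IntValidator implements Validator {
    public boolean isValid(String value) {
        try { Integer.parseInt(value); return true; }
        catch (NumberFormatException e) { return false; }
    }
}

class DateValidator implements Validator {
    public boolean isValid(String value) {
        return value.matches("\\d{4}-\\d{2}-\\d{2}");
    }
}

enum ValidatorType {
    INT(IntValidator::new),
    DATE(DateValidator::new);

    private final Supplier<Validator> factory;

    ValidatorType(Supplier<Validator> factory) {
        this.factory = factory;
    }

    // No default returning null: every constant must supply a factory.
    public Validator create() {
        return factory.get();
    }
}

public class EnumFactoryDemo {
    public static void main(String[] args) {
        System.out.println(ValidatorType.INT.create().isValid("123"));  // true
        System.out.println(ValidatorType.DATE.create().isValid("123")); // false
    }
}
```

A nice side effect is that the compiler forces callers to pass a valid ValidatorType, so the "unknown type" case disappears entirely.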

Ceylon 1.1.0 is now available

Ten whole months in the making, this is the biggest release of Ceylon so far! Ceylon 1.1.0 incorporates oodles of enhancements and bugfixes, with well over 1400 issues closed. Ceylon is a modern, modular, statically typed programming language for the Java and JavaScript virtual machines. The language features a flexible and very readable syntax, a unique and uncommonly elegant static type system, a powerful module architecture, and excellent tooling, including an awesome Eclipse-based IDE. Ceylon enables the development of cross-platform modules that execute portably in both virtual machine environments. Alternatively, a Ceylon module may target one or the other platform, in which case it may interoperate with native code written for that platform. For the end user, the most significant improvements in Ceylon 1.1 are:
- performance enhancements, especially to compilation times in the IDE,
- even smoother interoperation with Java overloading and Java generics,
- out-of-the-box support for deployment of Ceylon modules on OSGi containers,
- enhancements to the Ceylon SDK, including the new platform modules ceylon.promise, ceylon.locale, and ceylon.logging, along with many improvements to ceylon.language, ceylon.collection, and ceylon.test,
- many new features and improvements in Ceylon IDE,
- ceylon.formatter, a high-quality code formatter written in Ceylon,
- support for command line tool plugins, including the new ceylon format and ceylon build plugins, and
- integration with vert.x.
A longer list of changes may be found here.
In the box
This release includes:
- a complete language specification that defines the syntax and semantics of Ceylon in language accessible to the professional developer,
- a command line toolset including compilers for Java and JavaScript, a documentation compiler, and support for executing modular programs on the JVM and Node.js,
- a powerful module architecture for code organization, dependency management, and module isolation at runtime,
- the language module, our minimal, cross-platform foundation of the Ceylon SDK, and
- a full-featured Eclipse-based integrated development environment.
Language
Ceylon is a highly understandable object-oriented language with static typing. The language features:
- an emphasis upon readability and a strong bias toward omission or elimination of potentially-harmful or potentially-ambiguous constructs and toward highly disciplined use of static types,
- an extremely powerful and uncommonly elegant type system combining subtype and parametric polymorphism with: first-class union and intersection types, both declaration-site and use-site variance, and the use of principal types for local type inference and flow-sensitive typing,
- a unique treatment of function and tuple types, enabling powerful abstractions, along with the most elegant approach to null of any modern language,
- first-class constructs for defining modules and dependencies between modules,
- a very flexible syntax including comprehensions and support for expressing tree-like structures,
- fully-reified generic types, on both the JVM and JavaScript virtual machines, and
- a unique typesafe metamodel.
More information about these language features may be found in the feature list and quick introduction.
This release introduces the following new language features:
- support for use-site variance, enabling complete interop with Java generics,
- dynamic interfaces, providing a typesafe way to interoperate with dynamically typed native JavaScript code,
- type inference for parameters of anonymous functions that occur in an argument list, and
- a Byte class that is optimized by the compiler.
Language module
The language module was a major focus of attention in this release, with substantial performance improvements, API optimizations, and new features, including the addition of a raft of powerful operations for working with streams. The language module now includes an API for deploying Ceylon modules programmatically from Java. The language module is now considered stable, and no further breaking changes to its API are contemplated.
Command line tools
The ceylon command now supports a plugin architecture. For example, to install the ceylon format subcommand, type: ceylon plugin install ceylon.formatter/1.1.0
IDE
This release of the IDE features dramatic improvements to build performance, and introduces many new features, including:
- a code formatter,
- seven new refactorings and many improvements to existing refactorings,
- many new quick fixes/assists,
- IntelliJ-style “chain completion” and completion of toplevel functions applying to a value,
- a rewritten Explorer view, with better presentation of modules and modular dependencies,
- synchronization of all keyboard accelerators with JDT equivalents, and
- Quick Find References, Recently Edited Files, Format Block, Visualize Modular Dependencies, Open in Type Hierarchy View, Go to Refined Declaration, and much more.
SDK
The platform modules, recompiled for 1.1.0, are available in the shared community repository, Ceylon Herd.
This release introduces the following new platform modules:
- ceylon.promise, cross-platform support for promises,
- ceylon.locale, a cross-platform library for internationalization, and
- ceylon.logging, a simple logging API.
In addition, there were many improvements to ceylon.collection, which is now considered stable, and to ceylon.test. The Ceylon SDK is available from Ceylon Herd, the community module repository.
Vert.x integration
mod-lang-ceylon implements Ceylon 1.1 support for Vert.x 2.1.x, and may be downloaded here.
Community
The Ceylon community site, http://ceylon-lang.org, includes documentation and information about getting involved.
Source code
The source code for Ceylon, its specification, and its website is freely available from GitHub.
Issues
Bugs and suggestions may be reported in GitHub’s issue tracker.
Acknowledgement
We’re deeply indebted to the community volunteers who contributed a substantial part of the current Ceylon codebase, working in their own spare time. The following people have contributed to this release: Gavin King, Stéphane Épardaud, Tako Schotanus, Emmanuel Bernard, Tom Bentley, Aleš Justin, David Festal, Max Rydahl Andersen, Mladen Turk, James Cobb, Tomáš Hradec, Ross Tate, Ivo Kasiuk, Enrique Zamudio, Roland Tepp, Diego Coronel, Daniel Rochetti, Loic Rouchon, Matej Lazar, Lucas Werkmeister, Akber Choudhry, Corbin Uselton, Julien Viet, Stephane Gallès, Paco Soberón, Renato Athaydes, Michael Musgrove, Flavio Oliveri, Michael Brackx, Brent Douglas, Lukas Eder, Markus Rydh, Julien Ponge, Pete Muir, Henning Burdack, Nicolas Leroux, Brett Cannon, Geoffrey De Smet, Guillaume Lours, Gunnar Morling, Jeff Parsons, Jesse Sightler, Oleg Kulikov, Raimund Klein, Sergej Koščejev, Chris Marshall, Simon Thum, Maia Kozheva, Shelby, Aslak Knutsen, Fabien Meurisse, Sjur Bakka, Xavier Coulon, Ari Kast, Dan Allen, Deniz Türkoglu, F. Meurisse, Jean-Charles Roger, Johannes Lehmann, Alexander Altman, allentc, Nikolay Tsankov, Chris Horne, gabriel-mirea, Georg Ragaller, Griffin DeJohn, Harald Wellmann, klinger, Luke, Oliver Gondža, Stephen Crawley. Reference: Ceylon 1.1.0 is now available from our JCG partner Gavin King at the Ceylon Team blog....

WildFly subsystem for RHQ Metrics

For RHQ-Metrics I have started writing a subsystem for WildFly 8 that is able to collect metrics inside WildFly and then send them at regular intervals (currently every minute) to a RHQ-Metrics server. The next graph is a visualization with Grafana of the outcome when this sender was running for 1.5 days in a row. (It is interesting to see how the JVM is fine-tuning its memory requirement over time, using less and less memory for this constant workload.) The following is a visualization of the setup: the sender runs as a subsystem inside WildFly and reads metrics from the WildFly management API. The gathered metrics are then pushed via REST to RHQ-Metrics. Of course it is possible to send them to a RHQ-Metrics server that is running on a separate host. The configuration of the subsystem looks like this: <subsystem xmlns="urn:org.rhq.metrics:wildflySender:1.0"> <rhqm-server name="localhost" enabled="true" port="8080" token="0x-deaf-beef"/> <metric name="non-heap" path="/core-service=platform-mbean/type=memory" attribute="non-heap-memory-usage"/> <metric name="thread-count" path="/core-service=platform-mbean/type=threading" attribute="thread-count"/> </subsystem> As you can see, the path to the DMR resource and the name of the attribute to be monitored as a metric can be given in the configuration. The implementation is still basic at the moment – you can find the source code in the RHQ-Metrics repository on GitHub. Contributions are very welcome. Heiko Braun and Harald Pehl are currently working on optimizing the scheduling with individual intervals and possible batching of requests for managed servers in a domain. Many thanks go to Emmanuel Hugonnet, Kabir Khan and especially Tom Cerar for their help in getting me going with writing a subsystem, which was pretty tricky for me.
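To resolve a configured path attribute such as /core-service=platform-mbean/type=memory, a sender like this has to break the string into key=value address segments before querying the management API. The following is a minimal, hypothetical sketch of that parsing step (the class and method names are illustrative; this is not the actual subsystem code):

```java
import java.util.ArrayList;
import java.util.List;

public class DmrPath {
    // Split a DMR path like "/core-service=platform-mbean/type=memory"
    // into its key/value address segments.
    static List<String[]> parseAddress(String path) {
        List<String[]> segments = new ArrayList<>();
        for (String part : path.split("/")) {
            if (part.isEmpty()) {
                continue; // skip the empty token produced by the leading slash
            }
            segments.add(part.split("=", 2));
        }
        return segments;
    }

    public static void main(String[] args) {
        for (String[] seg : parseAddress("/core-service=platform-mbean/type=memory")) {
            System.out.println(seg[0] + " -> " + seg[1]);
        }
        // prints:
        // core-service -> platform-mbean
        // type -> memory
    }
}
```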
The parsers, the object model and the XML had a big tendency to disagree with each other! Reference: WildFly subsystem for RHQ Metrics from our JCG partner Heiko Rupp at the Some things to remember blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.