
An Introduction to Generics in Java – Part 6

This is a continuation of an introductory discussion on Generics, previous parts of which can be found here. In the last article we discussed recursive bounds on type parameters and saw how a recursive bound helped us reuse the vehicle comparison logic. At the end of that article, I suggested that a type mix-up may occur when we are not careful enough. Today we will see an example of this. The mix-up can occur if someone mistakenly creates a subclass of Vehicle in the following way:

    /**
     * Definition of Vehicle
     */
    public abstract class Vehicle<E extends Vehicle<E>> implements Comparable<E> {
        // other methods and properties

        public int compareTo(E vehicle) {
            // method implementation
        }
    }

    /**
     * Definition of Bus
     */
    public class Bus extends Vehicle<Bus> {}

    /**
     * BiCycle, new subtype of Vehicle
     */
    public class BiCycle extends Vehicle<Bus> {}

    /**
     * Now this class's compareTo method will take a Bus type
     * as its argument. As a result, you will not be able to compare
     * a BiCycle with another BiCycle, but only with a Bus.
     */
    cycle.compareTo(anotherCycle); // This will generate a compile time error
    cycle.compareTo(bus);          // but you will be able to do this without any error

This type of mix-up does not occur with enums because the JVM takes care of subclassing and creating instances for enum types, but if we use this style in our own code then we have to be careful.

Let's talk about another interesting application of recursive bounds. Consider the following class:

    public class MyClass {
        private String attrib1;
        private String attrib2;
        private String attrib3;
        private String attrib4;
        private String attrib5;

        public MyClass() {}

        public String getAttrib1() { return attrib1; }
        public void setAttrib1(String attrib1) { this.attrib1 = attrib1; }

        public String getAttrib2() { return attrib2; }
        public void setAttrib2(String attrib2) { this.attrib2 = attrib2; }

        public String getAttrib3() { return attrib3; }
        public void setAttrib3(String attrib3) { this.attrib3 = attrib3; }

        public String getAttrib4() { return attrib4; }
        public void setAttrib4(String attrib4) { this.attrib4 = attrib4; }

        public String getAttrib5() { return attrib5; }
        public void setAttrib5(String attrib5) { this.attrib5 = attrib5; }
    }

If we want to create an instance of this class, then we can do this:

    MyClass mc = new MyClass();
    mc.setAttrib1("Attribute 1");
    mc.setAttrib2("Attribute 2");

The above code creates an instance of the class and initializes the properties. If we could use method chaining here, then we could have written:

    MyClass mc = new MyClass().setAttrib1("Attribute 1")
        .setAttrib2("Attribute 2");

which obviously looks much better than the first version.
However, to enable this type of method chaining, we need to modify MyClass so that every setter returns the current instance:

    public class MyClass {
        private String attrib1;
        private String attrib2;
        private String attrib3;
        private String attrib4;
        private String attrib5;

        public MyClass() {}

        public String getAttrib1() { return attrib1; }
        public MyClass setAttrib1(String attrib1) { this.attrib1 = attrib1; return this; }

        public String getAttrib2() { return attrib2; }
        public MyClass setAttrib2(String attrib2) { this.attrib2 = attrib2; return this; }

        public String getAttrib3() { return attrib3; }
        public MyClass setAttrib3(String attrib3) { this.attrib3 = attrib3; return this; }

        public String getAttrib4() { return attrib4; }
        public MyClass setAttrib4(String attrib4) { this.attrib4 = attrib4; return this; }

        public String getAttrib5() { return attrib5; }
        public MyClass setAttrib5(String attrib5) { this.attrib5 = attrib5; return this; }
    }

and then we will be able to use method chaining for instances of this class. However, if we want to use method chaining where inheritance is involved, things get messy:

    public abstract class Parent {
        private String attrib1;
        private String attrib2;
        private String attrib3;
        private String attrib4;
        private String attrib5;

        public Parent() {}

        public String getAttrib1() { return attrib1; }
        public Parent setAttrib1(String attrib1) { this.attrib1 = attrib1; return this; }

        public String getAttrib2() { return attrib2; }
        public Parent setAttrib2(String attrib2) { this.attrib2 = attrib2; return this; }

        public String getAttrib3() { return attrib3; }
        public Parent setAttrib3(String attrib3) { this.attrib3 = attrib3; return this; }

        public String getAttrib4() { return attrib4; }
        public Parent setAttrib4(String attrib4) { this.attrib4 = attrib4; return this; }

        public String getAttrib5() { return attrib5; }
        public Parent setAttrib5(String attrib5) { this.attrib5 = attrib5; return this; }
    }

    public class Child extends Parent {
        private String attrib6;
        private String attrib7;

        public Child() {}

        public String getAttrib6() { return attrib6; }
        public Child setAttrib6(String attrib6) { this.attrib6 = attrib6; return this; }

        public String getAttrib7() { return attrib7; }
        public Child setAttrib7(String attrib7) { this.attrib7 = attrib7; return this; }
    }

    /**
     * Now try using method chaining for instances of Child
     * in the following way; you will get compile time errors.
     */
    Child c = new Child().setAttrib1("Attribute 1").setAttrib6("Attribute 6");

The reason is that even though Child inherits all the setters from its parent, the return type of all those setter methods is Parent, not Child. So the first setter returns a reference of type Parent, and calling setAttrib6 on it results in a compilation error, because Parent has no such method. We can resolve this problem by introducing a generic type parameter on Parent and defining a recursive bound on it.
All of its children will pass themselves as the type argument when they extend it, ensuring that the setter methods return references of their own type:

    public abstract class Parent<T extends Parent<T>> {
        private String attrib1;
        private String attrib2;
        private String attrib3;
        private String attrib4;
        private String attrib5;

        public Parent() {}

        public String getAttrib1() { return attrib1; }
        @SuppressWarnings("unchecked")
        public T setAttrib1(String attrib1) { this.attrib1 = attrib1; return (T) this; }

        public String getAttrib2() { return attrib2; }
        @SuppressWarnings("unchecked")
        public T setAttrib2(String attrib2) { this.attrib2 = attrib2; return (T) this; }

        public String getAttrib3() { return attrib3; }
        @SuppressWarnings("unchecked")
        public T setAttrib3(String attrib3) { this.attrib3 = attrib3; return (T) this; }

        public String getAttrib4() { return attrib4; }
        @SuppressWarnings("unchecked")
        public T setAttrib4(String attrib4) { this.attrib4 = attrib4; return (T) this; }

        public String getAttrib5() { return attrib5; }
        @SuppressWarnings("unchecked")
        public T setAttrib5(String attrib5) { this.attrib5 = attrib5; return (T) this; }
    }

    public class Child extends Parent<Child> {
        private String attrib6;
        private String attrib7;

        public String getAttrib6() { return attrib6; }
        public Child setAttrib6(String attrib6) { this.attrib6 = attrib6; return this; }

        public String getAttrib7() { return attrib7; }
        public Child setAttrib7(String attrib7) { this.attrib7 = attrib7; return this; }
    }

Notice that we have to explicitly cast this to type T because the compiler does not know whether this conversion is possible, even though it is, because T by definition is bounded by Parent<T>. Also, since we are casting an object reference to T, the compiler issues an unchecked warning. To suppress it we put @SuppressWarnings("unchecked") above the setters (a variation that avoids repeating the annotation is sketched after this post). With the above modifications, it's perfectly valid to do this:

    Child c = new Child().setAttrib1("Attribute 1")
        .setAttrib6("Attribute 6");

When writing setters this way, we should be careful not to use recursive bounds for any other purpose, such as accessing a child's state from the parent, because that would expose the parent to the internal details of its subclasses and eventually break encapsulation.

With this post I finish the basic introduction to Generics. There are many things that I did not discuss in this series, because I believe they are beyond the introductory level. Until next time.

Reference: An Introduction to Generics in Java – Part 6 from our JCG partner Sayem Ahmed at the Random Thoughts blog.
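As a postscript to the article above: a common variation of this idiom, not used in the original code, removes the unchecked cast entirely by making the parent declare an abstract self() method that each concrete subclass implements. This is a minimal sketch of that alternative, not the article's own approach:

    public abstract class Parent<T extends Parent<T>> {
        private String attrib1;

        // Each concrete subclass returns itself, so no unchecked cast
        // and no @SuppressWarnings is needed anywhere.
        protected abstract T self();

        public T setAttrib1(String attrib1) {
            this.attrib1 = attrib1;
            return self();
        }
    }

    public class Child extends Parent<Child> {
        private String attrib6;

        @Override
        protected Child self() { return this; }

        public Child setAttrib6(String attrib6) {
            this.attrib6 = attrib6;
            return self();
        }
    }

The trade-off is one extra method per subclass in exchange for keeping the cast (and its warning) out of every setter.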

Event Tracking with Analytics API v4 for Android

As I've learned from developing my own mileage tracking app for cyclists and commuters, getting ratings and feedback from users can be challenging and time consuming. Event tracking can help by enabling you to develop a sense of how popular a particular feature is and how often it's getting used by users of your app. In Android, Google Play Services' Analytics API v4 can be used to gather statistics on the user events that occur within your app. In this post I'll quickly show you how to use this API to accomplish simple event tracking.

Getting started

It's important to say at this point that all of these statistics are totally anonymous. App developers who use analytics have no idea who is using each feature or generating each event, only that an event occurred. Assuming you've set up Google Analytics v4 in your app as per my last post, tracking app events is fairly simple. The first thing you need is your analytics application Tracker (obtained in my case by calling getApplication() as per the previous post). Bear in mind that this method is only available in an object that extends Android's Activity or Service class, so you can't use it everywhere without some messing about. Once you have your application Tracker, you should use an analytics EventBuilder to build() an event and use the send() method on the Tracker to send it to Google. Building an event is easy. You simply create a new HitBuilders.EventBuilder, setting a 'category' and an 'action' for your new event.

Sample code

The sample code below shows how I track the user's manual use of the 'START' button in Trip Computer. I have similar tracking events for STOP and also for the use of key settings and features like the activation of the app's unique 'battery-saver' mode (which I understand is quite popular with cyclists).

    // Get an Analytics Event tracker.
    Tracker myTracker = ((TripComputerApplication) getApplication())
            .getTracker(TripComputerApplication.TrackerName.APP_TRACKER);

    // Build and send the Analytics Event.
    myTracker.send(new HitBuilders.EventBuilder()
            .setCategory("Journey Events")
            .setAction("Pressed Start Button")
            .build());

Once the events are reported back to Google, the Analytics console will display them in the 'Behaviour > Events > Overview' panel and display a simple count of how many times each event was raised within the tracking period. You can also further subdivide the actions by setting a 'label' or by providing a 'value', but neither of these is actually required (a short illustrative sketch follows at the end of this post).

More information

For more information see the following articles:

- https://support.google.com/analytics/answer/1033068?hl=en
- https://developers.google.com/analytics/devguides/collection/gajs/eventTrackerGuide#Anatomy

Reference: Event Tracking with Analytics API v4 for Android from our JCG partner Ben Wilcock at Ben Wilcock's blog.
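As a postscript, here is what attaching the optional label and value mentioned above might look like. The EventBuilder does expose setLabel() and setValue(); the category, action and label strings below are illustrative only, not taken from the app:

    // Build and send an event with an optional label and value attached.
    myTracker.send(new HitBuilders.EventBuilder()
            .setCategory("Journey Events")
            .setAction("Pressed Stop Button")
            .setLabel("Battery Saver Active") // optional: subdivides the action
            .setValue(1L)                     // optional: numeric value summed by Analytics
            .build());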

Daemonizing JVM-based applications

Deployment architecture design is a vital part of any custom-built server-side application development project. Due to its significance, deployment architecture design should commence early and proceed in tandem with other development activities. The complexity of deployment architecture design depends on many aspects, including scalability and availability targets of the provided service, rollout processes as well as technical properties of the system architecture.

Serviceability and operational concerns, such as deployment security, monitoring, backup/restore etc., relate to the broader topic of deployment architecture design. These concerns are cross-cutting in nature and may need to be addressed on different levels ranging from service rollout processes to the practical system management details.

On the system management detail level the following challenges often arise when using a pure JVM-based application deployment model (on Unix-like platforms):

- How to securely shut down the app server or application? Often, a TCP listener thread listening for shutdown requests is used. If you have many instances of the same app server deployed on the same host, it's sometimes easy to confuse the instances and shut down the wrong one. Also, you'll have to prevent unauthorized access to the shutdown listener.
- Creating init scripts that integrate seamlessly with system startup and shutdown mechanisms (e.g. Sys-V init, systemd, Upstart etc.)
- How to automatically restart the application if it dies?
- Log file management. Application logs can be managed (e.g. rotated, compressed, deleted) by a log library. App server or platform logs can sometimes also be managed using a log library, but occasionally integration with OS level tools (e.g. logrotate) may be necessary.

There are a couple of solutions to these problems that enable tighter integration between the operating system and the application / application server. One widely used and generic solution is the Java Service Wrapper. The Java Service Wrapper is good at addressing the above challenges and is released under a proprietary license. A GPL v2 based community licensing option is available as well.

Apache commons daemon is another option. It has its roots in Apache Tomcat and integrates well with the app server, but it's much more generic than that, and in addition to Java, commons daemon can be used with other JVM-based languages such as Scala. As the name implies, commons daemon is Apache licensed. Commons daemon includes the following features:

- automatically restart the JVM if it dies
- enable secure shutdown of the JVM process using standard OS mechanisms (Tomcat's TCP based shutdown mechanism is error-prone and insecure)
- redirect STDERR/STDOUT and set the JVM process name
- allow integration with OS init script mechanisms (record JVM process pid)
- detach the JVM process from the parent process and console
- run the JVM and application with reduced OS privileges
- allow coordinating log file management with OS tools such as logrotate (reopen log files with the SIGUSR1 signal)

Deploying commons daemon

From an application developer point of view commons daemon consists of two parts: the jsvc binary used for starting applications and the commons daemon Java API. During startup, the jsvc binary bootstraps the application through lifecycle methods implemented by the application and defined by the commons daemon Java API. Jsvc creates a control process for monitoring and restarting the application upon abnormal termination.
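The lifecycle contract that jsvc drives is small. As a rough sketch of a bootstrap class implementing it (MyServer is a hypothetical application class, not part of commons daemon):

    import org.apache.commons.daemon.Daemon;
    import org.apache.commons.daemon.DaemonContext;

    public class MyAppDaemon implements Daemon {

        private MyServer server; // hypothetical application server object

        @Override
        public void init(DaemonContext context) throws Exception {
            // Called by jsvc, typically while still running with elevated
            // privileges: parse arguments, create the server instance,
            // acquire privileged resources (e.g. ports below 1024) here.
            server = new MyServer(context.getArguments());
        }

        @Override
        public void start() throws Exception {
            // Called after jsvc has dropped privileges (when -user is given).
            server.start();
        }

        @Override
        public void stop() throws Exception {
            // Called on shutdown; stop accepting work and release resources.
            server.stop();
        }

        @Override
        public void destroy() {
            server = null;
        }
    }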
Here's an outline for deploying commons daemon with your application:

1. Implement the commons daemon API lifecycle methods in an application bootstrap class (see "Using jsvc directly").
2. Compile and install jsvc. (Note that it's usually not good practice to install a compiler toolchain on production or QA servers.)
3. Place the commons-daemon API jar in the application classpath.
4. Figure out command line arguments for running your app through jsvc. Check out bin/daemon.sh in the Tomcat distribution for reference.
5. Create a proper init script based on the previous step. Tomcat can be installed via the package manager on many Linux distributions, and the package typically comes with an init script that can be used as a reference.

Practical experiences

The Tomcat distribution includes daemon.sh, a generic wrapper shell script that can be used as a basis for creating a system specific init script variant. One of the issues I encountered was that the wait configuration parameter default value couldn't be overridden by the invoker of the wrapper script. In some cases Tomcat random number generator initialization could exceed the maximum wait time, resulting in the initialization script reporting a failure, even if the app server would eventually get started. This seems to be fixed now. Another issue was that the wrapper script doesn't allow passing JVM parameters that contain spaces, which can be handy e.g. in conjunction with the JVM "-XX:OnOutOfMemoryError" & co. parameters. Using the wrapper script is optional, and it can also be changed easily, but since it includes some pieces of useful functionality, I'd rather reuse it than duplicate it, so I created a feature request and proposed a tiny patch for this (#55104).

While figuring out the correct command line arguments for getting jsvc to bootstrap your application, the "-debug" argument can be quite useful for troubleshooting purposes. Also, by default jsvc changes the working directory to /, in which case absolute paths should typically be used with other options. The "-cwd" option can be used for overriding the default working directory value.

Daemonizing Jetty

In addition to Tomcat, Jetty is another servlet container I often use. Using commons daemon with Tomcat poses no challenge since the integration already exists, so I decided to see how things would work with an app server that doesn't support commons daemon out-of-the-box. To implement the necessary changes in Jetty, I cloned the Jetty source code repository, added jsvc lifecycle methods in the Jetty bootstrap class and built Jetty. After that, I started experimenting with jsvc command line arguments for bootstrapping Jetty. Jetty comes with a jetty.sh startup script that has an option called "check" for outputting various pieces of information related to the installation. Among other things it outputs the command line arguments that would be used with the JVM. This provided quite a good starting point for the jsvc command line.

These are the command lines I ended up with:

    export JH=$HOME/jetty-9.2.2-SNAPSHOT
    export JAVA_HOME=`/usr/libexec/java_home -v 1.8`
    jsvc -debug -pidfile $JH/jetty.pid -outfile $JH/std.out -errfile $JH/std.err \
      -Djetty.logs=$JH/logs -Djetty.home=$JH -Djetty.base=$JH \
      -Djava.io.tmpdir=/var/folders/g6/zmr61rsj11q5zjmgf96rhvy0sm047k/T/ \
      -classpath $JH/commons-daemon-1.0.15.jar:$JH/start.jar \
      org.eclipse.jetty.start.Main jetty.state=$JH/jetty.state jetty-logging.xml jetty-started.xml

This could be used as a starting point for a proper production grade init script for starting and shutting down Jetty.
I submitted my code changes as issue #439672 in the Jetty project issue tracker and just received word that the change has been merged with the upstream code base, so you should be able to daemonize Jetty with Apache commons daemon jsvc out-of-the-box in the future.

Reference: Daemonizing JVM-based applications from our JCG partner Marko Asplund at the practicing techie blog.

Grails Goodness: Using Converter Named Configurations with Default Renderers

Sometimes we want to support different levels of detail in the output of our RESTful API for the same resource. For example, a default output with the basic fields and a more detailed output with all fields for a resource. The client of our API can then choose whether the default or detailed output is needed. One of the ways to implement this in Grails is using converter named configurations.

Grails converters, like JSON and XML, support named configurations. First we need to register a named configuration with the converter. Then we can invoke the use method of the converter with the name of the configuration and a closure with statements to generate output. The code in the closure is executed in the context of the named configuration (a short sketch of this manual usage follows at the end of this post). The default renderers in Grails, for example DefaultJsonRenderer, have a property namedConfiguration. The renderer will use the named configuration, if the property is set, to render the output in the context of the configured named configuration. Let's configure the appropriate renderers and register named configurations to support named configurations in the default renderers.

In our example we have a User resource with some properties. We want to support short and detailed output, where different properties are included in the resulting format. First we have our resource:

    // File: grails-app/domain/com/mrhaki/grails/User.groovy
    package com.mrhaki.grails

    import grails.rest.*

    // Set URI for resource to /users.
    // The first format in formats is the default
    // format. So we could use short if we wanted
    // the short compact format as default for this
    // resource.
    @Resource(uri = '/users', formats = ['json', 'short', 'details'])
    class User {

        String username
        String lastname
        String firstname
        String email

        static constraints = {
            email email: true
            lastname nullable: true
            firstname nullable: true
        }

    }

Next we register two named configurations for this resource in our BootStrap:

    // File: grails-app/conf/BootStrap.groovy
    class BootStrap {

        def init = { servletContext ->
            ...
            JSON.createNamedConfig('details') {
                it.registerObjectMarshaller(User) { User user ->
                    final String fullname = [user.firstname, user.lastname].join(' ')
                    final userMap = [
                        id      : user.id,
                        username: user.username,
                        email   : user.email,
                    ]
                    if (fullname) {
                        userMap.fullname = fullname
                    }
                    userMap
                }
                // Add for other resources a marshaller within
                // named configuration.
            }

            JSON.createNamedConfig('short') {
                it.registerObjectMarshaller(User) { User user ->
                    final userMap = [
                        id      : user.id,
                        username: user.username
                    ]
                    userMap
                }
                // Add for other resources a marshaller within
                // named configuration.
            }
            ...
        }
        ...
    }

Now we must register custom renderers for the User resource as Spring components in resources:

    // File: grails-app/conf/spring/resources.groovy
    import com.mrhaki.grails.User
    import grails.rest.render.json.JsonRenderer
    import org.codehaus.groovy.grails.web.mime.MimeType

    beans = {
        // Register JSON renderer for User resource with detailed output.
        userDetailsRenderer(JsonRenderer, User) {
            // Grails will compare the name of the MimeType
            // to determine which renderer to use. So we
            // use our own custom name here.
            // The second argument, 'details', specifies the
            // supported extension. We can now use
            // the request parameter format=details to use
            // this renderer for the User resource.
            mimeTypes = [new MimeType('application/vnd.com.mrhaki.grails.details+json', 'details')]

            // Here we specify the named configuration
            // that must be used by an instance
            // of this renderer. See BootStrap.groovy
            // for available registered named configurations.
            namedConfiguration = 'details'
        }

        // Register second JSON renderer for User resource with compact output.
        userShortRenderer(JsonRenderer, User) {
            mimeTypes = [new MimeType('application/vnd.com.mrhaki.grails.short+json', 'short')]

            // Named configuration is different for short
            // renderer compared to details renderer.
            namedConfiguration = 'short'
        }

        // Default JSON renderer as fallback.
        userRenderer(JsonRenderer, User) {
            mimeTypes = [new MimeType('application/json', 'json')]
        }
    }

We have defined some new mime types in grails-app/conf/spring/resources.groovy, but we must also add them to our grails-app/conf/Config.groovy file:

    // File: grails-app/conf/Config.groovy
    ...
    grails.mime.types = [
        ...
        short  : ['application/vnd.com.mrhaki.grails.short+json', 'application/json'],
        details: ['application/vnd.com.mrhaki.grails.details+json', 'application/json'],
    ]
    ...

Our application is now ready and configured. We mostly rely on Grails content negotiation to get the correct renderer for generating our output when we request a resource. Grails content negotiation can use the value of the request parameter format to find the correct mime type and then the correct renderer. Grails can also check the Accept request header or the URI extension, but for our RESTful API we want to use the format request parameter. If we invoke our resource with different formats we see the following results:

    $ curl -X GET http://localhost:8080/rest-sample/users/1
    {
        "class": "com.mrhaki.grails.User",
        "id": 1,
        "email": "hubert@mrhaki.com",
        "firstname": "Hubert",
        "lastname": "Klein Ikkink",
        "username": "mrhaki"
    }

    $ curl -X GET http://localhost:8080/rest-sample/users/1?format=short
    {
        "id": 1,
        "username": "mrhaki"
    }

    $ curl -X GET http://localhost:8080/rest-sample/users/1?format=details
    {
        "id": 1,
        "username": "mrhaki",
        "email": "hubert@mrhaki.com",
        "fullname": "Hubert Klein Ikkink"
    }

Code written with Grails 2.4.2.

Reference: Grails Goodness: Using Converter Named Configurations with Default Renderers from our JCG partner Hubert Ikkink at the JDriven blog.
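As a postscript: the converter's use method mentioned at the start of this post can also be invoked manually, without going through the renderers. An illustrative sketch (the controller and action are hypothetical, not part of the example project):

    import grails.converters.JSON

    class UserController {

        def show() {
            User user = User.get(params.id)

            // Render the response in the context of the 'details'
            // named configuration registered in BootStrap.groovy.
            JSON.use('details') {
                render user as JSON
            }
        }
    }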

Examining Red Hat JBoss BRMS deployment architectures for rules and events (part II)

(Article guest authored together with John Hurlocker, Senior Middleware Consultant at Red Hat in North America)

In this week's tips & tricks we will be slowing down and taking a closer look at possible Red Hat JBoss BRMS deployment architectures. When we talk about deployment architectures we are referring to the options you have to deploy a rules and/or events project in your enterprise. This is the actual runtime architecture that you need to plan for at the start of your design phases, determining for your enterprise and infrastructure what the best way would be to deploy your upcoming application. It will also most likely have an effect on how you design the actual application that you want to build, so being aware of your options should help make your projects a success.

This will be a multi-part series that introduces the deployment architectures in phases. You can catch up on last week's article before continuing with this week's look at the next two architectures.

The possibilities

A rule administrator or architect works with the application team(s) to design the runtime architecture for rules, and depending on the organization's needs the architecture could be any one of the following architectures or a hybrid of the designs below. In this series we will present four different deployment architectures and discuss one design time architecture, providing the pros and cons for each one to allow for evaluation of each one for your own needs. The basic components in these architectures shown in the accompanying illustrations are:

- JBoss BRMS server
- Rules developer / Business analyst
- Version control (GIT)
- Deployment servers (JBoss EAP)
- Clients using your application

Rules execution server

What you are doing in this architectural scenario is deploying JBoss BRMS as an application in its own environment. You can then expose it as a service (e.g. JMS, SOAP, etc.) so that any applications in your enterprise architecture can remotely execute rules and events.

[Illustration 1: Rules execution server]

This deployment architecture completely externalizes the entire JBoss BRMS rules and events component from your application development process, as shown in illustration 1. It then only requires an application to make an external call for rules or event decisions.

Pros

- Completely decoupled architecture
- Common implementation to set up and execute rules
- Upgrades to BRMS versions become easier with a single point of focus in your enterprise

Cons

- Possible performance implications due to the externalized component relative to your applications
- The execution server could be used by multiple applications, so a team will need to take ownership of this application and maintain it

Hybrid of the rules execution server

As a final example we present a hybrid architecture that leverages the previous basic rules execution server architecture and adds in the previously discussed (in part I) KieScanner component.

[Illustration 2: Hybrid architecture]

With this architecture you have the ability to develop applications that just leverage a remote call to execute rules and events decisions, but add in the mix of being able to update rules and events packages without changing the execution server service structure. As a refresher, remember that the JBoss BRMS API contains a KieScanner that monitors the rules repository for new rule package versions. Once a new version is available it will be picked up by the KieScanner and loaded into your application.
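As a reminder of what that wiring looks like against the KIE API, here is a hedged sketch; the group/artifact coordinates and the polling interval are placeholders, not values from the demo project:

    import org.kie.api.KieServices;
    import org.kie.api.builder.KieScanner;
    import org.kie.api.builder.ReleaseId;
    import org.kie.api.runtime.KieContainer;

    KieServices ks = KieServices.Factory.get();

    // Placeholder GAV; use the coordinates of your own rules kjar.
    ReleaseId releaseId = ks.newReleaseId("com.example", "my-rules", "LATEST");
    KieContainer kContainer = ks.newKieContainer(releaseId);

    // Poll the repository every 10 seconds for newly built rule packages;
    // new versions are loaded into the container automatically.
    KieScanner scanner = ks.newKieScanner(kContainer);
    scanner.start(10000L);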
The Cool Store demo project provides an example that demonstrates the usage of the JBoss BRMS KieScanner, with an example implementation showing how to scan your rule repository for the latest freshly built package. Illustration 2 shows how the rule execution server is now hosting a KieScanner-based component to monitor the rules and events packages for updates, which would then automatically be picked up for the next application call.

Pros

- Completely decoupled architecture
- Common implementation to set up and execute rules
- Upgrades to BRMS versions become easier with a single point of focus in your enterprise
- Less maintenance for the execution server component

Cons

- Possible performance implications due to the externalized component relative to your applications

Next up

Next time we will take a look at the design time architecture and the options for you to deploy your rules and events into your architecture.

Reference: Examining Red Hat JBoss BRMS deployment architectures for rules and events (part II) from our JCG partner Eric Schabell at Eric Schabell's blog.

Introduction to writing custom collectors in Java 8

Java 8 introduced the concept of collectors. Most of the time we barely use factory methods from the Collectors class, e.g. collect(toList()), toSet() or maybe something more fancy like counting() or groupingBy(). Not many of us actually bother to look at how collectors are defined and implemented. Let's start by analysing what Collector<T, A, R> really is and how it works.

Collector<T, A, R> works as a "sink" for streams – the stream pushes items (one after another) into the collector, which should produce some "collected" value in the end. Most of the time this means building a collection (like toList()) by accumulating elements, or reducing the stream into something smaller (e.g. the counting() collector that merely counts elements). Every collector accepts items of type T and produces an aggregated (accumulated) value of type R (e.g. R = List<T>). The generic type A simply defines the type of the intermediate mutable data structure that we are going to use to accumulate items of type T in the meantime. Type A can, but doesn't have to, be the same as R – in simple words, the mutable data structure that we use to collect items from the input Stream<T> can be different from the actual output collection/value. That being said, every collector must implement the following methods:

    interface Collector<T,A,R> {
        Supplier<A> supplier()
        BiConsumer<A,T> accumulator()
        BinaryOperator<A> combiner()
        Function<A,R> finisher()
        Set<Characteristics> characteristics()
    }

- supplier() returns a function that creates an instance of the accumulator – the mutable data structure that we will use to accumulate input elements of type T.
- accumulator() returns a function that will take the accumulator and one item of type T, mutating the accumulator.
- combiner() is used to join two accumulators together into one. It is used when the collector is executed in parallel, splitting the input Stream<T> and collecting the parts independently first.
- finisher() takes an accumulator A and turns it into a result value, e.g. a collection, of type R.

All of this sounds quite abstract, so let's do a simple example. Obviously Java 8 doesn't provide a built-in collector for ImmutableSet<T> from Guava. However, creating one is very simple. Remember that in order to iteratively build an ImmutableSet we use ImmutableSet.Builder<T> – this is going to be our accumulator.

    import com.google.common.collect.ImmutableSet;

    import java.util.EnumSet;
    import java.util.Set;
    import java.util.function.BiConsumer;
    import java.util.function.BinaryOperator;
    import java.util.function.Function;
    import java.util.function.Supplier;
    import java.util.stream.Collector;

    public class ImmutableSetCollector<T>
            implements Collector<T, ImmutableSet.Builder<T>, ImmutableSet<T>> {

        @Override
        public Supplier<ImmutableSet.Builder<T>> supplier() {
            return ImmutableSet::builder;
        }

        @Override
        public BiConsumer<ImmutableSet.Builder<T>, T> accumulator() {
            return (builder, t) -> builder.add(t);
        }

        @Override
        public BinaryOperator<ImmutableSet.Builder<T>> combiner() {
            return (left, right) -> {
                left.addAll(right.build());
                return left;
            };
        }

        @Override
        public Function<ImmutableSet.Builder<T>, ImmutableSet<T>> finisher() {
            return ImmutableSet.Builder::build;
        }

        @Override
        public Set<Characteristics> characteristics() {
            return EnumSet.of(Characteristics.UNORDERED);
        }
    }

First of all, look carefully at the generic types. Our ImmutableSetCollector takes input elements of type T, so it works for any Stream<T>. In the end it will produce an ImmutableSet<T> – as expected. ImmutableSet.Builder<T> is going to be our intermediate data structure.

- supplier() returns a function that creates a new ImmutableSet.Builder<T>. If you are not that familiar with lambdas in Java 8, ImmutableSet::builder is a shorthand for () -> ImmutableSet.builder().
- accumulator() returns a function that takes a builder and one element of type T. It simply adds said element to the builder.
- combiner() returns a function that will accept two builders and turn them into one by adding all elements from one of them into the other – and returning the latter.
- finisher() returns a function that will turn an ImmutableSet.Builder<T> into an ImmutableSet<T>. Again, this is a shorthand syntax for: builder -> builder.build().
- Last but not least, characteristics() informs the JDK what capabilities our collector has. For example, if ImmutableSet.Builder<T> were thread-safe (it isn't), we could declare Characteristics.CONCURRENT as well.

We can now use our custom collector everywhere using collect():

    final ImmutableSet<Integer> set = Arrays
        .asList(1, 2, 3, 4)
        .stream()
        .collect(new ImmutableSetCollector<>());

However, creating a new instance is slightly verbose, so I suggest creating a static factory method, similar to what the JDK does:

    public class ImmutableSetCollector<T>
            implements Collector<T, ImmutableSet.Builder<T>, ImmutableSet<T>> {

        //...

        public static <T> Collector<T, ?, ImmutableSet<T>> toImmutableSet() {
            return new ImmutableSetCollector<>();
        }
    }

From now on we can take full advantage of our custom collector by simply typing: collect(toImmutableSet()) (a more compact alternative using the JDK's Collector.of factory is sketched after this post). In the second part we will learn how to write more complex and useful collectors.

Update

@akarazniewicz pointed out that collectors are just a verbose implementation of folding. With my love and hate relationship with folds, I have to comment on that. Collectors in Java 8 are basically an object-oriented encapsulation of the most complex type of fold found in Scala, namely GenTraversableOnce.aggregate[B](z: ⇒ B)(seqop: (B, A) ⇒ B, combop: (B, B) ⇒ B): B. aggregate() is like fold(), but requires an extra combop to combine two accumulators (of type B) into one. Comparing this to collectors, the parameter z comes from the supplier(), the seqop reduction operation is the accumulator() and combop is the combiner(). In pseudo-code we can write:

    finisher(
        seq.aggregate(collector.supplier())
                     (collector.accumulator(), collector.combiner()))

GenTraversableOnce.aggregate() is used when concurrent reduction is possible – just like with collectors.

Reference: Introduction to writing custom collectors in Java 8 from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog.
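As a postscript to the article above: the JDK also ships the Collector.of(...) factory, so the same collector can be expressed without a named class. A minimal sketch of that alternative (the utility class name is an invention for the example):

    import com.google.common.collect.ImmutableSet;

    import java.util.stream.Collector;

    public final class ImmutableSetCollectors {

        // Same supplier/accumulator/combiner/finisher/characteristics as the
        // hand-written class above, passed directly to Collector.of().
        public static <T> Collector<T, ImmutableSet.Builder<T>, ImmutableSet<T>> toImmutableSet() {
            return Collector.of(
                    ImmutableSet::builder,                        // supplier
                    ImmutableSet.Builder::add,                    // accumulator
                    (left, right) -> left.addAll(right.build()),  // combiner
                    ImmutableSet.Builder::build,                  // finisher
                    Collector.Characteristics.UNORDERED);
        }
    }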

Agile Myth #2: “Agile is About Project Management”

This is my third post in a multi-part series entitled "Agile Myths and Misconceptions". It's based on the talk I gave at the first PSIA Softech Philippine Software Engineering Conference. I am striving to correct 12 common misconceptions about Agile Software Development.

Agile has equal or more emphasis on engineering than on project management. Just check out the authors of the Agile Manifesto – a large number of them are thought leaders in nuts-and-bolts software engineering: Kent Beck is the creator of JUnit and Test-Driven Development, Martin Fowler has written several books on design patterns and refactoring, Robert Martin has written several books on object-oriented design and code quality... If you check out Extreme Programming, which was the first Agile process to gain popularity and precedes Agile itself, it leans more towards engineering practices, such as TDD, Continuous Integration, Coding Standards and Pair Programming. If you attended an early Agile training course, it was actually a programming workshop where you paired up with another programmer around a workstation and wrote code!

Most of the Agile authors were aware that even if you do the project management side well, if you ignore the engineering side, a project will eventually be in a quagmire. Consider what Martin Fowler said in his blog:

There's a mess I've heard about with quite a few projects recently. It works out like this:

- They want to use an agile process, and pick Scrum
- They adopt the Scrum practices, and maybe even the principles
- After a while progress is slow because the code base is a mess

What's happened is that they haven't paid enough attention to the internal quality of their software. If you make that mistake you'll soon find your productivity dragged down because it's much harder to add new features than you'd like. You've taken on a crippling Technical Debt and your scrum has gone weak at the knees.

Here's an illustration of what Martin Fowler is saying: initially, a team's velocity increases with each iteration, but as the project progresses, the team's velocity starts to steadily degrade, even if people are working just as hard or harder.

Here's a breakdown of what's happening to the team. Let's pretend that a team's capacity is like a pipe. Requirements go in one end, some development happens inside, and then features come out the other end. Just like any pipe, the team's capacity is finite. Initially, all of the team's capacity is used to turn requirements into features, but later on, naturally, some bugs are found, so part of the team's capacity is now taken up by bug fixing.

That situation is still fine, but if the team neglects the engineering practices, you end up with Technical Debt. In a nutshell, Technical Debt is difficulty in working with code where the design and code quality are bad. The code is very difficult to understand, it's hard to find bugs, a lot of things need to be modified for every change, and the code is very fragile. It's a lot like real debt – when you don't pay your credit card bill, the interest payments eat up your capacity to pay for day-to-day things like food and rent. With technical debt, if the team doesn't take the effort to clean up their code, write automated tests, integrate often, apply proper design approaches, measure code quality and follow other engineering practices, eventually the code becomes harder and harder to work with, and just wrestling with the code takes up more and more of the team's capacity, just like interest from debt takes up one's capacity to spend.

What's more, since the code is more fragile, the project has more bugs, and so even more of the team's capacity is used up in bug fixing. Just like real debt, if technical debt is not addressed, it just grows – the more code you add to the system, the harder and harder it gets to work with. Eventually, the majority of the team's capacity is taken up by just technical debt and bug fixing. Very little actual creation of new features comes out of the development team, if any at all.

I'm not exaggerating in any way. One of the clients I consulted for had development teams in three countries – with none of them producing any new features for months! The teams were bogged down with such a bad codebase that they couldn't change any code without creating a large number of bugs.

Again, the way you avoid this situation is through proper engineering practices. If you're just starting out with engineering practices, I think a good place to start would be coding practices. If you're a Java programmer, a good book for this is Robert Martin's "Clean Code". After that, you might want to move on to learning Test-Driven Development. My recommended TDD book for Java developers is "Test-Driven" by Lasse Koskela. After that, read up on proper practices for version control and Continuous Integration. For those of us working on legacy code, which is the majority I would think, a good reference is "Working Effectively with Legacy Code" by Michael Feathers.

For those further along, you might want to start exploring approaches to design. Domain-model design is a good place to start. Some recommended references are "Applying UML and Patterns" by Craig Larman, "The Data Model Resource Book" by Len Silverston, "Domain-Driven Design" by Eric Evans, and "Implementing Domain-Driven Design" by Vaughn Vernon. For the other areas of design, you can check out the various design patterns books around, such as Head First Design Patterns, Patterns of Enterprise Application Architecture, Enterprise Integration Patterns and Service Design Patterns.

The next myth I will tackle is Agile Myth #3: "Agile is Short Milestones".

Reference: Agile Myth #2: "Agile is About Project Management" from our JCG partner Calen Legaspi at the Calen Legaspi blog.

Complexity is the Excuse

When I speak to people about how it is possible to continuously deliver customer value with near zero issues, I usually get laughed at. After I tell them that there is nothing to laugh at, people start challenging me on how I integrate with other systems, how I manage defects, how I deal with changing requirements, how I manage regression issues, and how I cope with context switching and other similar issues.

Luckily enough I have empirical data, and my answers are based on experience and not some book or model I read about somewhere, so after a while I manage to get people moving from laughing at me to at least starting to be curious. It has to be said, people become curious but deep inside they still don't believe me and still think I am a big fool. How could I blame them? I would have done the same a few years back. I know that people need to prove it for themselves to be able to believe it; I have no problem with that.

While I manage to explain my way of dealing with defects, changing requirements, regression, and context switching, until now I haven't been able to answer the biggest question of them all, the one that every conversation ends up with eventually: how do you deal with extremely complex systems that need to scale up?

I have been thinking about this for a while now, and the more I think about it the more I become convinced that complexity is the excuse. Complexity exists when we are not able to prioritise the value to deliver (we don't know what we really want). Complexity exists when we are not able to understand and describe the system we are building. And finally, complexity is a nice excuse for not doing our job properly as software engineers and having something to blame.

Reduce complexity and stop making excuses:

1. You want to deliver customer value, OK. You don't need to deliver everything on day 1. Sit down with your business partners and identify the highest value added feature and focus on that one. If asked for bells and whistles, say "we will do it later and only if we really need to"; chances are that when you have finished the first feature you will have learned that you need something different anyway.
2. When deciding how to implement such a feature, look for the simplest and cheapest solution; do NOT future proof. By future proofing you will be adding complexity, and guess what? YAGNI. Measure the success of your feature and feel free to change direction; failure is learning.
3. Once you have identified the value to be delivered, make sure you break down its own complexity. If a user story or unit of work or whatever you call it has more than one happy path, then it is too complex; break it down into 2 or more units of work. If you start working on something and you discover it is more complex than you had thought, then stop and break it down into less complex units. If you keep going without saying anything, you will hide complexity and sooner or later you are bound to mess it up.
4. Scaling up is the wrong answer to the false complexity question. Chances are you don't need to scale up at all: read 1 and 2 again and again, and you will find out that you don't need as many resources as you thought you would. Scaling up, most of the time, is the easiest, most expensive and laziest approach to fighting complexity.

For doing 1 and 2 in a structured manner I strongly recommend an approach called Impact Mapping, devised by Gojko Adzic – it works. For doing 3, click here. For doing 4, use your head.

TL;DR: stop blaming complexity when you don't understand what you are building.

Reference: Complexity is the Excuse from our JCG partner Augusto Evangelisti at the mysoftwarequality blog.

Writing Tests for Data Access Code – Green Build Is Not Good Enough

The first thing that we have to do before we can start writing integration tests for our data access code is to decide how we will configure our test cases. We have two options: the right one and the wrong one. Unfortunately, many developers make the wrong choice. How can we avoid making the same mistake? We can make the right decisions by following these three rules:

Rule 1: We Must Test Our Application

This rule seems obvious. Sadly, many developers use a different configuration in their integration tests because it makes their tests pass. This is a mistake! We should ask ourselves this question: do we want to test that our data access code works when we use the configuration that is used in the production environment, or do we just want our tests to pass? I think that the answer is obvious. If we use a different configuration in our integration tests, we are not testing how our data access code behaves in the production environment. We are testing how it behaves when we run our integration tests. In other words, we cannot verify that our data access code works as expected when we deploy our application to the production environment. Does this sound like a worthy goal?

If we want to test that our data access code works when we use the production configuration, we should follow these simple rules:

- We should configure our tests by using the same configuration class or configuration file which configures the persistence layer of our application.
- Our tests should use the same transactional behavior as our application.

These rules have two major benefits:

- Because our integration tests use exactly the same configuration as our application and share the same transactional behavior, our tests help us verify that our data access code works as expected when we deploy our application to the production environment.
- We don't have to maintain different configurations. In other words, if we make a change to our production configuration, we can test that the change doesn't break anything without making any changes to the configuration of our integration tests.

Rule 2: We Can Break Rule One

There are no universal truths in software development. Every principle or rule is valid only under certain conditions. If the conditions change, we have to re-evaluate these principles. This applies to the first rule as well. It is a good starting point, but sometimes we have to break it. If we want to introduce a test specific change to our configuration, we have to follow these steps:

1. Figure out the reason for the change.
2. List the benefits and drawbacks of the change.
3. If the benefits outweigh the drawbacks, we are allowed to change the configuration of our tests.
4. Document the reason why the change was made. This is crucial because it gives us the possibility to revert the change if we find out that making it was a bad idea.

For example, we might want to run our integration tests against an in-memory database when these tests are run in a development environment (aka the developer's personal computer) because this shortens the feedback loop. The only drawback of this change is that we cannot be 100% sure that our code works in the production environment, because it uses a real database. Nevertheless, the benefits of this change outweigh its drawbacks because we can (and we should) still run our integration tests against a real database. A good way to do this is to configure our CI server to run these tests.
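To make rule one concrete, here is a minimal sketch of what this can look like with Spring's test support; PersistenceContext and the test class are hypothetical names, not from this post. The key point is that the test imports the application's own production configuration class instead of a test-only copy:

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.springframework.test.context.ContextConfiguration;
    import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

    // Reuses the SAME configuration class that configures the persistence
    // layer of the application in production (hypothetical class name).
    @RunWith(SpringJUnit4ClassRunner.class)
    @ContextConfiguration(classes = PersistenceContext.class)
    public class TodoRepositoryIntegrationTest {

        @Test
        public void findById_returnsPersistedTodo() {
            // Exercise the repository here; because the context is built from
            // the production configuration, the test verifies the behavior the
            // deployed application will actually have.
        }
    }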
This is of course a very simple (and maybe a bit naive) example, and often the situations we face are much more complicated. That is why we should follow this guideline: if in doubt, leave test config out.

Rule 3: We Must Not Write Transactional Integration Tests

One of the most dangerous mistakes that we can make is to modify the transactional behavior of our application in our integration tests. If we make our tests transactional, we ignore the transaction boundary of our application and ensure that the tested code is executed inside a transaction. This is extremely harmful because it only helps us hide possible errors instead of revealing them. If you want to know how transactional tests can ruin the reliability of your test suite, you should read the blog post titled "Spring pitfalls: transactional tests considered harmful" by Tomasz Nurkiewicz. It provides many useful examples of the errors which are hidden if you write transactional integration tests. Once again we have to ask ourselves this question: do we want to test that our data access code works when we use the configuration that is used in the production environment, or do we just want our tests to pass? And once again, the answer is obvious.

Summary

This blog post has taught us three things:

- Our goal is not to verify that our data access code is working correctly when we run our tests. Our goal is to ensure that it is working correctly when our application is deployed to the production environment.
- Every test specific change creates a difference between our test configuration and production configuration. If this difference is too big, our tests are useless.
- Transactional integration tests are harmful because they ignore the transactional behavior of our application and hide errors instead of revealing them.

That is a pretty nice summary. We did indeed learn those things, but we learned something much more important as well. The most important thing we learned from this blog post is this question: do we want to test that our data access code works when we use the configuration that is used in the production environment, or do we just want our tests to pass? If we keep asking this question, the rest should be obvious to us.

Reference: Writing Tests for Data Access Code – Green Build Is Not Good Enough from our JCG partner Petri Kainulainen at the Petri Kainulainen blog.

The 4 Levels of Freedom For Software Developers

For quite some time now I’ve been putting together, in my mind, what I think are the four distinct levels that software developers can go through in trying to gain their “freedom.” For most of my software development career, when I worked for a company, as an employee, I had the dream of someday being free. I wanted to be able to work for myself. To me, that was the ultimate freedom.But, being naive, as I was, I didn’t realize that there were actually different levels of “working for yourself.” I just assumed that if you were self-employed, you were self-employed. It turns out most software developers I have talked to about this topic have the same views I did – before I knew better. I’ve written in the past about how to quit your job, but this post is a bit different. This post is not really about how to quit your job, but the different levels of self-employment you can attain, after you do so. The four levels The four levels I am about to describe are based on the level of freedom you experience in your work; they have nothing to do with skill level. But, generally we progress up these levels as we seek to, and hopefully succeed, in gaining more freedom.  So, most software developers start at level one, and the first time they become self-employed, it is usually at level three – although it is possible to skip straight to three. Here is a quick definition of the levels (I’ll cover them each in detail next.)Employed – you work for someone else Freelancer – you are your boss, but you work for many someone elses Product creator – you are your own boss, but your customers determine what you work on Financially free – you work on what you want when you want; you don’t need to make moneyI started my career at level one and bounced back and forth between level two and level one for quite awhile before I finally broke through to level three. I’m currently working on reaching level four – although, I’ve found that it is easy to stay at level three even though you could move to four. Along the way, I’ve found that at each level I was at, I assumed I would feel completely free when I reached the level above. But, each time I turned out to be wrong. While each level afforded me more general freedom, each level also seemed to not be what I imagined it would be. Level one: employedLike I said, most software developers start out at this level. To be honest, most software developers stay at this level – and don’t get me wrong, there is nothing wrong with staying there – as long as you are happy. At this level, you don’t have much freedom at all, because you basically have to work on what you are told to work on, you have to work when you are told and you are typically tied to a physical location. (Throughout this post, you’ll see references to these three degrees of freedom.) Working for someone else isn’t all that bad. You can have a really good job that pays really well, but in most cases you are trading some amount of security for a certain amount of bondage. You are getting a stable paycheck on a regular interval, but at the cost of a large portion of your freedom. Now, that doesn’t mean that you can’t have various levels of traditional employment. I think there are mini-levels of freedom that exist even when you are employed by someone else. For example, you are likely to be afforded more freedom about when you start and leave work as you move up and become more senior at a job. 
You are also likely to be given a bit more autonomy over what you do – although Agile methodologies may have moved us back in that regard. You might even find freedom from location if you are able to find a job that allows you to telecommute. In my quest for more freedom, I actually made a trade of a considerable amount of pay in order to accept a job where I would have the freedom of working from home. I erroneously imagined that working from home would be the ultimate freedom and that I would be content working for someone else the rest of my career, so long as I could do it from home. (Don’t get me wrong, working from home has its perks, but it also has its disadvantages as well. When I worked from home, I felt more obligated to get more work done to prove that I wasn’t just goofing off. I also felt that my work was never done.) Now, like I said before, more people will stay at level one and perhaps move around, gaining more freedom through things like autonomy and a flexible working schedule, but there are definite caps on freedom at this level. No one is going to pay you to do what you want and tell you that you can disappear whenever you want to. You are also going to have your income capped. You can only make so much money working for someone else and that amount is mostly fixed ahead of time. Level two: freelancerSo, this is the only other level that I had really imagined existed for a software developer, for most of my career. I remember thinking about how wonderful it would be to work on my own projects with my own clients. I imagined that as a freelancer I could bid on government contracts and spend a couple of years doing a contract before moving on to the next. I also imagined an alternative where I worked for many different clients, working on different jobs at different times – all from the comfort of my PJs. When most software developers talk about quitting their jobs and becoming self-employed, I think this is what they imagine. They think, I like I did, that this is the ultimate level of freedom. It didn’t take me very long as a freelancer to realize that there really wasn’t much more freedom, working as a freelancer, than there was working for someone else. First of all, if you have just one big client, like most starting freelancers do, you are basically in a similar situation as what you are when you are employed – the big difference is that now you can’t bill for those hours you were goofing off. You will likely have more freedom about your working hours and days, but you’ll be confined to the project your client has hired you to work on and you might even have to come into their office to do the work. This doesn’t mean that you don’t have more freedom though, it is just a different kind. If you have multiple clients, you have more control over your life and what you work on. You can set your own rate, you can set your own hours and you can potentially turn down work that you don’t want to do – although, in reality, you probably won’t be turning down anything – especially if you are just starting out. Don’t get me wrong, it is nice to have your own company and to be able to bill your clients, instead of being compelled to work for one boss who has ultimate control over your life, but freelancing is a lot of work and on a daily basis it may be difficult to actually feel more free than you would working for someone else. Given the choice of just doing freelancing work or working for someone else, I’d rather just take the steady paycheck. 
I wouldn’t have said this five years ago, but I know now that freelancing is difficult and stressful. I really wouldn’t go down this road unless you know this is what you want to do, or you are using it as a stepping stone to get somewhere else.

From a pay perspective, a freelancer can make a lot more money than most employees. I currently do freelance work and I don’t accept any work for less than $300 an hour. Now, I didn’t start at that rate – when I first started out, $100 an hour was an incredible rate – but I eventually worked my way up to it. (If you want to find out how, check out my How to Market Yourself as a Software Developer package.) The big thing, though, is that your pay is not capped. The more you charge and the more hours you work, the more you make. You are only limited by the combination of those two things.

Level three: product creator

This level is where things get interesting. When I was mostly doing freelancing, I realized that my key mistake was not in working for someone else, but in trading hours for dollars. I realized that as a freelancer my life was not as beautiful as I had imagined it. I was not really free, because if I wasn’t working, I wasn’t getting paid. I actually ended up going back to full-time employment in order to rethink my strategy.

The more I thought about it, the more I realized that in order to really gain the kind of freedom I wanted, I would need to create some kind of product that I could sell, or some kind of service that would generate income for me all the time without me having to work all the time.

There are many ways to reach this level, but perhaps the most common is to build some kind of software or software as a service (SaaS) that generates income for you. You can then make money from selling that product, and you get to work on it when and how you see fit. You can also reach this level by selling digital products of some sort. I was able to reach it through a combination of this blog, the mobile apps I built, royalty-generating courses I created for Pluralsight, and my own How to Market Yourself as a Software Developer package. (Yes, I have plugged it twice now, but hey, this is my blog – and this is how I make money.)

You have quite a bit of freedom at this level. You no longer have any real boss. There is no pointy-haired boss telling you what to work on, and you don’t have clients telling you what projects to work on either. You can most likely work from anywhere you want and whenever you want. You can even disappear for months at a time – so long as you figure out a way to handle support.

Now, that doesn’t mean that everything is peaches and roses at this level either. For one thing, I imagined that if I was creating products, I would get to work on exactly what I wanted to work on. This is far from the truth. I have a large degree of control over what I choose to work on and create, but because I am bound by the need to make money, I have to give a large portion of that control over to the market. I have to build what my customers will pay for.

This might not seem like a big deal, but it is. I’ve always had the dream of writing code and working on my own projects. I dreamed that being a product creator and making money from my own products would give me that freedom. To some degree it does, but I also have to pay careful attention to what my audience and customers want, and I have to put my primary focus on building those things. This level is also quite stressful, because everything depends on you.
You have to be successful to get paid. When you are an employee, all you have to do is show up. When you are a freelancer, you just have to get clients and do the work – you get paid for the work you put in, not the results. When you are a product creator, you might spend three months working on something and not make a dime. No one cares how much work you did; only results count.

As far as income potential goes, there is no cap here. You might struggle to make just enough to live, but if you are successful, there is no limit to how much you could earn, since you are not bound by time. At this level you are no longer trading hours for dollars.

To me, it isn’t worth striving for level two; it is better to just work for someone else until you can reach level three, because this level of freedom is one that actually makes a big difference in your life. You still may not be able to work on just what you want to work on, but at least at this point – once you are successful – all the other areas of your life start to become much more free.

Level four: financially free

I couldn’t come up with a good name for this level, but this is the level where you no longer have to worry about making money. One thing I noticed when I finally reached level three was that a large portion of what was holding me back from doing exactly what I wanted to do was the need to generate income. Now, it’s true that you can work on what you want to work on and make money doing it, but often the need to generate income tends to influence what you work on and how you work on it.

For example, I’d really like to create a video game. I’ve always dreamed of doing a large game development project. But I know it isn’t likely to be profitable. As long as I am worrying about income, my freedom is going to be limited to some degree. If I don’t have passive income coming in that is more than enough to sustain me, I can’t just quit the projects that do make me money and start writing code for a video game – well, I could, but it wouldn’t be smart, and I’d feel pretty guilty about it.

So, in my opinion, the highest form of freedom a software developer can achieve is being financially free. What do I mean by financially free, though? It basically means that you don’t have to worry about cash. Perhaps you sold your startup for several million dollars, or you have passive income coming in from real estate or other investments that more than provides for your daily living needs. (For some good information on how this might work, I recommend starting with the book “Rich Dad Poor Dad”.)

At this level of freedom, you can basically do what you want. You can create software that interests you simply because it interests you – you aren’t worried about profitability. Want to create an Android app, just because? Go ahead. Want to learn a new programming language because you think it would be fun? Go for it.

This has always been the level of freedom I have secretly wanted. I never wanted to sit back and do nothing, but I’ve always wanted to work on what interested me, and only what interested me. At every other level that I thought would have this freedom, I realized it didn’t; there was always something else controlling what I worked on, be it my boss, my clients, or my customers.

Now, this doesn’t mean that you can’t still make money from your projects. In fact, paradoxically, I believe that if you can get to this stage, you have the potential to make the most money.
Once you start working on what you want to work on, you are likely to put much more passionate work into it, and it is very likely to be of high value. This is where programming becomes more like art. I don’t have any proof of this, of course, but I suspect that when you don’t care about making money, because you are just doing what you love, that is when you make the most money.

Don’t get me wrong, you might be able to focus on doing what you love even if you aren’t making any money. I know plenty of starving artists do – or at least they tell themselves they do – but I can’t do it. I’ve tried, but I always feel guilty and stressed about the fact that what I am working on isn’t profitable. In my opinion, you really have to be financially free to experience true creative freedom.

I’m actually working on getting to this level. Technically, I could say I am there now, but I am still greatly influenced by profitability. Although now I am not choosing my projects solely on the basis of what will make the most money; as my passive income increases, I am turning down more and more projects and opportunities that don’t align with what I want to do, and transitioning to working only on what interests me.

What can you gather from all this? Well, the biggest thing is that freedom has different levels and that, perhaps, you don’t want to be a freelancer after all. I think many software developers assume working for themselves as freelancers will give them the ultimate freedom. They don’t realize that they’ll only be able to work on exactly what they want to work on when they are actually financially free.

So, my advice to you is this: if you want full creative control over your life and what you work on, work on becoming financially free. If you want a high degree of autonomy in most areas of your life, try to develop and sell products. If you are happy just being your own boss, even if you have to essentially take orders from clients, freelancing might be the road for you. And, if all of this just seems like too high a price to pay, you might want to stay where you are and keep collecting your weekly paychecks – nothing wrong with that.

Reference: The 4 Levels of Freedom For Software Developers from our JCG partner John Sonmez at the Making the Complex Simple blog.