
The Agile Tester, a curious and empathetic animal

The agile tester (ˈadʒʌɪl ˈtɛstə) is a mammal, a member of the family “Exploratoris”. He lives in the wild in small groups named cross-functional agile teams.

Skills

Besides communication and technical skills, his main traits are curiosity and empathy [1]. Curiosity helps the agile tester find opportunities to improve the product. The agile tester questions everything. Empathy allows the agile tester to interact and collaborate smoothly with the other members of the agile team.

Mission

The agile tester has an overwhelming interest in delighting customers. His customers are the product owner and the final users. He strives to balance his efforts between making his company successful and delighting the final users, delivering a continuous flow of value to both.

Life

The agile tester spends most of his time having numerous conversations with other members of the team [2]. He will often speak to and question the product owner and the final user in the quest for real value. The agile tester knows that if he doesn’t understand perfectly what the value to be delivered is, he won’t be able to do his job. He builds a strong understanding of the business in order to be able to help the product owner identify more valuable solutions. This aspect is extremely important to the agile tester; he strives to contribute to building a better product. Often he will be found having a conversation with the product owner and a developer. This group of animals, also known as “The 3 Amigos” [3], feed off each other’s knowledge, different perspectives and passion for value to resolve all sorts of problems and design lean solutions. Other times he will be seen pair testing while coaching his partner developer, or supporting a developer writing checks, or even writing some checks himself. Some agile testers have been seen speaking to final users to better understand their experience with the application.
He is also sometimes found alone at his desk testing, softly talking to the application under test. During the night the agile tester studies and researches his craft; sometimes he blogs, and if you watch attentively you might spot a lone agile tester engaging in passionate testing conversations on Twitter or in a bar in front of a beer.

Social life

The agile tester’s life would not be possible without the team. He works and lives with the team and for the team; the team is an organism that functions with the agile tester [4]. The agile tester is a pragmatic animal and doesn’t like the company of moaners that do nothing to improve their condition. The moaner is the nemesis of the agile tester [16]. The agile tester believes in sustainable development and will not work overtime except in very special circumstances. He will push for process changes to remove other overtime occurrences.

The agile tester and waste

In general the agile tester refuses the concept of waste. He will not under any circumstance do something “because that’s how we do things here” or “because the boss said so”. He will ask “why?” [5]. If he cannot get an answer that clearly explains what the value is, he won’t do it. He’d rather be fired than spend time doing things that don’t produce value. On this subject he is known for using lean documentation; he generally enjoys documenting the application he is helping build through executable specifications [6]. He rejects the waste of bureaucracy and signoffs [7]; in fact it is common to see agile testers signing off by high five [8] in groups of Three Amigos rather than negotiating contracts. The agile tester understands that producing, finding and fixing bugs is a wasteful activity, and he will strive to help the agile team prevent them and do things right the first time as much as humanly possible [9].
The agile tester not only understands this, but coaches the developer members of the team on this concept and trains them in techniques that help them prevent bugs. The agile tester believes that his skills are wasted performing regression checks; in fact he employs tools for this menial task [10]. The agile tester prefers cards and conversations to large documents. He plans his activity just in time and helps build the next parts of the product using discovery. Some agile testers believe predicting the future is a waste of time and prefer building a predictable process rather than estimating; they have been known for insistently using the tag #NoEstimates. Some extremist agile testers have even gone so far as to say that bug management is waste, and have removed bug management tools from their organizations with a positive impact [11].

Education

The agile tester is a continuous learner. He believes in agile principles, and he studies the impacts of agile software development on his industry, trying to learn new approaches to improve his own company and the whole agile community. He believes that continuous improvement (as in kaizen) means everybody in the agile team is empowered to drive it. He helps other team members bring out their solutions and supports them in convincing the team to try them and measure results. He does not believe in best practices but in good practices that can be improved [12].

References:

[1] Get In Shape to become a better Agile Tester
[2] [6] [9] When Something Works Share it
[3] George Dinwiddie on the Three Amigos
[4] Cross-dysfunctional teams
[5] Be lean ask Why?
[7] The Cover your Ass manifesto
[8] Sign off by High Five
[10] Test Automation, Help or Hindrance?
[11] How I stopped logging bugs and started living happy
[15] 5 Reasons why best practices are bad for you
[16] Stop Moaning, be the change

Reference: The Agile Tester, a curious and empathetic animal from our JCG partner Augusto Evangelisti at the mysoftwarequality blog.

Write in Ceylon, Deploy as OSGI, use in JEE

… or how to use Ceylon inside Java EE application servers. The Ceylon language is inherently modular, and is shipped with a complete infrastructure that allows leveraging this modularity out of the box. However, Ceylon is not captive of its own infrastructure. After the Java and JavaScript interoperability efforts, the 1.1.0 version has brought out-of-the-box compatibility with OSGI, which enables running Ceylon code inside many other containers. Every module archive produced by the Ceylon compiler contains OSGI headers in its MANIFEST file that describe the module as it should be seen by OSGI containers. Containers tested so far are:

- Apache Felix 4.4.1
- Oracle Glassfish v4.1
- Equinox platform
- JBoss WildFly 8.0.0.alpha3 (with JBossOSGi installed)

Of course, the Ceylon distribution and SDK modules should first be added to the OSGI container as OSGI bundles. But instead of writing long explanations here, let me direct you to some concrete examples, provided with the required instructions, in the following repository: https://github.com/davidfestal/Ceylon-Osgi-Examples/ For the moment it contains a single example that, though very simple, will give you the main steps to get started. It also shows the use of a Ceylon module totally outside Ceylon’s standard infrastructure, even outside the JBoss world: in a web application servlet running on a Glassfish v4.1 application server. But of course you should be able to run it inside other OSGI-enabled application servers or containers. In the next examples we’ll try to go further and do more interesting things such as providing services, using Ceylon annotations (which are compatible with Java annotations), or using OSGI services. Please report any problem you might encounter while testing, and feel free to submit pull requests for any other successful use cases you might have built.
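To make the MANIFEST headers mentioned above concrete, here is a sketch of what such OSGI metadata looks like in a module archive's META-INF/MANIFEST.MF. The header names are standard OSGI ones, but the module name and the exact values below are invented for illustration, not actual compiler output:

```
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.hello
Bundle-Version: 1.0.0
Export-Package: com.example.hello;version="1.0.0"
Require-Bundle: ceylon.language;bundle-version="1.1.0"
```

An OSGI container reads these headers to resolve and wire the Ceylon module's dependencies just like any other bundle's.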
Looking forward to your remarks, and to the time to write the following examples.

Reference: Write in Ceylon, Deploy as OSGI, use in JEE from our JCG partner David Festal at the Ceylon Team blog.

New Custom Control: TaskProgressView

I have written a new custom control and committed it to the ControlsFX project. It is a highly specialized control for showing a list of background tasks, their current status and progress. This is actually the first control I have written for ControlsFX just for the fun of it, meaning I do not have a use case for it myself (but surely one will come eventually). The screenshot below shows the control in action.

If you are already familiar with the javafx.concurrent.Task class you will quickly grasp that the control shows the value of its title, message, and progress properties. But it also shows an icon, which is not covered by the Task API. I have added an optional graphic factory (a callback) that will be invoked for each task to look up a graphic node that will be placed on the left-hand side of the list view cell that represents the task. A video showing the control in action can be found here.

The Control

Since this control is rather simple I figured it would make sense to post the entire source code for it so that others can study it. The following listing shows the code of the control itself. As expected it extends the Control class and provides an observable list for the monitored tasks and an object property for the graphics factory (the callback).

package org.controlsfx.control;

import impl.org.controlsfx.skin.TaskProgressViewSkin;

import javafx.beans.property.ObjectProperty;
import javafx.beans.property.SimpleObjectProperty;
import javafx.collections.FXCollections;
import javafx.collections.ListChangeListener;
import javafx.collections.ObservableList;
import javafx.concurrent.Task;
import javafx.concurrent.WorkerStateEvent;
import javafx.event.EventHandler;
import javafx.scene.Node;
import javafx.scene.control.Control;
import javafx.scene.control.Skin;
import javafx.util.Callback;

/**
 * The task progress view is used to visualize the progress of long running
 * tasks. These tasks are created via the {@link Task} class. This view
 * manages a list of such tasks and displays each one of them with their
 * name, progress, and update messages.
 * <p>
 * An optional graphic factory can be set to place a graphic in each row.
 * This allows the user to more easily distinguish between different types
 * of tasks.
 *
 * <h3>Screenshots</h3>
 * The picture below shows the default appearance of the task progress view
 * control:
 * <center><img src="task-monitor.png" /></center>
 *
 * <h3>Code Sample</h3>
 *
 * <pre>
 * TaskProgressView&lt;MyTask&gt; view = new TaskProgressView&lt;&gt;();
 * view.setGraphicFactory(task -&gt; new ImageView("db-access.png"));
 * view.getTasks().add(new MyTask());
 * </pre>
 */
public class TaskProgressView<T extends Task<?>> extends Control {

    /**
     * Constructs a new task progress view.
     */
    public TaskProgressView() {
        getStyleClass().add("task-progress-view");

        EventHandler<WorkerStateEvent> taskHandler = evt -> {
            if (evt.getEventType().equals(
                    WorkerStateEvent.WORKER_STATE_SUCCEEDED)
                    || evt.getEventType().equals(
                            WorkerStateEvent.WORKER_STATE_CANCELLED)
                    || evt.getEventType().equals(
                            WorkerStateEvent.WORKER_STATE_FAILED)) {
                getTasks().remove(evt.getSource());
            }
        };

        getTasks().addListener(new ListChangeListener<Task<?>>() {
            @Override
            public void onChanged(Change<? extends Task<?>> c) {
                while (c.next()) {
                    if (c.wasAdded()) {
                        for (Task<?> task : c.getAddedSubList()) {
                            task.addEventHandler(WorkerStateEvent.ANY,
                                    taskHandler);
                        }
                    } else if (c.wasRemoved()) {
                        for (Task<?> task : c.getRemoved()) {
                            task.removeEventHandler(WorkerStateEvent.ANY,
                                    taskHandler);
                        }
                    }
                }
            }
        });
    }

    @Override
    protected Skin<?> createDefaultSkin() {
        return new TaskProgressViewSkin<>(this);
    }

    private final ObservableList<T> tasks = FXCollections
            .observableArrayList();

    /**
     * Returns the list of tasks currently monitored by this view.
     *
     * @return the monitored tasks
     */
    public final ObservableList<T> getTasks() {
        return tasks;
    }

    private ObjectProperty<Callback<T, Node>> graphicFactory;

    /**
     * Returns the property used to store an optional callback for creating
     * custom graphics for each task.
     *
     * @return the graphic factory property
     */
    public final ObjectProperty<Callback<T, Node>> graphicFactoryProperty() {
        if (graphicFactory == null) {
            graphicFactory = new SimpleObjectProperty<Callback<T, Node>>(
                    this, "graphicFactory");
        }

        return graphicFactory;
    }

    /**
     * Returns the value of {@link #graphicFactoryProperty()}.
     *
     * @return the optional graphic factory
     */
    public final Callback<T, Node> getGraphicFactory() {
        return graphicFactory == null ? null : graphicFactory.get();
    }

    /**
     * Sets the value of {@link #graphicFactoryProperty()}.
     *
     * @param factory an optional graphic factory
     */
    public final void setGraphicFactory(Callback<T, Node> factory) {
        graphicFactoryProperty().set(factory);
    }
}

The Skin

As you might have expected, the skin uses a ListView with a custom cell factory to display the tasks.
package impl.org.controlsfx.skin;

import javafx.beans.binding.Bindings;
import javafx.concurrent.Task;
import javafx.geometry.Insets;
import javafx.geometry.Pos;
import javafx.scene.Node;
import javafx.scene.control.Button;
import javafx.scene.control.ContentDisplay;
import javafx.scene.control.Label;
import javafx.scene.control.ListCell;
import javafx.scene.control.ListView;
import javafx.scene.control.ProgressBar;
import javafx.scene.control.SkinBase;
import javafx.scene.control.Tooltip;
import javafx.scene.layout.BorderPane;
import javafx.scene.layout.VBox;
import javafx.util.Callback;

import org.controlsfx.control.TaskProgressView;

import com.sun.javafx.css.StyleManager;

public class TaskProgressViewSkin<T extends Task<?>> extends
        SkinBase<TaskProgressView<T>> {

    static {
        StyleManager.getInstance().addUserAgentStylesheet(
                TaskProgressView.class
                        .getResource("taskprogressview.css").toExternalForm()); //$NON-NLS-1$
    }

    public TaskProgressViewSkin(TaskProgressView<T> monitor) {
        super(monitor);

        BorderPane borderPane = new BorderPane();
        borderPane.getStyleClass().add("box");

        // list view
        ListView<T> listView = new ListView<>();
        listView.setPrefSize(500, 400);
        listView.setPlaceholder(new Label("No tasks running"));
        listView.setCellFactory(param -> new TaskCell());
        listView.setFocusTraversable(false);

        Bindings.bindContent(listView.getItems(), monitor.getTasks());
        borderPane.setCenter(listView);

        getChildren().add(listView);
    }

    class TaskCell extends ListCell<T> {
        private ProgressBar progressBar;
        private Label titleText;
        private Label messageText;
        private Button cancelButton;

        private T task;
        private BorderPane borderPane;

        public TaskCell() {
            titleText = new Label();
            titleText.getStyleClass().add("task-title");

            messageText = new Label();
            messageText.getStyleClass().add("task-message");

            progressBar = new ProgressBar();
            progressBar.setMaxWidth(Double.MAX_VALUE);
            progressBar.setMaxHeight(8);
            progressBar.getStyleClass().add("task-progress-bar");

            cancelButton = new Button("Cancel");
            cancelButton.getStyleClass().add("task-cancel-button");
            cancelButton.setTooltip(new Tooltip("Cancel Task"));
            cancelButton.setOnAction(evt -> {
                if (task != null) {
                    task.cancel();
                }
            });

            VBox vbox = new VBox();
            vbox.setSpacing(4);
            vbox.getChildren().add(titleText);
            vbox.getChildren().add(progressBar);
            vbox.getChildren().add(messageText);

            BorderPane.setAlignment(cancelButton, Pos.CENTER);
            BorderPane.setMargin(cancelButton, new Insets(0, 0, 0, 4));

            borderPane = new BorderPane();
            borderPane.setCenter(vbox);
            borderPane.setRight(cancelButton);
            setContentDisplay(ContentDisplay.GRAPHIC_ONLY);
        }

        @Override
        public void updateIndex(int index) {
            super.updateIndex(index);

            /*
             * I have no idea why this is necessary but it won't work without
             * it. Shouldn't the updateItem method be enough?
             */
            if (index == -1) {
                setGraphic(null);
                getStyleClass().setAll("task-list-cell-empty");
            }
        }

        @Override
        protected void updateItem(T task, boolean empty) {
            super.updateItem(task, empty);

            this.task = task;

            if (empty || task == null) {
                getStyleClass().setAll("task-list-cell-empty");
                setGraphic(null);
            } else {
                getStyleClass().setAll("task-list-cell");
                progressBar.progressProperty().bind(task.progressProperty());
                titleText.textProperty().bind(task.titleProperty());
                messageText.textProperty().bind(task.messageProperty());
                cancelButton.disableProperty().bind(
                        Bindings.not(task.runningProperty()));

                Callback<T, Node> factory = getSkinnable().getGraphicFactory();
                if (factory != null) {
                    Node graphic = factory.call(task);
                    if (graphic != null) {
                        BorderPane.setAlignment(graphic, Pos.CENTER);
                        BorderPane.setMargin(graphic, new Insets(0, 4, 0, 0));
                        borderPane.setLeft(graphic);
                    }
                } else {
                    /*
                     * Really needed. The application might have used a graphic
                     * factory before and then disabled it. In this case the
                     * border pane might still have an old graphic in the left
                     * position.
                     */
                    borderPane.setLeft(null);
                }

                setGraphic(borderPane);
            }
        }
    }
}

The CSS

The stylesheet below makes sure we use a bold font for the task title, a smaller / thinner progress bar (without rounded corners), and list cells with a fade-in / fade-out divider line in their bottom position.

.task-progress-view {
    -fx-background-color: white;
}

.task-progress-view > * > .label {
    -fx-text-fill: gray;
    -fx-font-size: 18.0;
    -fx-alignment: center;
    -fx-padding: 10.0 0.0 5.0 0.0;
}

.task-progress-view > * > .list-view {
    -fx-border-color: transparent;
    -fx-background-color: transparent;
}

.task-title {
    -fx-font-weight: bold;
}

.task-progress-bar .bar {
    -fx-padding: 6px;
    -fx-background-radius: 0;
    -fx-border-radius: 0;
}

.task-progress-bar .track {
    -fx-background-radius: 0;
}

.task-message {
}

.task-list-cell {
    -fx-background-color: transparent;
    -fx-padding: 4 10 8 10;
    -fx-border-color: transparent transparent linear-gradient(from 0.0% 0.0% to 100.0% 100.0%, transparent, rgba(0.0, 0.0, 0.0, 0.2), transparent) transparent;
}

.task-list-cell-empty {
    -fx-background-color: transparent;
    -fx-border-color: transparent;
}

.task-cancel-button {
    -fx-base: red;
    -fx-font-size: .75em;
    -fx-font-weight: bold;
    -fx-padding: 4px;
    -fx-border-radius: 0;
    -fx-background-radius: 0;
}

Reference: New Custom Control: TaskProgressView from our JCG partner Dirk Lemmermann at the Pixel Perfect blog.

JPA Tutorial: Mapping Entities – Part 3

In my last article I showed two different ways to read/write persistent entity state – field access and property access. When field access mode is used, JPA reads the state values directly from the fields of an entity using reflection, and it translates the field names directly into database column names if we do not specify the column names explicitly. In case of property access mode, the getter/setter methods are used to read/write the state values. In this case we annotate the getter methods of the entity states instead of the fields, using the same annotations. If we do not explicitly specify the database column names, then they are determined following the JavaBean convention, that is, by removing the “get” portion from the getter method name and converting the first letter of the rest of the method name to a lowercase character. We can specify which access mode to use for an entity by using the @Access annotation in the entity class declaration. This annotation takes an argument of type AccessType (defined in the javax.persistence package), an enum which has two values corresponding to the two access modes – FIELD and PROPERTY.
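The JavaBean naming rule just described can be sketched in plain Java. This is only an illustration of the convention, not actual provider code; the class and method names here are made up:

```java
public class ColumnNames {

    // Derive the default column name from a getter name, as described above:
    // strip the "get" prefix and lower-case the first remaining character.
    static String columnNameFor(String getterName) {
        String rest = getterName.substring("get".length());
        return Character.toLowerCase(rest.charAt(0)) + rest.substring(1);
    }

    public static void main(String[] args) {
        System.out.println(columnNameFor("getPostcode")); // prints "postcode"
        System.out.println(columnNameFor("getTransientColumn")); // prints "transientColumn"
    }
}
```

So, absent an explicit @Column annotation, a provider following this convention would map getPostcode() to a column named postcode.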
As an example, we can specify property access mode for the Address entity in the following way:

@Entity
@Table(name = "tbl_address")
@Access(AccessType.PROPERTY)
public class Address {
    private Integer id;
    private String street;
    private String city;
    private String province;
    private String country;
    private String postcode;
    private String transientColumn;

    @Id
    @GeneratedValue
    @Column(name = "address_id")
    public Integer getId() { return id; }

    public Address setId(Integer id) { this.id = id; return this; }

    public String getStreet() { return street; }

    public Address setStreet(String street) { this.street = street; return this; }

    public String getCity() { return city; }

    public Address setCity(String city) { this.city = city; return this; }

    public String getProvince() { return province; }

    public Address setProvince(String province) { this.province = province; return this; }

    public String getCountry() { return country; }

    public Address setCountry(String country) { this.country = country; return this; }

    public String getPostcode() { return postcode; }

    public Address setPostcode(String postcode) { this.postcode = postcode; return this; }
}

A couple of points to note about the above example:

- As discussed before, we are now annotating the getter method of the entity id with the @Id, @GeneratedValue and @Column annotations.
- Since column names will now be determined by parsing the getter methods, we no longer need to mark the transientColumn field with the @Transient annotation. However, if the Address entity had any other method whose name started with “get”, we would need to apply @Transient to it.

If an entity has no explicit access mode information, just like the Address entity that we created in the first part of this series, then JPA assumes a default access mode. This assumption is not made at random. Instead, JPA first tries to figure out the location of the @Id annotation. If the @Id annotation is used on a field, then field access mode is assumed.
If the @Id annotation is used on a getter method, then property access mode is assumed. So even if we remove the @Access annotation from the Address entity in the above example, the mapping will still be valid and JPA will assume property access mode:

@Entity
@Table(name = "tbl_address")
public class Address {
    private Integer id;
    private String street;
    private String city;
    private String province;
    private String country;
    private String postcode;
    private String transientColumn;

    @Id
    @GeneratedValue
    @Column(name = "address_id")
    public Integer getId() { return id; }

    // Rest of the class........
}

Some important points to remember about the access modes:

- You should never declare a field as public if you use field access mode. All fields of the entity should have private (best!), protected or default access. The reason behind this is that declaring the fields as public would allow any unprotected class to directly access the entity state, which could easily defeat the provider implementation. For example, suppose that you have an entity whose fields are all public. If this entity is a managed entity (which means it has been saved into the database) and some other class changes the value of its id, and you then try to save the changes back to the database, you may face unpredictable behavior (I will try to elaborate on this topic in a future article). Even the entity class itself should only manipulate the fields directly during initialization (i.e., inside the constructors).
- In case of property access mode, if we apply the annotations on the setter methods rather than on the getter methods, they will simply be ignored.

It’s also possible to mix both of these access types. Suppose that you want to use field access mode for all but one state of an entity, and for that one remaining state you would like to use property access mode because you want to perform some conversion before writing/after reading the state value to and from the database.
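As a hedged sketch of such a conversion, the accessor pair might normalize the state value on its way in and out. The JPA annotations are omitted here so the class stands alone, and PostcodeHolder is a made-up name; in the real entity the getter would additionally carry the property-access annotation:

```java
public class PostcodeHolder {

    private String postcode;

    // Conversion on read: always hand out the canonical upper-case form.
    public String getPostcode() {
        return postcode == null ? null : postcode.toUpperCase();
    }

    // Conversion on write: strip surrounding whitespace before storing.
    public void setPostcode(String postcode) {
        this.postcode = postcode == null ? null : postcode.trim();
    }
}
```

With property access mode the provider would call exactly these methods, so the database would only ever see the normalized form.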
You can do this easily by following the steps below:

- Mark the entity with the @Access annotation and specify AccessType.FIELD as the access mode for all the fields.
- Mark the field for which you do not want to use field access mode with the @Transient annotation.
- Mark the getter method of that property with the @Access annotation and specify AccessType.PROPERTY as its access mode.

The following example demonstrates this approach, as the postcode has been changed to use property access mode:

@Entity
@Table(name = "tbl_address")
@Access(AccessType.FIELD)
public class Address {
    @Id
    @GeneratedValue
    @Column(name = "address_id")
    private Integer id;

    private String street;
    private String city;
    private String province;
    private String country;

    /**
     * postcode is now marked as Transient
     */
    @Transient
    private String postcode;

    @Transient
    private String transientColumn;

    public Integer getId() { return id; }

    public Address setId(Integer id) { this.id = id; return this; }

    public String getStreet() { return street; }

    public Address setStreet(String street) { this.street = street; return this; }

    public String getCity() { return city; }

    public Address setCity(String city) { this.city = city; return this; }

    public String getProvince() { return province; }

    public Address setProvince(String province) { this.province = province; return this; }

    public String getCountry() { return country; }

    public Address setCountry(String country) { this.country = country; return this; }

    /**
     * We are now using property access mode for reading/writing
     * postcode
     */
    @Access(AccessType.PROPERTY)
    public String getPostcode() { return postcode; }

    public Address setPostcode(String postcode) { this.postcode = postcode; return this; }
}

The important thing to note here is that if we do not annotate the class with the @Access annotation to explicitly specify field access mode as the default, and we annotate both the fields and the getter methods, then the resultant behavior of the mapping will be undefined.
That means the outcome will depend entirely on the persistence provider: one provider might choose field access mode as the default, another might use property access mode, and yet another might decide to throw an exception! That’s it for today. If you find any problems or have any questions, please do not hesitate to comment! Until next time.

Reference: JPA Tutorial: Mapping Entities – Part 3 from our JCG partner Sayem Ahmed at the Random Thoughts blog.

NetBeans IDE and IntelliJ IDEA From the Eyes of a Long-Time Eclipse User

I have been using the Eclipse IDE since 2006 and I liked it very much for various reasons. First, it is open source and free to use. Eclipse looks pretty neat on Windows, on which I work most of the time. Occasionally I tried NetBeans IDE (before the 6.x versions) and I didn’t like it because it was too slow. And I never tried IntelliJ IDEA because it’s a commercial product and I am 100% sure that my employer is not going to pay $$$ for an IDE. So over the years I have been using the Java EE based Eclipse edition, and once I found SpringSource Tool Suite it became my default Java IDE for everything. I like the Spring framework very much and I use Spring technologies every day, on both personal and official projects. STS provides a lot of additional features for Spring-related technologies, like auto-completion in Spring XML files, beans graphs, etc. I should mention SpringBoot support in STS specifically. You can create SpringBoot applications with a lot of customization options (which modules to use, Java version, Maven/Gradle, Java/Groovy, etc.) right from the IDE itself. As of now no other IDE has such good support for SpringBoot. But as everyone knows, working with Eclipse isn’t fun all the time. It has its own set of problems. I got used to seeing the NullPointerException or IllegalArgumentException error alerts all the time. When you press Ctrl+Space you may get auto-completion suggestions, or you may get an error alert. If you type too fast and press Ctrl+Space many times, Eclipse might disappear and show a big alert box with very useful details. If you have many open projects in your workspace and they contain JPA/JSF/JAX-WS/JAX-RS modules, then as soon as you open Eclipse it may get stuck in the “Building Workspace” state forever. The only way to solve it is End Process via the Task Manager. Up to this point it’s bearable. The real problems start if you install any plugin which contains conflicting XML libraries.
As soon as you open pom.xml you will see error alerts repeatedly; you can’t even close it, because it keeps popping up the error alerts. If you are lucky, restarting Eclipse might solve the issue; otherwise you have to try uninstalling the newly installed plugin (which never solved the problem for me) or start with a fresh Eclipse altogether. Even after all these pains I stuck with Eclipse because I was used to it. As I said, I have been using STS, and up to version 3.5.1 it was fine and I was OK living with all the previously mentioned pain points. But once I downloaded STS 3.6.0 and started using it, things got even worse. First, the Gradle plugin didn’t work. After googling for a while I found there was already a bug filed for the same issue. I thought this might be resolved in the STS 3.6.1 release, but it wasn’t. Then I upgraded the Gradle plugin to a nightly build and it started working fine. I was very happy. Then I started my SpringBoot application and it worked fine. Great!! Then I opened another Java class, made some changes and tried to click the Relaunch button. As soon as the mouse cursor is on the Relaunch button, it shows an error alert. Navigate to any other file and put the cursor on the Relaunch button, and again it shows an error alert. What the hell!! For almost 4 days I was struggling with these kinds of issues only. I hadn’t even started writing code at all. I told myself: “Enough!! Shut this f***king Eclipse down and start using some better IDE, come out of your Eclipse comfort zone”. I have been playing with NetBeans IDE every now and then, and I am aware that NetBeans IDE has gotten a lot better than its previous versions; especially from 7.x onwards it’s very fast and feature rich. A year ago I tried the IntelliJ IDEA Ultimate edition trial version, and it was totally confusing to me because of my prior Eclipse experience.
When I google “Eclipse vs NetBeans IDE vs IntelliJ IDEA” there are lots of articles comparing them, and almost every article ends with the conclusion that “IntelliJ IDEA > NetBeans IDE > Eclipse”. But I thought of trying NetBeans IDE and IntelliJ IDEA myself. So I installed NetBeans IDE 8.0.1 and IntelliJ IDEA Ultimate Edition 13.

How I feel about NetBeans IDE:

The first thing I noticed is that NetBeans IDE has totally improved over its previous versions. It is fast and feature rich.

Pros:

- You get most of the Java stuff that you need out of the box. You don’t need to hunt for plugins.
- If your project is based on Java EE technologies like CDI/EJB/JPA/JSF/JAX-RS, you will love NetBeans IDE. It has awesome code generators for JPA entities from the database, JSF views from entities, JAX-RS resources from entities, etc.
- Its Maven support is fantastic. Looking up and adding dependencies works out of the box. No need to check “Download Indexes at startup” and perform Rebuild Indexes… you know what I mean!
- Great support for HTML5 technologies, especially AngularJS. Auto-completion for AngularJS directives is amazing. You can download and install many of the popular JavaScript libraries right from the IDE itself.
- It has very good Java 8 support. It even shows code recommendations to turn for-loops into Java 8 streams and lambdas.
- Recently I have been learning mobile app development using PhoneGap/Cordova. Getting started with Cordova in NetBeans is a piece of cake.

Cons:

- No workspace concept. For some people it could be an advantage, but for me it’s a disadvantage. Usually I maintain multiple workspaces for different projects, and at times I would like to open them in parallel. Opening multiple NetBeans IDEs is possible, but it should not be that difficult.
- At home I installed NB 8.0.1 and WildFly 8.0.0.Final and it worked well. The very same day WildFly 8.1.0.Final was released, and at the office I tried to run an app using NB 8.0.1 and WildFly 8.1.0.Final, and it didn’t work at all.
After pulling my hair out for a few hours I figured out that NB 8.0.1 doesn’t work with WildFly 8.1.0 yet. That’s a little bit odd!! Did WildFly change that much from 8.0.0 to 8.1.0????
- I just created a web application and tried to deploy it on Tomcat; what could go wrong!! But the deployment kept failing. After struggling for a few minutes I found an answer on StackOverflow suggesting it might be a proxy issue. After configuring my corporate proxy details in NetBeans it worked fine. But this is not cool. Deploying an app on my local Tomcat should not involve a proxy... right??!!??
- There is no shortcut for block comment!!! Come on…

Overall I liked NetBeans IDE very much. For an open source and free IDE, NetBeans is awesome.

How I feel about IntelliJ IDEA:

Whenever I read about the IntelliJ IDEA user experience, I always hear “wow”, “amazing”, “can’t go back to Eclipse/NB” and “I don’t mind paying $$$ for such a wonderful tool”!! But I struggled a bit to get used to its Project/Module style of code organization because of my previous Eclipse Workspace/Project experience. I am still not very comfortable with it, but it’s not a blocker.

Pros:

- No random NullPointerException/IllegalArgumentException alerts.
- Everything can be done from the IDE itself, be it working with a database, tinkering from the command prompt, Maven/Gradle task execution, a REST client, etc.
- Auto-completion support is just mind blowing. Type “sort” and press Ctrl+Space twice, and it shows sort methods from all Java classes. Wonderful.
- Interaction with many version control systems works smoothly.
- Support for other IDEs’ key bindings.

Cons: Well, the following may not really be cons, but from an Eclipse user’s perspective they are confusing and difficult to get used to:

- The Project/Module style of code organization is very different from other IDEs.
- I terribly miss right-clicking on a web project and choosing Run on Server. It took me 30 minutes to figure out how to run a web application in IntelliJ IDEA.
Please provide a “Run on Server” option that opens the Edit Configuration window to choose the server and other settings.

Actually, it is too early for me to say whether IntelliJ IDEA is the best or not, because I am still learning to do things the IntelliJ way. But I can clearly sense that IntelliJ IDEA is very addictive because of its editing capabilities and “everything from the IDE” experience. The major issue is that it is very costly, and I am 100% sure that my employer won’t pay for an IDE even though it is a great productivity booster. I am actually considering using IntelliJ IDEA Community Edition as well, because it has Java/Groovy/Maven/Gradle support, and Spring Boot applications can be run as standalone Java programs with no need for server support. Overall I feel it is a powerful and feature-rich IDE, and I just need to understand the IntelliJ IDEA way of doing things.

What Eclipse features I missed in NetBeans IDE/IntelliJ IDEA: After playing with NetBeans IDE and IntelliJ IDEA, I feel Eclipse is better in the following aspects:
- Support for multiple workspaces and multiple instances.
- Eclipse’s color scheme for the Java editor is more pleasant than NetBeans’ glassy look and IntelliJ IDEA’s dull grey look.
- Sensible Eclipse shortcut key bindings. Many of the key bindings don’t involve crazy combinations of Ctrl+Shift+Alt as in IntelliJ IDEA.
- The Maven POM editor’s Dependency Hierarchy tab, which provides a neat view of “where this jar dependency came from”. A simple tree structure looks better than fancy graphs to me.

Conclusion: All in all, what I came to realize is that most of the things you do in one IDE can also be done in the other IDEs. It is just a matter of getting used to the selected IDE’s way of doing things. But if you are spending a lot of time fighting the IDE itself, that’s a red flag; you should consider moving to a better IDE. After playing with NetBeans and IntelliJ IDEA I came to the conclusion that: If you have to work heavily with Java EE projects, go with NetBeans.
If you can get a license for IntelliJ IDEA, that’s great; if not, choose a stable version of STS and live with it. Don’t upgrade your Eclipse/STS just because a newer version was released. Newer does not always mean better.

Reference: NetBeansIDE and IntellijIDEA From The Eyes of a Long Time Eclipse User from our JCG partner Siva Reddy at the My Experiments on Technology blog....

Java And The Sweet Science

When you have been developing in Java for 15 years and a coworker asks you to help them debug a NullPointerException, you don’t expect to be surprised. Usually it is quite obvious what is null, and the only thing you need to do is find out why. Sometimes it is a little more difficult because someone has created a chain of dereferenced objects. The other day I ran into something a little new to me and baffling for a period of time. One of the easiest things to debug in Java was a momentary mystery. Consider the code below and tell me where the NullPointerException is:

return value;

That’s right, the NPE was being thrown on a simple return statement. How could this be? There is no explicit dereferencing going on, no reference to be null. That statement is as simple as they come. Let me expand the code view a little bit for you to get a better idea of what is going on:

public int getValue(){
    return value;
}

Once again, we are looking at very simple code. Between the code above and the hint in the title of the article, you may have figured out what is going on, or you may be more confused. Again, nothing is being explicitly dereferenced. Not only that, we aren’t even dealing with a reference; the method returns a primitive. Have you figured it out from the clues yet? Okay, here is the rest of the code and the explanation:

package Example;

public class Example {
    Integer value;

    public int getValue(){
        return value;
    }
}

Notice that value is an Integer with a capital I while getValue returns int. In the old days, before Java 5, you would have gotten a compile error on the above code. Java 5, however, introduced autoboxing. This feature has been around for almost half my Java career and had never stung or confused me; it has always been a convenient feature. Autoboxing allows for seamless conversion between primitives and their first-class object equivalents. So instead of calling value.intValue() to get the primitive, you can just assign value.
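The trap is easy to reproduce in a few lines. Here is a minimal, self-contained sketch (the class and helper names are mine, not from the original article) showing the NPE surfacing on a bare return statement:

```java
public class AutoboxNpe {

    static Integer value; // never assigned, so it holds null

    // Looks like it cannot fail, but the compiler rewrites the
    // return as value.intValue(), which dereferences null.
    static int getValue() {
        return value; // the NPE is thrown right here
    }

    // Returns true when the unboxing NPE occurs
    static boolean throwsNpe() {
        try {
            getValue();
            return false;
        } catch (NullPointerException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("NPE on return statement: " + throwsNpe());
    }
}
```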
But under the covers it still calls the intValue method, and that is where the NPE happened. The line in question became:

return value.intValue();

On that line, it is obvious where the NPE happens. Oh, in case anyone missed it, the sport of boxing is called the Sweet Science. I felt like I had been sucker-punched by autoboxing, thus the name of this article.Reference: Java And The Sweet Science from our JCG partner Brad Mongar at the Keyhole Software blog....

Legacy Code To Testable Code #2: Extract Method

This post is part of the “Legacy Code to Testable Code” series. In the series we’ll talk about the refactoring steps to make before writing tests for legacy code, and how they make our life easier. As with renaming, extracting a method helps us understand the code better. If you find it easy to name the method, it makes sense. Otherwise, you just enclosed code that does a lot of things. That can be useful sometimes, although not as useful as extracting small methods that make sense. Extracting a method also introduces a seam. This method can now be mocked, and can now affect the code as it is being tested. One of the tricks when not using power-tools is wrapping a static method with an instance method. In our Person class, we have the getZipCode method:

public class Person {
    String street;

    public String getZipCode() {
        Directory directory = Directory.getInstance();
        return directory.getZipCodeFromStreet(street);
    }
}

The Directory.getInstance() method is static. If we extract it to a getDirectory method (in the Person class) and make this method accessible, we can now mock it:

public class Person {
    String street;

    public String getZipCode() {
        Directory directory = getDirectory();
        return directory.getZipCodeFromStreet(street);
    }

    protected Directory getDirectory() {
        return Directory.getInstance();
    }
}

While it’s now very easy to mock the getDirectory method using Mockito, it was also easy to mock Directory.getInstance if we used PowerMockito. So is there an additional reason to introduce a new method? If it’s just for the sake of testing, there’s no need to do the extraction. Sometimes, however, mocking things with power-tools is not easy. Problems appearing in static constructors may require more handling on the test side, and it may be easier to wrap the call in a separate method. There are also times when extracting helps us regardless of the mocking tool. We can use method extraction to simplify the test, even before we’ve written it. It’s simpler and safer to mock one method, rather than 3 calls.
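To round off the first variant, here is a self-contained sketch of the seam in action. Directory and Person are stand-ins mirroring the snippets above (not the article's real classes), and the test-style code fakes the seam by overriding getDirectory() in a subclass, with no mocking library at all:

```java
// Stand-in for the article's Directory with a static singleton accessor.
class Directory {
    private static final Directory INSTANCE = new Directory();

    static Directory getInstance() {
        return INSTANCE;
    }

    String getZipCodeFromStreet(String street) {
        // imagine a slow lookup against a real data source here
        return "00000";
    }
}

class Person {
    String street = "Main St";

    public String getZipCode() {
        Directory directory = getDirectory();
        return directory.getZipCodeFromStreet(street);
    }

    // The extracted seam: tests can override this instead of
    // power-mocking the static Directory.getInstance() call.
    protected Directory getDirectory() {
        return Directory.getInstance();
    }
}

public class SeamDemo {
    // A test would do exactly this: swap in a fake Directory via the seam.
    static String zipCodeWithFakeDirectory() {
        Person person = new Person() {
            @Override
            protected Directory getDirectory() {
                return new Directory() {
                    @Override
                    String getZipCodeFromStreet(String street) {
                        return "12345"; // canned test value
                    }
                };
            }
        };
        return person.getZipCode();
    }

    public static void main(String[] args) {
        System.out.println(zipCodeWithFakeDirectory()); // prints 12345
    }
}
```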
If our getZipCode method looked like this:

public String getZipCode() {
    Address address = new Address();
    address.setStreet(street);
    address.setCountry(country);
    address.setState(state);
    address.setCity(city);
    Directory directory = Directory.getInstance(address);
    return directory.getZipCode();
}

then even with power-tools, faking the Address instance and setting up the rest of the behavior just to retrieve the directory is quite a lot of work, which means a longer test with a long setup. If we extract a getDirectoryFromAddress method:

public String getZipCode() {
    Directory directory = getDirectoryFromAddress();
    return directory.getZipCode();
}

we get more readable code, and we’ll need to mock only one line. While extracting has its upside, making a method a seam comes with baggage. If the method is private and we use power-tools to mock it, coupling between test and code is increased. If we make it public, someone can call it. If it’s protected, a derived class can call it. A change for testability is a change of design, for better or worse.Reference: Legacy Code To Testable Code #2: Extract Method from our JCG partner Gil Zilberfeld at the Geek Out of Water blog....

A Java conversion puzzler, not suitable for work (or interviews)

A really hard interview question would be something like this:

int i = Integer.MAX_VALUE;
i += 0.0f;
int j = i;
System.out.println(j == Integer.MAX_VALUE); // true

Why does this print true? At first glance, the answer seems obvious, until you realise that if you change int i to long i, things get weird:

long i = Integer.MAX_VALUE;
i += 0.0f;
int j = (int) i;
System.out.println(j == Integer.MAX_VALUE); // false
System.out.println(j == Integer.MIN_VALUE); // true

What is going on, you might wonder? When did Java become JavaScript? Let me start by explaining why long gives such a strange result. An important detail about += is that it does an implicit cast. You might think that:

a += b;

is the same as:

a = a + b;

and basically it is, except for a subtle difference which most of the time doesn’t matter:

a = (typeOf(a)) (a + b);

Another subtle feature of addition is that the result has the “wider” of the two types. This means that:

i += 0.0f;

is actually:

i = (long) ((float) i + 0.0f);

When you cast Integer.MAX_VALUE to a float you get a rounding error (as float has a mantissa of only 24 bits), resulting in a value one more than what you started with. I.e. it is the same as:

i = Integer.MAX_VALUE + 1; // for long i

When you cast Integer.MAX_VALUE + 1 back to an int, you get an overflow and end up with Integer.MIN_VALUE:

j = Integer.MIN_VALUE;

So why does long get the unexpected value, while int happens to get the expected one? The reason is that when a floating point value is converted to an integer type, it is rounded toward zero, and an out-of-range result is clamped to the nearest representable value. Thus:

int k = (int) Float.MAX_VALUE; // k = Integer.MAX_VALUE;
int x = (int) (Integer.MAX_VALUE + 1.0f); // x = Integer.MAX_VALUE;

Note: Float.MAX_VALUE / Integer.MAX_VALUE is 1.5845632E29, which is a hell of an error, but the best an int can do.
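Both behaviours can be checked side by side in one self-contained program (the method names viaInt and viaLong are mine, chosen for this sketch):

```java
public class ConversionPuzzle {

    // int case: i = (int) ((float) i + 0.0f).
    // (float) Integer.MAX_VALUE rounds up to 2147483648.0f, but the
    // float-to-int cast clamps back down to Integer.MAX_VALUE.
    static int viaInt() {
        int i = Integer.MAX_VALUE;
        i += 0.0f;
        return i;
    }

    // long case: i = (long) ((float) i + 0.0f) == 2147483648L,
    // and the later narrowing to int overflows to Integer.MIN_VALUE.
    static int viaLong() {
        long i = Integer.MAX_VALUE;
        i += 0.0f;
        return (int) i;
    }

    public static void main(String[] args) {
        System.out.println(viaInt() == Integer.MAX_VALUE);  // true
        System.out.println(viaLong() == Integer.MIN_VALUE); // true
    }
}
```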
In short, for an int value of Integer.MAX_VALUE, the statement i += 0.0f; causes the value to jump up by one (casting to a float) and then come back down by one (clamping when casting back to an int), so you get the value you started with.Reference: A Java conversion puzzler, not suitable for work (or interviews) from our JCG partner Peter Lawrey at the Vanilla Java blog....

Integration testing done right with Embedded MongoDB

Introduction Unit testing requires isolating individual components from their dependencies. Dependencies are replaced with mocks, which simulate certain use cases. This way, we can validate the in-test component behavior across various external context scenarios. Web components can be unit tested using mock business logic services, and services can be tested against mock data access repositories. But the data access layer is not a good candidate for unit testing, because database statements need to be validated against an actual running database system.

Integration testing database options Ideally, our tests should run against a production-like database. But using a dedicated database server is not feasible, as we most likely have more than one developer running such integration test-suites. To isolate concurrent test runs, each developer would require a dedicated database catalog. Adding a continuous integration tool makes matters worse, since more tests would have to be run in parallel.

Lesson 1: We need a forked, test-suite-bound database

When a test suite runs, a database must be started and made available only to that particular test-suite instance. Basically we have the following options:
- An in-memory embedded database
- A temporary spawned database process

The fallacy of in-memory database testing Java offers multiple in-memory relational database options to choose from:
- HSQLDB
- H2
- Apache Derby

Embedding an in-memory database is fast, and each JVM can run its own isolated database. But we no longer test against the actual production-like database engine, because our integration tests will validate the application behavior against a non-production database system. Using an ORM tool may give the false impression that all databases are equal, especially when all the generated SQL code is SQL-92 compliant. What’s good for the ORM tool’s database support may deprive you of using database-specific querying features (window functions, common table expressions, PIVOT).
So the integration testing in-memory database might not support such advanced queries. This can lead to reduced code coverage, or to pushing developers to use only the common-yet-limited SQL querying features. Even if your production database engine provides an in-memory variant, there may still be operational differences between the actual and the lightweight database versions.

Lesson 2: In-memory databases may give you the false impression that your code will also run on a production database

Spawning a production-like temporary database Testing against the actual production database is much more valuable, and that’s why I grew to appreciate this alternative. When using MongoDB we can choose the embedded mongo plugin. This open-source project creates an external database process that can be bound to the current test-suite life-cycle. If you’re using Maven, you can take advantage of the embedmongo-maven-plugin:

<plugin>
    <groupId>com.github.joelittlejohn.embedmongo</groupId>
    <artifactId>embedmongo-maven-plugin</artifactId>
    <version>${embedmongo.plugin.version}</version>
    <executions>
        <execution>
            <id>start</id>
            <goals>
                <goal>start</goal>
            </goals>
            <configuration>
                <port>${embedmongo.port}</port>
                <version>${mongo.test.version}</version>
                <databaseDirectory>${project.build.directory}/mongotest</databaseDirectory>
                <bindIp>127.0.0.1</bindIp>
            </configuration>
        </execution>
        <execution>
            <id>stop</id>
            <goals>
                <goal>stop</goal>
            </goals>
        </execution>
    </executions>
</plugin>

When running the plugin, the following actions are taken:

A MongoDB pack is downloaded:

[INFO] --- embedmongo-maven-plugin:0.1.12:start (start) @ mongodb-facts ---
Download Version{2.6.1}:Windows:B64 START
Download Version{2.6.1}:Windows:B64 DownloadSize: 135999092
Download Version{2.6.1}:Windows:B64 0% 1% 2% 3% 4% 5% 6% 7% 8% 9% 10% 11% 12% 13% 14% 15% 16% 17% 18% 19% 20% 21% 22% 23% 24% 25% 26% 27% 28% 29% 30% 31% 32% 33% 34% 35% 36% 37% 38% 39% 40% 41% 42% 43% 44% 45% 46% 47% 48% 49% 50% 51% 52% 53% 54% 55% 56%
57% 58% 59% 60% 61% 62% 63% 64% 65% 66% 67% 68% 69% 70% 71% 72% 73% 74% 75% 76% 77% 78% 79% 80% 81% 82% 83% 84% 85% 86% 87% 88% 89% 90% 91% 92% 93% 94% 95% 96% 97% 98% 99% 100%
Download Version{2.6.1}:Windows:B64 downloaded with 3320kb/s
Download Version{2.6.1}:Windows:B64 DONE

Upon starting a new test suite, the MongoDB pack is unzipped under a unique location in the OS temp folder:

Extract C:\Users\vlad\.embedmongo\win32\mongodb-win32-x86_64-2008plus-2.6.1.zip START
Extract C:\Users\vlad\.embedmongo\win32\mongodb-win32-x86_64-2008plus-2.6.1.zip DONE

The embedded MongoDB instance is started:

[mongod output] note: noprealloc may hurt performance in many applications
[mongod output] 2014-10-09T23:25:16.889+0300 [DataFileSync] warning: --syncdelay 0 is not recommended and can have strange performance
[mongod output] 2014-10-09T23:25:16.891+0300 [initandlisten] MongoDB starting : pid=2384 port=51567 dbpath=D:\wrk\vladmihalcea\vladmihalcea.wordpress.com\mongodb-facts\target\mongotest 64-bit host=VLAD
[mongod output] 2014-10-09T23:25:16.891+0300 [initandlisten] targetMinOS: Windows 7/Windows Server 2008 R2
[mongod output] 2014-10-09T23:25:16.891+0300 [initandlisten] db version v2.6.1
[mongod output] 2014-10-09T23:25:16.891+0300 [initandlisten] git version: 4b95b086d2374bdcfcdf2249272fb552c9c726e8
[mongod output] 2014-10-09T23:25:16.891+0300 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
[mongod output] 2014-10-09T23:25:16.891+0300 [initandlisten] allocator: system
[mongod output] 2014-10-09T23:25:16.891+0300 [initandlisten] options: { net: { bindIp: "127.0.0.1", http: { enabled: false }, port: 51567 }, security: { authorization: "disabled" }, storage: { dbPath: "D:\wrk\vladmihalcea\vladmihalcea.wordpress.com\mongodb-facts\target\mongotest", journal: { enabled: false }, preallocDataFiles: false, smallFiles: true, syncPeriodSecs: 0.0 } }
[mongod output]
2014-10-09T23:25:17.179+0300 [FileAllocator] allocating new datafile D:\wrk\vladmihalcea\vladmihalcea.wordpress.com\mongodb-facts\target\mongotest\local.ns, filling with zeroes...
[mongod output] 2014-10-09T23:25:17.179+0300 [FileAllocator] creating directory D:\wrk\vladmihalcea\vladmihalcea.wordpress.com\mongodb-facts\target\mongotest\_tmp
[mongod output] 2014-10-09T23:25:17.240+0300 [FileAllocator] done allocating datafile D:\wrk\vladmihalcea\vladmihalcea.wordpress.com\mongodb-facts\target\mongotest\local.ns, size: 16MB, took 0.059 secs
[mongod output] 2014-10-09T23:25:17.240+0300 [FileAllocator] allocating new datafile D:\wrk\vladmihalcea\vladmihalcea.wordpress.com\mongodb-facts\target\mongotest\local.0, filling with zeroes...
[mongod output] 2014-10-09T23:25:17.262+0300 [FileAllocator] done allocating datafile D:\wrk\vladmihalcea\vladmihalcea.wordpress.com\mongodb-facts\target\mongotest\local.0, size: 16MB, took 0.021 secs
[mongod output] 2014-10-09T23:25:17.262+0300 [initandlisten] build index on: local.startup_log properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }
[mongod output] 2014-10-09T23:25:17.262+0300 [initandlisten] added index to empty collection
[mongod output] 2014-10-09T23:25:17.263+0300 [initandlisten] waiting for connections on port 51567
[mongod output] Oct 09, 2014 11:25:17 PM MongodExecutable start
INFO: de.flapdoodle.embed.mongo.config.MongodConfigBuilder$ImmutableMongodConfig@26b3719c

For the life-time of the current test-suite you can see the embedded-mongo process:

C:\Users\vlad>netstat -ano | findstr 51567
TCP 127.0.0.1:51567 0.0.0.0:0 LISTENING 8500

C:\Users\vlad>TASKLIST /FI "PID eq 8500"

Image Name                PID      Session Name     Session#    Mem Usage
========================= ======== ================ =========== ============
extract-0eecee01-117b-4d2 8500     RDP-Tcp#0        1           44,532 K

When the test-suite is finished, the embedded-mongo process is stopped:

[INFO] --- embedmongo-maven-plugin:0.1.12:stop (stop) @ mongodb-facts ---
2014-10-09T23:25:21.187+0300 [initandlisten] connection accepted from 127.0.0.1:64117 #11 (1 connection now open)
[mongod output] 2014-10-09T23:25:21.189+0300 [conn11] terminating, shutdown command received
[mongod output] 2014-10-09T23:25:21.189+0300 [conn11] dbexit: shutdown called
[mongod output] 2014-10-09T23:25:21.189+0300 [conn11] shutdown: going to close listening sockets...
[mongod output] 2014-10-09T23:25:21.189+0300 [conn11] closing listening socket: 520
[mongod output] 2014-10-09T23:25:21.189+0300 [conn11] shutdown: going to flush diaglog...
[mongod output] 2014-10-09T23:25:21.189+0300 [conn11] shutdown: going to close sockets...
[mongod output] 2014-10-09T23:25:21.190+0300 [conn11] shutdown: waiting for fs preallocator...
[mongod output] 2014-10-09T23:25:21.190+0300 [conn11] shutdown: closing all files...
[mongod output] 2014-10-09T23:25:21.191+0300 [conn11] closeAllFiles() finished
[mongod output] 2014-10-09T23:25:21.191+0300 [conn11] shutdown: removing fs lock...
[mongod output] 2014-10-09T23:25:21.191+0300 [conn11] dbexit: really exiting now
[mongod output] Oct 09, 2014 11:25:21 PM de.flapdoodle.embed.process.runtime.ProcessControl stopOrDestroyProcess

Conclusion The embedded mongo plugin is in no way slower than the in-memory relational database systems. It makes me wonder why there isn’t such an option for open-source RDBMS (e.g. PostgreSQL). This is a great open-source project idea, and maybe Flapdoodle OSS will offer support for relational databases too.

Code available on GitHub.Reference: Integration testing done right with Embedded MongoDB from our JCG partner Vlad Mihalcea at the Vlad Mihalcea’s Blog blog....

Injecting domain objects instead of infrastructure components

Dependency Injection is a widely used software design pattern in Java (and many other programming languages) that is used to achieve Inversion of Control. It promotes reusability, testability and maintainability, and helps build loosely coupled components. Dependency Injection is the de facto standard for wiring Java objects together these days. Various Java frameworks like Spring or Guice can help implement Dependency Injection. Since Java EE 6 there is also an official Java EE API for Dependency Injection available: Contexts and Dependency Injection (CDI). We use Dependency Injection to inject services, repositories, domain-related components, resources or configuration values. However, in my experience, it is often overlooked that Dependency Injection can also be used to inject domain objects. A typical example of this is the way the currently logged in user is obtained in many Java applications. Usually we end up asking some component or service for the logged in user. The code for this might look somewhat like the following snippet:

public class SomeComponent {

  @Inject
  private AuthService authService;

  public void workWithUser() {
    User loggedInUser = authService.getLoggedInUser();
    // do something with loggedInUser
  }
}

Here an AuthService instance is injected into SomeComponent. Methods of SomeComponent then use the AuthService object to obtain an instance of the logged in user. However, instead of injecting AuthService we could inject the logged in user directly into SomeComponent. That could look like this:

public class SomeComponent {

  @Inject
  @LoggedInUser
  private User loggedInUser;

  public void workWithUser() {
    // do something with loggedInUser
  }
}

Here the User object is directly injected into SomeComponent and no instance of AuthService is required. The custom annotation @LoggedInUser is used to avoid conflicts if more than one (managed) bean of type User exists.
Both Spring and CDI are capable of this type of injection (and the configuration is actually very similar). In the following section we will see how domain objects can be injected using Spring. After this, I will describe what changes are necessary to do the same with CDI.

Domain object injection with Spring To inject domain objects as shown in the example above, we only have to take two little steps. First we have to create the @LoggedInUser annotation:

import java.lang.annotation.*;
import org.springframework.beans.factory.annotation.Qualifier;

@Target({ElementType.FIELD, ElementType.PARAMETER, ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
@Qualifier
public @interface LoggedInUser {}

Please note the @Qualifier annotation, which turns @LoggedInUser into a custom qualifier. Qualifiers are used by Spring to avoid conflicts if multiple beans of the same type are available. Next we have to add a bean definition to our Spring configuration. We use Spring’s Java configuration here; the same can be done with XML configuration.

@Configuration
public class Application {

  @Bean
  @LoggedInUser
  @Scope(value = WebApplicationContext.SCOPE_SESSION, proxyMode = ScopedProxyMode.TARGET_CLASS)
  public User getLoggedInUser() {
    // retrieve and return user object from server/database/session
  }
}

Inside getLoggedInUser() we have to retrieve and return an instance of the currently logged in user (e.g. by asking the AuthService from the first snippet). With @Scope we can control the scope of the returned object. The best scope depends on the domain object and might differ among different domain objects. For a User object representing the logged in user, request or session scope would be valid choices. By annotating getLoggedInUser() with @LoggedInUser, we tell Spring to use this bean definition whenever a bean of type User annotated with @LoggedInUser should be injected.
Now we can inject the logged in user into other components:

@Component
public class SomeComponent {

  @Autowired
  @LoggedInUser
  private User loggedInUser;

  ...
}

In this simple example the qualifier annotation is actually not necessary. As long as there is only one bean definition of type User available, Spring could inject the logged in user by type. However, when injecting domain objects it can easily happen that you have multiple bean definitions of the same type. So, using an additional qualifier annotation is a good idea. With their descriptive names, qualifiers can also act as documentation (if named properly).

Simplify Spring bean definitions When injecting many domain objects, there is a chance that you end up repeating the scope and proxy configuration over and over again in your bean configuration. In such a situation it comes in handy that Spring annotations can be used on custom annotations. So, we can simply create our own @SessionScopedBean annotation that can be used instead of @Bean and @Scope:

@Target({ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
@Bean
@Scope(value = WebApplicationContext.SCOPE_SESSION, proxyMode = ScopedProxyMode.TARGET_CLASS)
public @interface SessionScopedBean {}

Now we can simplify the bean definition to this:

@Configuration
public class Application {

  @LoggedInUser
  @SessionScopedBean
  public User getLoggedInUser() {
    ...
  }
}

Java EE and CDI The configuration with CDI is nearly the same. The only difference is that we have to replace the Spring annotations with javax.inject and CDI annotations. So, @LoggedInUser should be annotated with javax.inject.Qualifier instead of org.springframework.beans.factory.annotation.Qualifier (see: Using Qualifiers). The Spring bean definition can be replaced with a CDI producer method. Instead of @Scope the appropriate CDI scope annotation can be used. At the injection point Spring’s @Autowired can be replaced with @Inject.
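Putting those CDI substitutions together, the Spring bean definition above might translate into a producer method roughly like this. This is a sketch of a configuration fragment, not code from the article; AuthService, User and the choice of session scope are assumptions carried over from the earlier snippets:

```java
import javax.enterprise.context.SessionScoped;
import javax.enterprise.inject.Produces;
import javax.inject.Inject;

// Hypothetical CDI producer mirroring the Spring bean definition above.
public class LoggedInUserProducer {

    @Inject
    private AuthService authService;

    // Produces the qualified User bean wherever
    // @Inject @LoggedInUser User is requested.
    @Produces
    @LoggedInUser
    @SessionScoped
    public User produceLoggedInUser() {
        // retrieve the current user, e.g. from the security context
        return authService.getLoggedInUser();
    }
}
```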
Note that Spring also supports javax.inject annotations. If you add the javax.inject dependency to your Spring project, you can also use @Inject and @javax.inject.Qualifier. It is actually a good idea to do this because it reduces the Spring dependencies in your Java code.

Conclusion We can use custom annotations and scoped beans to inject domain objects into other components. Injecting domain objects can make your code easier to read and can lead to cleaner dependencies. If you only inject AuthService to obtain the logged in user, you actually depend on the logged in user and not on AuthService. On the downside, it couples your code more strongly to the Dependency Injection framework, which has to manage bean scopes for you. If you want to keep the ability to use your classes outside of a Dependency Injection container, this can be a problem. Which types of domain objects are suitable for injection highly depends on the application you are working on. Good candidates are domain objects you use often and which do not depend on any method or request parameters. The currently logged in user is an object that might often be suitable for injection.

You can find the source of the shown example on GitHub.Reference: Injecting domain objects instead of infrastructure components from our JCG partner Michael Scharhag at the mscharhag, Programming and Stuff blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.