
Server side logging from browser side JavaScript code

Application logging is something we all do in our applications that get deployed on an application server, right? Using frameworks like Log4j or Logback seems like a no-brainer to most Java developers. But what about the code we've written that runs in those pesky browsers? Apart from the occasional console.log() statement used during debugging, we don't give much thought to JavaScript logging. I find this situation very regrettable, since nowadays the trend is to move our application logic to the browser. And with it, interesting events happening in the browser might go unnoticed, and any bugs that happen, no matter how well we've developed and tested our client side code, might prove needlessly hard to reproduce and therefore fix. In this blog post I'll demonstrate a very basic setup to log messages from the browser on the server, using some very basic JavaScript with jQuery and a simple Spring controller with SLF4J.

Server side code

Assuming you already have an existing Spring web application up and running and are using SLF4J for your application logging, all we have to do is add an additional @Controller that will take care of logging any incoming messages.

Our JSLogger controller:

package it.jdev.demo;

import java.lang.invoke.MethodHandles;

import javax.servlet.http.HttpServletRequest;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.http.HttpStatus;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseStatus;

@Controller
@RequestMapping(value = "/js-log")
public class JSLogger {

    private static final Logger LOGGER = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());

    @RequestMapping(method = RequestMethod.POST)
    @ResponseStatus(HttpStatus.NO_CONTENT)
    public void logError(final HttpServletRequest request, @RequestBody(required = true) final String logMessage) {
        final String ipAddress = request.getRemoteAddr();
        final String hostname = request.getRemoteHost();
        LOGGER.warn("Received client-side logmessage ({}/{}): {}", ipAddress, hostname, logMessage);
    }
}
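The controller above logs every incoming message at WARN. If you would rather let the browser pick the log level, which is the elaboration suggested at the end of this post, one possible variation is sketched below. This is only a sketch of mine, not part of the original setup: the LogEvent DTO and its field names are invented for illustration, and it assumes Jackson is on the classpath so that @RequestBody can bind the JSON payload.

package it.jdev.demo;

import java.lang.invoke.MethodHandles;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.http.HttpStatus;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseStatus;

@Controller
@RequestMapping(value = "/js-log")
public class LevelAwareJSLogger {

    private static final Logger LOGGER = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());

    // Hypothetical DTO matching the JSON the browser would send; not from the original post.
    public static class LogEvent {
        private String level;   // e.g. "DEBUG", "INFO", "WARN", "ERROR"
        private String message;

        public String getLevel() { return level; }
        public void setLevel(String level) { this.level = level; }
        public String getMessage() { return message; }
        public void setMessage(String message) { this.message = message; }
    }

    @RequestMapping(method = RequestMethod.POST)
    @ResponseStatus(HttpStatus.NO_CONTENT)
    public void logEvent(@RequestBody final LogEvent event) {
        // Map the client-supplied level onto the matching SLF4J call,
        // falling back to WARN for anything missing or unrecognized.
        final String message = "Received client-side logmessage: " + event.getMessage();
        final String level = event.getLevel() == null ? "WARN" : event.getLevel().toUpperCase();
        switch (level) {
            case "DEBUG": LOGGER.debug(message); break;
            case "INFO":  LOGGER.info(message);  break;
            case "ERROR": LOGGER.error(message); break;
            default:      LOGGER.warn(message);  break;
        }
    }
}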
JavaScript code

For the JavaScript part of our logging solution we'll add a JS file called jdev.js. In it we'll define a module named JDEV.logging that contains a method called logToServer(). This method sends an Ajax message to our controller with a little bit of help from jQuery. Just make sure that the url variable points to the endpoint configured in our controller's @RequestMapping.

Our JavaScript logging module:

var JDEV = JDEV || {};

JDEV.namespace = function(ns_string) {
    var parts = ns_string.split('.');
    var parent = JDEV;
    // strip redundant leading global
    if (parts[0] === "JDEV") {
        parts = parts.slice(1);
    }
    for (var i = 0; i < parts.length; i += 1) {
        // create a property if it doesn't exist
        if (typeof parent[parts[i]] === "undefined") {
            parent[parts[i]] = {};
        }
        parent = parent[parts[i]];
    }
    return parent;
};

JDEV.namespace('logging');
JDEV.logging = (function() {

    var logToServer = function(logMessage) {
        var logEventObject = {
            "message": logMessage,
            "location": location.href,
            "browser": navigator.userAgent
        };
        var logMsg = JSON.stringify(logEventObject);
        var url = "js-log";
        $.ajax({
            type: "POST",
            url: url,
            data: logMsg,
            contentType: "application/json",
            cache: false
        });
    };

    return {
        logToServer: logToServer
    };
})();

All that is left to do is include jQuery and our jdev.js file in our HTML pages, and call our new logging method instead of console.log():

Wiring up the JS code:

<script src="//code.jquery.com/jquery-1.11.0.min.js"></script>
<script type="text/javascript" src="js/jdev.js"></script>
<script type="text/javascript">
    $(document).ready(function() {
        JDEV.logging.logToServer("Hi from the browser...");
    });
</script>
</body>
</html>

If everything is set up correctly, you should wind up with a log entry similar to this:

WARN : Received client-side logmessage (127.0.0.1/localhost): {"message":"Hi from the browser...","location":"http://localhost:8080/demo/","browser":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.104 Safari/537.36"}

Wrapping up

I've demonstrated a very simple design that makes it possible to log entries in your server side log that originate from browser side JavaScript code. Of course, you can elaborate on this example, e.g. by sending along the log level with the Ajax call, as sketched after the controller above.

Reference: Server side logging from browser side JavaScript code from our JCG partner Wim van Haaren at the JDev blog.

Spring from the Trenches: Using Null Values in DbUnit Datasets

If we are writing integration tests for an application that uses Spring Framework, we can integrate DbUnit with the Spring testing framework by using Spring Test DbUnit. However, this integration is not problem free. Often we have to insert null values into the database before our tests are run, or verify that the value saved to a specific table column is null. These are very basic use cases, but it is tricky to write integration tests that support them. This blog post identifies the problems related to null values and describes how we can solve them. Let's start by taking a quick look at the system under test. If you don't know how to write integration tests for your repositories, you should read my blog post titled Spring Data JPA Tutorial: Integration Testing. It explains how you can write integration tests for Spring Data JPA repositories, but you can use the same approach for writing tests for other Spring powered repositories that use a relational database.

The System Under Test

The tested "application" has one entity and one Spring Data JPA repository that provides CRUD operations for that entity. Our entity class is called Todo and the relevant part of its source code looks as follows:

import javax.persistence.*;

@Entity
@Table(name = "todos")
public class Todo {

    private static final int MAX_LENGTH_DESCRIPTION = 500;
    private static final int MAX_LENGTH_TITLE = 100;

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    @Column(name = "description", nullable = true, length = MAX_LENGTH_DESCRIPTION)
    private String description;

    @Column(name = "title", nullable = false, length = MAX_LENGTH_TITLE)
    private String title;

    @Version
    private long version;

    //Constructors, builder class, and getters are omitted.
}

You can get the full source code of the Todo class from Github. Strictly speaking, we wouldn't need to use the builder pattern, because our entity has only two String fields that are set when a new Todo object is created. However, I used it here because it makes our tests easier to read. Since the builder class is omitted from the listing above, a minimal sketch of what it might look like follows; this is my reconstruction, based only on the getBuilder().title(...).description(...).build() calls used in the tests below, not the actual source:
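public class Todo {

    // JPA annotations and other members shown above are omitted here.
    private Long id;
    private String description;
    private String title;
    private long version;

    public static Builder getBuilder() {
        return new Builder();
    }

    public static class Builder {

        private final Todo built = new Todo();

        public Builder title(String title) {
            built.title = title;
            return this;
        }

        public Builder description(String description) {
            built.description = description;
            return this;
        }

        public Todo build() {
            return built;
        }
    }
}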
Our Spring Data JPA repository interface is called TodoRepository, and it extends the CrudRepository<T, ID extends Serializable> interface. This repository provides CRUD operations for Todo objects. It also declares one query method which returns all todo entries whose description matches the given search term. The source code of the TodoRepository interface looks as follows:

import org.springframework.data.repository.CrudRepository;

import java.util.List;

public interface TodoRepository extends CrudRepository<Todo, Long> {

    List<Todo> findByDescription(String description);
}

Additional Reading:
- The Javadoc of the CrudRepository interface
- Spring Data JPA Tutorial
- Spring Data JPA – Reference Documentation

Let's move on and find out how we can deal with null values when we write integration tests for code that either reads information from a relational database or saves information to it.

Dealing with Null Values

When we write integration tests for our data access code, we have to initialize the database into a known state before each test case and ensure that the correct data is written to the database. This section identifies the problems we face when we are writing integration tests that:

- use flat XML datasets, and
- write null values to the database or ensure that the value of a table column is null.

We will also learn how we can solve these problems.

Inserting Null Values to the Database

When we write integration tests that read information from the database, we have to initialize that database into a known state before our tests are invoked, and sometimes we have to insert null values into the database. Because we use flat XML datasets, we can insert a null value into a table column by omitting the corresponding attribute value. This means that if we want to insert a null value into the description column of the todos table, we can do so by using the following DbUnit dataset:

<dataset>
    <todos id="1" title="FooBar" version="0"/>
</dataset>

However, often we have to insert more than one row into the used database table. The following DbUnit dataset (todo-entries.xml) inserts two rows into the todos table:

<dataset>
    <todos id="1" title="FooBar" version="0"/>
    <todos id="2" description="description" title="title" version="0"/>
</dataset>

Let's find out what happens when we write an integration test for the findByDescription() method of the TodoRepository interface and initialize our database by using the previous dataset (todo-entries.xml). The source code of our integration test looks as follows:

import com.github.springtestdbunit.DbUnitTestExecutionListener;
import com.github.springtestdbunit.annotation.DatabaseSetup;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.TestExecutionListeners;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.support.DependencyInjectionTestExecutionListener;
import org.springframework.test.context.support.DirtiesContextTestExecutionListener;
import org.springframework.test.context.transaction.TransactionalTestExecutionListener;

import java.util.List;

import static org.assertj.core.api.Assertions.assertThat;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = {PersistenceContext.class})
@TestExecutionListeners({
        DependencyInjectionTestExecutionListener.class,
        DirtiesContextTestExecutionListener.class,
        TransactionalTestExecutionListener.class,
        DbUnitTestExecutionListener.class
})
public class ITTodoRepositoryTest {

    private static final Long ID = 2L;
    private static final String DESCRIPTION = "description";
    private static final String TITLE = "title";
    private static final long VERSION = 0L;

    @Autowired
    private TodoRepository repository;

    @Test
    @DatabaseSetup("todo-entries.xml")
    public void findByDescription_ShouldReturnOneTodoEntry() {
        List<Todo> todoEntries = repository.findByDescription(DESCRIPTION);
        assertThat(todoEntries).hasSize(1);

        Todo found = todoEntries.get(0);
        assertThat(found.getId()).isEqualTo(ID);
        assertThat(found.getTitle()).isEqualTo(TITLE);
        assertThat(found.getDescription()).isEqualTo(DESCRIPTION);
        assertThat(found.getVersion()).isEqualTo(VERSION);
    }
}

When we run this integration test, we get the following assertion error:

java.lang.AssertionError: Expected size:<1> but was:<0> in: <[]>

This means that the correct todo entry was not found from the database. What happened? Our query method is so simple that it should have worked, especially since we inserted the correct data into the database before our test case was invoked. Well, actually the description columns of both rows are null. The DbUnit FAQ describes why this happened:

"DbUnit uses the first tag for a table to define the columns to be populated.
If the following records for this table contain extra columns, these ones will therefore not be populated."

It also provides a solution to this problem:

"Since DbUnit 2.3.0 there is a functionality called 'column sensing' which basically reads in the whole XML into a buffer and dynamically adds new columns as they appear."

We could solve this problem by reversing the order of the todos elements, but this is cumbersome because we would have to remember to do it every time we create new datasets. We should use column sensing instead, because it eliminates the possibility of a human error. We can enable column sensing by following these steps:

1. Create a dataset loader class that extends the AbstractDataSetLoader class.
2. Override the protected IDataSet createDataSet(Resource resource) method of the AbstractDataSetLoader class.
3. Implement this method by enabling column sensing and returning a new FlatXmlDataSet object.

The source code of the ColumnSensingFlatXMLDataSetLoader class looks as follows:

import com.github.springtestdbunit.dataset.AbstractDataSetLoader;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.springframework.core.io.Resource;

import java.io.InputStream;

public class ColumnSensingFlatXMLDataSetLoader extends AbstractDataSetLoader {

    @Override
    protected IDataSet createDataSet(Resource resource) throws Exception {
        FlatXmlDataSetBuilder builder = new FlatXmlDataSetBuilder();
        builder.setColumnSensing(true);
        try (InputStream inputStream = resource.getInputStream()) {
            return builder.build(inputStream);
        }
    }
}

Additional Reading:
- The Javadoc of the FlatXmlDataSet class

We can now configure our test class to use this dataset loader by annotating our test class with the @DbUnitConfiguration annotation and setting the value of its loader attribute to ColumnSensingFlatXMLDataSetLoader.class.
The source code of our fixed integration test looks as follows:

import com.github.springtestdbunit.DbUnitTestExecutionListener;
import com.github.springtestdbunit.annotation.DatabaseSetup;
import com.github.springtestdbunit.annotation.DbUnitConfiguration;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.TestExecutionListeners;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.support.DependencyInjectionTestExecutionListener;
import org.springframework.test.context.support.DirtiesContextTestExecutionListener;
import org.springframework.test.context.transaction.TransactionalTestExecutionListener;

import java.util.List;

import static org.assertj.core.api.Assertions.assertThat;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = {PersistenceContext.class})
@TestExecutionListeners({
        DependencyInjectionTestExecutionListener.class,
        DirtiesContextTestExecutionListener.class,
        TransactionalTestExecutionListener.class,
        DbUnitTestExecutionListener.class
})
@DbUnitConfiguration(dataSetLoader = ColumnSensingFlatXMLDataSetLoader.class)
public class ITTodoRepositoryTest {

    private static final Long ID = 2L;
    private static final String DESCRIPTION = "description";
    private static final String TITLE = "title";
    private static final long VERSION = 0L;

    @Autowired
    private TodoRepository repository;

    @Test
    @DatabaseSetup("todo-entries.xml")
    public void findByDescription_ShouldReturnOneTodoEntry() {
        List<Todo> todoEntries = repository.findByDescription(DESCRIPTION);
        assertThat(todoEntries).hasSize(1);

        Todo found = todoEntries.get(0);
        assertThat(found.getId()).isEqualTo(ID);
        assertThat(found.getTitle()).isEqualTo(TITLE);
        assertThat(found.getDescription()).isEqualTo(DESCRIPTION);
        assertThat(found.getVersion()).isEqualTo(VERSION);
    }
}

When we run our integration test for the second time, it passes. Let's find out how we can verify that null values are saved to the database.

Verifying That the Value of a Table Column Is Null

When we write integration tests that save information to the database, we have to ensure that the correct information is really saved to the database, and sometimes we have to verify that the value of a table column is null. For example, if we write an integration test which verifies that the correct information is saved to the database when we create a todo entry that has no description, we have to ensure that a null value is inserted into the description column of the todos table.
The source code of our integration test looks as follows:

import com.github.springtestdbunit.DbUnitTestExecutionListener;
import com.github.springtestdbunit.annotation.DatabaseSetup;
import com.github.springtestdbunit.annotation.DbUnitConfiguration;
import com.github.springtestdbunit.annotation.ExpectedDatabase;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.TestExecutionListeners;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.support.DependencyInjectionTestExecutionListener;
import org.springframework.test.context.support.DirtiesContextTestExecutionListener;
import org.springframework.test.context.transaction.TransactionalTestExecutionListener;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = {PersistenceContext.class})
@TestExecutionListeners({
        DependencyInjectionTestExecutionListener.class,
        DirtiesContextTestExecutionListener.class,
        TransactionalTestExecutionListener.class,
        DbUnitTestExecutionListener.class
})
@DbUnitConfiguration(dataSetLoader = ColumnSensingFlatXMLDataSetLoader.class)
public class ITTodoRepositoryTest {

    private static final String DESCRIPTION = "description";
    private static final String TITLE = "title";

    @Autowired
    private TodoRepository repository;

    @Test
    @DatabaseSetup("no-todo-entries.xml")
    @ExpectedDatabase("save-todo-entry-without-description-expected.xml")
    public void save_WithoutDescription_ShouldSaveTodoEntryToDatabase() {
        Todo todoEntry = Todo.getBuilder()
                .title(TITLE)
                .description(null)
                .build();

        repository.save(todoEntry);
    }
}

This is not a good integration test because it only tests that Spring Data JPA and Hibernate are working correctly. We shouldn't waste our time writing tests for frameworks. If we don't trust a framework, we shouldn't use it. If you want to learn to write good integration tests for your data access code, you should read my tutorial titled Writing Tests for Data Access Code.

The DbUnit dataset (no-todo-entries.xml) that is used to initialize our database looks as follows:

<dataset>
    <todos/>
</dataset>

Because we don't set the description of the saved todo entry, the description column of the todos table should be null. This means that we should omit it from the dataset which verifies that the correct information is saved to the database. This dataset (save-todo-entry-without-description-expected.xml) looks as follows:

<dataset>
    <todos id="1" title="title" version="0"/>
</dataset>

When we run our integration test, it fails and we see the following error message:

junit.framework.ComparisonFailure: column count (table=todos, expectedColCount=3, actualColCount=4)
Expected :[id, title, version]
Actual   :[DESCRIPTION, ID, TITLE, VERSION]

The problem is that DbUnit expects the todos table to have only id, title, and version columns. The reason for this is that these columns are the only columns found from the first (and only) row of our dataset. We can solve this problem by using a ReplacementDataSet. A ReplacementDataSet is a decorator that replaces the placeholders found from a flat XML dataset file with the replacement objects. Let's modify our custom dataset loader class to return a ReplacementDataSet object that replaces '[null]' strings with null.
We can do this by making the following changes to our custom dataset loader:

1. Add a private createReplacementDataSet() method to the dataset loader class. This method returns a ReplacementDataSet object and takes a FlatXmlDataSet object as a method parameter.
2. Implement this method by creating a new ReplacementDataSet object and returning the created object.
3. Modify the createDataSet() method to invoke the private createReplacementDataSet() method and return the created ReplacementDataSet object.

The source code of the ColumnSensingReplacementDataSetLoader class looks as follows:

import com.github.springtestdbunit.dataset.AbstractDataSetLoader;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.ReplacementDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.springframework.core.io.Resource;

import java.io.InputStream;

public class ColumnSensingReplacementDataSetLoader extends AbstractDataSetLoader {

    @Override
    protected IDataSet createDataSet(Resource resource) throws Exception {
        FlatXmlDataSetBuilder builder = new FlatXmlDataSetBuilder();
        builder.setColumnSensing(true);
        try (InputStream inputStream = resource.getInputStream()) {
            return createReplacementDataSet(builder.build(inputStream));
        }
    }

    private ReplacementDataSet createReplacementDataSet(FlatXmlDataSet dataSet) {
        ReplacementDataSet replacementDataSet = new ReplacementDataSet(dataSet);
        //Configure the replacement dataset to replace '[null]' strings with null.
        replacementDataSet.addReplacementObject("[null]", null);
        return replacementDataSet;
    }
}

Additional Reading:
- The most commonly used implementations of the IDataSet interface
- The Javadoc of the ReplacementDataSet class

We can fix our integration test by following these steps:

1. Configure our test class to load the used DbUnit datasets by using the ColumnSensingReplacementDataSetLoader class.
2. Modify our dataset to verify that the value of the description column is null.

First, we have to configure our test class to load the DbUnit datasets by using the ColumnSensingReplacementDataSetLoader class. Because we have already annotated our test class with @DbUnitConfiguration, we only have to change the value of its loader attribute to ColumnSensingReplacementDataSetLoader.class.
The source code of the fixed test class looks as follows:

import com.github.springtestdbunit.DbUnitTestExecutionListener;
import com.github.springtestdbunit.annotation.DatabaseSetup;
import com.github.springtestdbunit.annotation.DbUnitConfiguration;
import com.github.springtestdbunit.annotation.ExpectedDatabase;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.TestExecutionListeners;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.support.DependencyInjectionTestExecutionListener;
import org.springframework.test.context.support.DirtiesContextTestExecutionListener;
import org.springframework.test.context.transaction.TransactionalTestExecutionListener;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = {PersistenceContext.class})
@TestExecutionListeners({
        DependencyInjectionTestExecutionListener.class,
        DirtiesContextTestExecutionListener.class,
        TransactionalTestExecutionListener.class,
        DbUnitTestExecutionListener.class
})
@DbUnitConfiguration(dataSetLoader = ColumnSensingReplacementDataSetLoader.class)
public class ITTodoRepositoryTest {

    private static final String DESCRIPTION = "description";
    private static final String TITLE = "title";

    @Autowired
    private TodoRepository repository;

    @Test
    @DatabaseSetup("no-todo-entries.xml")
    @ExpectedDatabase("save-todo-entry-without-description-expected.xml")
    public void save_WithoutDescription_ShouldSaveTodoEntryToDatabase() {
        Todo todoEntry = Todo.getBuilder()
                .title(TITLE)
                .description(null)
                .build();

        repository.save(todoEntry);
    }
}

Second, we have to verify that a null value is saved to the description column of the todos table. We can do this by adding a description attribute to the only todos element of our dataset and setting the value of the description attribute to '[null]'. Our fixed dataset (save-todo-entry-without-description-expected.xml) looks as follows:

<dataset>
    <todos id="1" description="[null]" title="title" version="0"/>
</dataset>

When we run our integration test, it passes. Let's move on and summarize what we learned from this blog post.

Summary

This blog post has taught us four things:

- DbUnit assumes that a database table contains only those columns that are found from the first tag that specifies the columns of a table row. If we want to override this behavior, we have to enable the column sensing feature of DbUnit.
- If we want to ensure that a null value is saved to the database, we have to use replacement datasets.
- We learned how we can create a custom dataset loader that creates replacement datasets and uses column sensing.
- We learned how we can configure the dataset loader that is used to load our DbUnit datasets.

You can get the example application of this blog post from Github.

Reference: Spring from the Trenches: Using Null Values in DbUnit Datasets from our JCG partner Petri Kainulainen at the Petri Kainulainen blog.

How to Upload Images to Dropbox in Java

This tutorial explains how to upload images to Dropbox and get the public URL of the uploaded image. First of all, we have to create a Dropbox API app using the app console. Once you create the app, you can get the app key and secret key in the app properties. Now add the following dependency to your pom file:

<dependency>
    <groupId>com.dropbox.core</groupId>
    <artifactId>dropbox-core-sdk</artifactId>
    <version>1.7.7</version>
</dependency>

Now this Java program will do the rest. Replace your app key and secret key in the program. Run this Java program from the command line and it will ask for the code; you will get the code by following the URL printed on the console. For getting the public URL, we just need to use createShareableUrl of the DbxClient class.

import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.Locale;

import com.dropbox.core.DbxAppInfo;
import com.dropbox.core.DbxAuthFinish;
import com.dropbox.core.DbxClient;
import com.dropbox.core.DbxEntry;
import com.dropbox.core.DbxException;
import com.dropbox.core.DbxRequestConfig;
import com.dropbox.core.DbxWebAuthNoRedirect;
import com.dropbox.core.DbxWriteMode;

public class UploadImages {

    public static void main(String[] args) throws IOException, DbxException {
        final String DROP_BOX_APP_KEY = "APPKEY";
        final String DROP_BOX_APP_SECRET = "SECRETKEY";
        String rootDir = "C:\\Users\\Downloads\\";

        DbxAppInfo dbxAppInfo = new DbxAppInfo(DROP_BOX_APP_KEY, DROP_BOX_APP_SECRET);
        DbxRequestConfig reqConfig = new DbxRequestConfig("javarootsDropbox/1.0", Locale.getDefault().toString());
        DbxWebAuthNoRedirect webAuth = new DbxWebAuthNoRedirect(reqConfig, dbxAppInfo);

        String authorizeUrl = webAuth.start();
        System.out.println("1. Go to this URL : " + authorizeUrl);
        System.out.println("2. Click \"Allow\" (you might have to log in first)");
        System.out.println("3. Copy the authorization code and paste here ");
        String code = new BufferedReader(new InputStreamReader(System.in)).readLine().trim();

        DbxAuthFinish authFinish = webAuth.finish(code);
        String accessToken = authFinish.accessToken;

        DbxClient client = new DbxClient(reqConfig, accessToken);
        System.out.println("account name is : " + client.getAccountInfo().displayName);

        File inputFile = new File(rootDir + "images\\" + "javaroots.jpg");
        FileInputStream inputStream = new FileInputStream(inputFile);
        try {
            DbxEntry.File uploadedFile = client.uploadFile("/javaroots.jpg", DbxWriteMode.add(), inputFile.length(), inputStream);
            String sharedUrl = client.createShareableUrl("/javaroots.jpg");
            System.out.println("Uploaded: " + uploadedFile.toString() + " URL " + sharedUrl);
        } finally {
            inputStream.close();
        }
    }
}

Take reference from this official Dropbox link. You can download the full project from this link. Post comments and suggestions!

Reference: How to Upload Images to DropBox In Java from our JCG partner Abhishek Somani at the Java, J2EE, Server blog.

FREE Programming books with the JCG Newsletter

Here at Java Code Geeks we know how much you love books about programming; we are geeks ourselves. After all, a programmer who respects himself should always have his face in a book; he has to keep up with the latest technologies and developments. For this reason, we have decided to distribute 8 of our books for free. You can get access to them by joining our Newsletter. Additionally, you will also receive weekly news, tips and special offers delivered to your inbox, courtesy of Java Code Geeks! The material covers a wide array of topics, from the new Java 8 release to JVM and Android programming. So let's see what you get in detail!

JPA Mini Book
One of the problems of object orientation is how to map the objects the way the database requires. JPA allows us to work with Java classes, as it provides a transparent layer over database-specific details; JPA does the hard work of mapping table to class structure and semantics for the developer. Learn how to leverage the power of JPA in order to create robust and flexible Java applications. With this Mini Book, you will get introduced to JPA and smoothly transition to more advanced concepts.

JVM Troubleshooting Guide
The Java bytecode produced when applications are compiled is eventually executed by the Java Virtual Machine (JVM). The JVM has grown to be a sophisticated tool, but it essentially remains a "black box" for most Java programmers. This is especially true when problems and issues arise from its erroneous use. With this guide, you will delve into the intricacies of the JVM and learn how to perform troubleshooting and problem resolution.

Android UI Design
Android is an operating system based on the Linux kernel and designed primarily for touchscreen mobile devices such as smartphones and tablet computers. Android's user interface is based on direct manipulation, using touch inputs that loosely correspond to real-world actions, like swiping, tapping, pinching and reverse pinching, to manipulate on-screen objects. In this book, you will get a look at the fundamentals of Android UI design. You will understand user input, views and layouts, as well as adapters and fragments. Furthermore, you will learn how to add multimedia to an app and also leverage themes and styles.

Java 8 Features
Without a doubt, the Java 8 release is the greatest thing in the Java world since Java 5 (released quite a while ago, back in 2004). It brings tons of new features to Java as a language, its compiler, libraries, tools and the JVM (Java Virtual Machine) itself. In this guide we are going to take a look at all these changes and demonstrate the different usage scenarios with real examples. The tutorial consists of several parts, each one touching a specific side of the platform: language, compiler, libraries, tools, runtime (JVM).

Java Interview Questions
In this guide we will discuss the different types of questions that can be used in a Java interview, in order for the employer to test your skills in Java and object-oriented programming in general. In the book's sections we will discuss object-oriented programming and its characteristics, general questions regarding Java and its functionality, collections in Java, garbage collectors, exception handling, Java applets, Swing, JDBC, Remote Method Invocation (RMI), Servlets and JSP.

Spring Interview Questions
This is a summary of some of the most important questions concerning the Spring Framework that you may be asked to answer in an interview or in an interview test procedure!
There is no need to worry about your next interview test, because Java Code Geeks are here for you! The majority of the things you may be asked are collected in this guide. All core modules, from basic Spring functionality such as Spring Beans up to the Spring MVC framework, are presented and described in short.

Java Annotations Tutorial
Annotations in Java are a major feature and every Java developer should know how to utilize them. Annotations were introduced in Java back in the J2SE 5 release, and the main reason was the need to provide a mechanism that allows programmers to write metadata about their code directly in the code itself. We have provided an abundance of tutorials here at Java Code Geeks, and now it is time to gather all the information around annotations under one reference guide for your reading pleasure!

JUnit Tutorial for Unit Testing
A unit can be a function, a class, a package, or a subsystem. So, the term unit testing refers to the practice of testing such small units of your code, so as to ensure that they work as expected. For example, we can test whether an output is what we expected to see given some inputs, or whether a condition is true or false. The most popular testing framework in Java is JUnit, and we have provided plenty of JUnit tutorials. Now, we have decided to gather all the JUnit features in one detailed guide for your convenience. We hope you like it!

So, fellow geeks, hop on our newsletter and enjoy our kick-ass books!

The Drools and jBPM KIE Apps platform

With the Drools and jBPM (KIE) 6 series came a new workbench, with the promise of eventual end user extensibility. I finally have some teaser videos to show this working and what's in store. Make sure you select 1080p and go full screen to see them at their best.

What you see in these videos is the same workbench available on the Drools videos page. Once this stuff is released, you'll be able to extend an existing Drools or jBPM (KIE) installation, or make a new one from scratch that doesn't have Drools or jBPM in it; i.e. the workbench and its extension stuff is available standalone, and you get to choose which plugins you do or don't want.

Here is a demo showing the new Bootstrap dynamic grid view builder used to build a perspective, which now doubles as an app. It uses the new RAD, JSFiddle inspired, environment to author a simple AngularJS plugin extension. This all writes to a GIT backend, so you could author these with IntelliJ or Eclipse and just push them back into the GIT repo. It then demonstrates the creation of a dynamic menu and registers our app there. It also demonstrates the new app directory. Apps are given labels and can then be discovered in the apps directory, instead of, or as well as, top menu entries. Over 2015 we'll be building a case management system which will complement this perfectly as the domain front end, all creating a fantastic Self Service Software platform.

http://youtu.be/KoJ5A5g7y4E

Here is a slightly early video showing our app builder working with DashBuilder:

http://youtu.be/Yhg31m4kRsM

Other components such as our Human Tasks and Forms will be available too. We also have some cool infrastructure coming for event publication and capture, and timeline reporting, so you can visualise social activity within your organization. You'll be able to place the timeline components you see in this blog on your app pages:

http://blog.athico.com/2014/09/activity-insight-coming-in-drools-jbpm.html

All this is driven by our new project UberFire, which provides the workbench infrastructure for all of this. The project is not yet announced or released, but we will do so soon; the website is currently just a placeholder, and we'll blog as soon as there is something to see!

Reference: The Drools and jBPM KIE Apps platform from our JCG partner Mark Proctor at the Drools & jBPM blog.

Beginner’s Guide To Hazelcast Part 1

Introduction

I am going to be doing a series on Hazelcast. I learned about this product from Twitter. They decided to follow me, and after some research into what they do, I decided to follow them. I tweeted that Hazelcast would be a great backbone for a distributed password cracker. This got some interest and I decided to go make one. A vice president of Hazelcast started corresponding with me, and we decided that while a cracker was a good project, the community (and I) would benefit from having a series of posts for beginners. I have been getting a lot of good information from the book preview The Book of Hazelcast, found on www.hazelcast.com.

What is Hazelcast?

Hazelcast is a distributed, in-memory database. There are projects all over the world using Hazelcast. The code is open source under the Apache License 2.0.

Features

There are a lot of features already built into Hazelcast. Here are some of them:

- Auto discovery of nodes on a network
- High availability
- In-memory backups
- The ability to cache data
- Distributed thread pools
  - Distributed Executor Service
- The ability to have data in different partitions
- The ability to persist data asynchronously or synchronously
- Transactions
- SSL support
- Structures to store data: IList, IMap, MultiMap, ISet
- Structures for communication among different processes: IQueue, ITopic
- Atomic operations: IAtomicLong
- Id generation: IdGenerator
- Locking: ISemaphore, ICondition, ILock, ICountDownLatch

Working with Hazelcast

Just playing around with Hazelcast and reading has taught me to assume these things:

- The data will be stored as an array of bytes. (This is not an assumption, I got this directly from the book.)
- The data will go over the network.
- The data is remote.
- If the data is not in memory, it doesn't exist.

Let me explain these assumptions:

The data will be stored as an array of bytes

I got this information from The Book of Hazelcast, so it is really not an assumption. This is important because not only is the data stored that way, so is the key. This makes life very interesting if one uses something other than a primitive or a String as a key. The developer of the key's hashCode() and equals() must think about them in terms of the key as an array of bytes instead of as a class. To make that concrete, here is a sketch of what a well-behaved custom key could look like; the AuctionKey class is my own hypothetical example, not from the original post. The point is that two equal keys must also serialize to identical byte arrays:
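import java.io.Serializable;
import java.util.Objects;

// Hypothetical composite key. Hazelcast stores and compares it in serialized
// (byte array) form, so all fields must be immutable and deterministic.
public final class AuctionKey implements Serializable {

    private static final long serialVersionUID = 1L;

    private final String realm;
    private final long itemId;

    public AuctionKey(String realm, long itemId) {
        this.realm = realm;
        this.itemId = itemId;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof AuctionKey)) return false;
        AuctionKey other = (AuctionKey) o;
        return itemId == other.itemId && Objects.equals(realm, other.realm);
    }

    @Override
    public int hashCode() {
        return Objects.hash(realm, itemId);
    }
}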
The data will go over the network

This is a distributed database, and so parts of the data will be stored in other nodes. There are also backups and caching that happen too. There are techniques and settings to reduce transferring data over the network, but if one wants high availability, backups must be done.

The data is remote

This is a distributed database, and so parts of the database will be stored on other nodes. I put in this assumption not to resign to the fact that the data is remote, but to motivate designs that make sure operations are performed where most of the data is located. If the developer is skilled enough, this can be kept to a minimum.

If the data is not in memory, it doesn't exist

Do not forget that this is an in-memory database. If it doesn't get loaded into memory, the database will not know that data is stored somewhere else. This database doesn't persist data to bring it up later. It persists because the data is important. There is no bringing it back from disk once it is out of memory, like a conventional database (MySQL) would do.

Data Storage

Java developers will be happy to know that Hazelcast's data storage containers, except one, are extensions of the java.util collection interfaces. For example, an IList follows the same method contracts as java.util.List. Here is a list of the different data storage types:

- IList – This keeps a number of objects in the order they were put in.
- IQueue – This follows BlockingQueue and can be used as an alternative to a Message Queue in JMS. This can be persisted via a QueueStore.
- IMap – This extends ConcurrentMap. It can also be persisted by a MapStore. It also has a number of other features that I will talk about in another post.
- ISet – This keeps a set of unique elements where order is not guaranteed.
- MultiMap – This does not follow a typical map, as there can be multiple values per key.

Example Setup

For all the features that Hazelcast contains, the initial setup steps are really easy:

1. Download the Hazelcast zip file at www.hazelcast.org and extract the contents.
2. Add the jar files found in the lib directory to one's classpath.
3. Create a file named hazelcast.xml and put the following into the file:

<?xml version="1.0" encoding="UTF-8"?>
<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config http://www.hazelcast.com/schema/config/hazelcast-config-3.0.xsd"
           xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <network>
        <join><multicast enabled="true"/></join>
    </network>

    <map name="a"></map>
</hazelcast>

Hazelcast looks in a few places for a configuration file:

- The path defined by the property hazelcast.config
- hazelcast.xml in the classpath, if classpath is included in the hazelcast.config
- The working directory
- If all else fails, hazelcast-default.xml is loaded, which is in the hazelcast.jar

If one does not want to deal with a configuration file at all, the configuration can be done programmatically. The configuration example here defines multicast for joining together. It also defines the IMap "a".

A Warning About Configuration

Hazelcast does not copy configurations to each node. So if one wants to be able to share a data structure, it needs to be defined in every node exactly the same.

For those who would rather skip the XML file, here is a minimal sketch of the equivalent programmatic configuration. It is my own illustration, assuming the Hazelcast 3.x Config API, and mirrors the multicast join and the "a" map defined in the XML above:
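import com.hazelcast.config.Config;
import com.hazelcast.config.MapConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class ProgrammaticSetup {

    public static void main(String[] args) {
        // Build the equivalent of the hazelcast.xml above in code.
        Config config = new Config();
        config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(true);
        config.addMapConfig(new MapConfig("a"));

        // Start a node using the programmatic configuration.
        HazelcastInstance instance = Hazelcast.newHazelcastInstance(config);
    }
}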
Code

This code brings up two nodes, places values into the first instance's IMap using an IdGenerator to generate keys, and reads the data from instance2.

package hazelcastsimpleapp;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IdGenerator;

import java.util.Map;

/**
 * @author Daryl
 */
public class HazelcastSimpleApp {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        HazelcastInstance instance2 = Hazelcast.newHazelcastInstance();

        Map map = instance.getMap("a");
        IdGenerator gen = instance.getIdGenerator("gen");
        for (int i = 0; i < 10; i++) {
            map.put(gen.newId(), "stuff " + i);
        }

        Map map2 = instance2.getMap("a");
        for (Map.Entry entry : map2.entrySet()) {
            System.out.printf("entry: %d; %s\n", entry.getKey(), entry.getValue());
        }
        System.exit(0);
    }
}

Amazingly simple, isn't it! Notice that I didn't even use the IMap interface when I retrieved an instance of the map; I just used the java.util.Map interface. This isn't good for using the distributed features of Hazelcast, but for this example it works fine.

One can observe the assumptions at work here. The first assumption is storing the information as an array of bytes. Notice the data and keys are serializable; this is important because that is needed to store the data. The second and third assumptions hold true with the data being accessed by the instance2 node. The fourth assumption holds true because every value that was put into the "a" map was displayed when read. All of this example can be found at http://darylmathisonblog.googlecode.com/svn/trunk/HazelcastSimpleApp using Subversion. The project was made using NetBeans 8.0.

Conclusion

A quick overview of the numerous features of Hazelcast was given, with a simple example showing IMap and IdGenerator. A list of assumptions was discussed that apply when developing in a distributed, in-memory database environment.

References
- The Book of Hazelcast. Download from http://www.hazelcast.com

Reference: Beginner's Guide To Hazelcast Part 1 from our JCG partner Daryl Mathison at the Daryl Mathison's Java Blog blog.

5 Ways Software Developers Can Become Better at Estimation

In my last post, I detailed four of the biggest reasons why software developers suck at estimation, but I didn't talk about how to solve any of the problems I presented. While estimation will always be inherently difficult for software developers, all hope is not lost. In this post, I am going to give you five real tips you can utilize to become better at estimation, even for complex software development tasks.

Tip 1: Break Things Down Smaller

In my last post, I talked about how the lengthy time periods that are so common with software development projects tend to make estimation very difficult and inaccurate. If you are asked to estimate something that will take you five minutes, you are much more likely to be accurate than if you are asked to estimate something that will take you five months. So, how can we solve this problem? There is actually a relatively simple fix: break things down into smaller chunks and estimate those smaller chunks.

Yes, I know this seems simple and obvious, and I know that this approach is often met with skepticism. There are plenty of excuses you can make about why you can't break things down into smaller pieces, but the truth is, most things can be broken down, if you are willing to put forth the effort. I've actually talked about why smaller is better and how to break down a backlog in the past, so I won't rehash all the details again here. The key point to realize is that you are never likely to get good at estimating large things. Well, let me rephrase that: the only way you are going to get good at estimating large things is to learn how to break them down into many smaller things. If you really need to accurately estimate something, it is well worth the effort to spend the time breaking down what you are estimating into much smaller pieces.

For example, suppose I was going to estimate how long it will take me to write a blog post. It's not a very large task, but it's big enough that estimates can be a bit inaccurate. If I want to be more accurate, I can break down the task into smaller pieces. Consider the difference between trying to estimate:

- Write and publish a blog post

And:

- Research blog post and brainstorm
- Outline blog post
- Write first draft of blog post
- Add images, links and call-outs
- Schedule post for publishing

By breaking things down into smaller pieces, I can more accurately estimate each piece. In fact, here is a little trick: when things are broken down this small, I can actually time-box certain parts of the process, which effectively ensures my estimate is accurate (but we are jumping ahead; we'll talk more about time-boxing in a little bit). The next time you are asked to implement some feature, instead of estimating how long you think it will take you to do it as a whole, try breaking down the task into very small pieces and estimating each piece individually. You can always add up the smaller estimates to give a more accurate estimate of the whole.

But wait! I know exactly what you are going to say is wrong with this kind of estimation. Sure, each individual piece's estimation may be more accurate, but when you add them back together, in aggregate, you'll still get the same level of error as you would from estimating one large thing. All I can say to that argument is: try it. To some degree you are right, the smaller errors in the smaller pieces will add up and cause the whole to be off by more in aggregate, but the smaller pieces also tend to average out.
So, some take less time and some take more, and because those individual errors are largely independent, they partially cancel each other out rather than all pointing the same way, which means that overall you end up a lot more accurate than estimating one large thing with a large margin of error.

Tip 2: Taking time to research

Why do you suck at estimation? Because you don't know enough about what you are estimating. In the previous post, I talked about how the unknown unknowns that plague many software development projects make estimation extremely difficult, but I didn't really talk about how to deal with these things that we don't know that we don't know. Again, the answer is really quite simple: research.

The best way to get rid of an unknown unknown is to know about it. Whenever you are tasked with estimating something, your first instinct should be to want to do some research, to try to discover what it is that you don't know that you don't know yet. Unfortunately, most software developers don't immediately think about doing research when trying to estimate something. Instead, they rely on past experience. If they've done something in the past that they deem similar enough, they will confidently estimate it, ignoring the possible pitfalls of the unknown unknowns. If they've never done something similar to what they are being asked to estimate, they'll assume there are unknown unknowns everywhere and come up with estimates full of padding. Neither approach is good.

Instead, you should first try to estimate how long it will take you to research a task before giving an estimate of how long the actual task will take. I've found that most software developers are pretty good at estimating how long it will take to research a task, even though they may be very bad at estimating how long it will take to complete the task itself. Once you are armed with research, you should have fewer unknown unknowns to deal with. You may still have some unknowns, but at least you'll know about them.

But how does this look in reality? How do you actually research tasks that you are supposed to be estimating? Well, sometimes it involves pushing back and planning things out a bit ahead of time. I'll give you an example of how this might work on a Scrum or Agile team. Suppose you want to start improving your estimates by doing research before estimating tasks. The problem is that when you are working on an Agile project, you usually need to estimate the tasks in an iteration and don't really have the time to research each and every task before you estimate it, especially the big ones. I've found the best thing to do in this scenario is, instead of estimating the big tasks right up front, to push those tasks back one iteration and instead estimate how long it will take to research each big task. So, you might have in your iteration any number of small research tasks whose only purpose is getting you enough information to have a more accurate estimate for the big task in the next iteration. During these research tasks, you can also break down large tasks into smaller ones as you learn more about them.

Wait, wait, wait. I know what you are thinking: I can't just push a task into the next iteration. My boss and the business people will not like that. They want it done this iteration. Right you are, so how do you deal with this problem? Simple. You just start planning the bigger tasks one iteration in advance of when they need to be done. If you are working on an Agile team, you should adopt the habit of looking ahead and picking up research tasks for large tasks that will be coming up in future iterations.
By always looking forward and doing research before estimating anything substantial, you'll get into the habit of producing much more accurate estimates. This technique can also be applied to smaller tasks by taking, sometimes, just five or ten minutes to do a minor amount of research on a task before giving an estimation. The next time you are trying to estimate a task, devote some time upfront to doing some research. You'll be amazed at how much more accurate your estimates become.

Tip 3: Track your time

One of the big problems we have with estimating things is that we don't have an accurate sense of time. My memory of how long past projects took tends to be skewed based on factors like how much I was enjoying the work and how hungry I was. This skewed time in our heads can result in some pretty faulty estimations. For this reason it is important to track the actual time things take you. It is a very good idea to get into the habit of always tracking your time on whatever task you are doing. Right now, as I am writing this blog post, my Pomodoro timer is ticking down, tracking my time, so that I'll have a better idea of how long blog posts take me to write. I'll also have an idea if I am spending too much time on part of the process.

Once you get into the habit of tracking your time, you'll have a better idea of how long things actually take you and where you are spending your time. It's crazy to think that you'll be good at estimating things that haven't happened yet if you can't even accurately say how long things that have happened took. Seriously, think about that for a minute. No, really. I want you to think about how absurd it is to believe that you can be good at estimating anything when you don't have an accurate idea of how long past things you have done have taken.

Many people argue that software development is unlike other work and can't be accurately estimated. While I agree that software development is more difficult to estimate than installing carpets or re-roofing houses, I think that many software developers suck at estimation because they have no idea how long things actually take. Do yourself a favor and start tracking your time. There are a ton of good tools for doing this, like:

- RescueTime
- Toggl
- PayMo

If you are curious about how I track my time and plan my week, check out this video I did explaining the process I developed. By the way, following this process has caused me to become extremely good at estimating. I can usually estimate an entire week worth of work within one to two hours of accuracy. And I know this for a fact, because I track it.

Tip 4: Time-box things

I said I'd get back to this one, and here it is. One big secret to becoming a software developer who is better at estimating tasks is to time-box those tasks. It's almost like cheating. When you time-box a task, you ensure it will take exactly as long as you have planned for it to take. You might think that most software development tasks can't be time-boxed, but you are wrong. I use the technique very frequently, and I have found that many tasks we do tend to be quite variable in the time it takes us to do them. I've found that if you give a certain amount of time to a task, and only that amount of time, you can work in a way that makes sure the work gets done in that amount of time.

Consider the example of writing unit tests: for most software developers, writing unit tests is a very subjective thing.
Unless you are going for 100% code coverage, you usually just write unit tests until you feel that you have adequately tested the code you are trying to test. (If you do test driven development, TDD, that might not be true either.) If you set a time-box for how long you are going to spend on writing unit tests, you can force yourself to work on the most important unit tests first and operate on the 80/20 principle to ensure you are getting the biggest bang for your buck. For many tasks, you can end up spending hours of extra time working on minor details that don't really make that much of a difference. Time-boxing forces you to work on what is important first and to avoid doing things like premature optimization or obsessively worrying about variable names. Sure, sometimes you'll have to run over the time-box you set for a task, but many times you'll find that you actually got done what needed to be done, and you can always come back and gold-plate things later if there is time for it.

Again, just like tracking your time, time-boxing is a habit you have to develop, but once you get used to it, you'll be able to use it as a cheat to become more accurate at estimates than you ever imagined possible. You may want to get yourself a Pomodoro or kitchen timer to help you track your time and time-box tasks. Sometimes it is nice to have a physical timer.

Tip 5: Revise your estimates

Here is a little secret: you don't have to get it right on the first go. Instead, you can actually revise your estimates as you progress through a task.

Yes, I know that your boss wants you to give an accurate estimate right now, not as you get closer to being done, but you can always give your best estimate right now and revise it as you progress through the task. I can't imagine any situation where giving more up-to-date information is not appreciated. Use the other four tips to make sure your original estimate is as accurate as possible, but every so often, you should take a moment to reevaluate what the actual current estimate is.

Think about it this way: you know when you download a file and it tells you how long it will take? Would you prefer that it calculated that duration just at the beginning of the download process and never updated it? Of course not. Instead, most download managers show a constantly updated estimate of how much time is left. Just going through this process can make you better at estimations in general. When you are constantly updating and revising your estimates, you are forced to face the reasons why your original estimates were off.

What about you?

These are just a few of the most useful tips that I use to improve the accuracy of my estimates, but what about you? Is there something I am leaving out here? Let me know in the comments below.

Reference: 5 Ways Software Developers Can Become Better at Estimation from our JCG partner John Sonmez at the Making the Complex Simple blog.

Configure JBoss / Wildfly Datasource with Maven

Most Java EE applications use database access in their business logic, so developers are often faced with the need to configure drivers and database connection properties in the application server. In this post, we are going to automate that task for JBoss / Wildfly and a PostgreSQL database using Maven. The work is based on my World of Warcraft Auctions Batch application from the previous post.

Maven Configuration

Let's start by adding the Wildfly Maven Plugin to our pom.xml:

<plugin>
    <groupId>org.wildfly.plugins</groupId>
    <artifactId>wildfly-maven-plugin</artifactId>
    <version>1.0.2.Final</version>
    <configuration>
        <batch>false</batch>
        <scripts>
            <script>target/scripts/${cli.file}</script>
        </scripts>
    </configuration>
    <dependencies>
        <dependency>
            <groupId>org.postgresql</groupId>
            <artifactId>postgresql</artifactId>
            <version>9.3-1102-jdbc41</version>
        </dependency>
    </dependencies>
</plugin>

We are going to use the Wildfly Maven Plugin to execute scripts with commands in the application server. Note that we also added a dependency on the PostgreSQL driver. This is for Maven to download the dependency, because we are going to need it later to add it to the server. There is also a ${cli.file} property that is going to be assigned by a profile, to indicate which script we want to execute. Let's also add the Maven Resources Plugin to the pom.xml:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-resources-plugin</artifactId>
    <version>2.6</version>
    <executions>
        <execution>
            <id>copy-resources</id>
            <phase>process-resources</phase>
            <goals>
                <goal>copy-resources</goal>
            </goals>
            <configuration>
                <outputDirectory>${basedir}/target/scripts</outputDirectory>
                <resources>
                    <resource>
                        <directory>src/main/resources/scripts</directory>
                        <filtering>true</filtering>
                    </resource>
                </resources>
                <filters>
                    <filter>${basedir}/src/main/resources/configuration.properties</filter>
                </filters>
            </configuration>
        </execution>
    </executions>
</plugin>

With the Resources Maven Plugin we are going to filter the script files contained in src/main/resources/scripts, replacing their placeholders with the properties contained in the ${basedir}/src/main/resources/configuration.properties file. Finally, let's add a few Maven profiles to the pom.xml, one per script that we want to run:

<profiles>
    <profile>
        <id>install-driver</id>
        <properties>
            <cli.file>wildfly-install-postgre-driver.cli</cli.file>
        </properties>
    </profile>
    <profile>
        <id>remove-driver</id>
        <properties>
            <cli.file>wildfly-remove-postgre-driver.cli</cli.file>
        </properties>
    </profile>
    <profile>
        <id>install-wow-auctions</id>
        <properties>
            <cli.file>wow-auctions-install.cli</cli.file>
        </properties>
    </profile>
    <profile>
        <id>remove-wow-auctions</id>
        <properties>
            <cli.file>wow-auctions-remove.cli</cli.file>
        </properties>
    </profile>
</profiles>

Wildfly Script Files

Add Driver

The script with the commands to add the driver, wildfly-install-postgre-driver.cli:

# Connect to Wildfly instance
connect

# Create PostgreSQL JDBC Driver Module
# If the module already exists, Wildfly will output a message saying that the module already exists and the script exits.
module add \
    --name=org.postgre \
    --resources=${settings.localRepository}/org/postgresql/postgresql/9.3-1102-jdbc41/postgresql-9.3-1102-jdbc41.jar \
    --dependencies=javax.api,javax.transaction.api

# Add Driver Properties
/subsystem=datasources/jdbc-driver=postgre: \
    add( \
        driver-name="postgre", \
        driver-module-name="org.postgre")

Database drivers are added to Wildfly as a module. In this way, the driver is widely available to all the applications deployed on the server. With ${settings.localRepository} we are pointing to the database driver jar downloaded to your local Maven repository. Remember the dependency that we added to the Wildfly Maven Plugin? It's there so the driver gets downloaded and added to the server when you run the plugin. Now, to run the script we execute (you need to have the application server running):

mvn process-resources wildfly:execute-commands -P "install-driver"

The process-resources lifecycle phase is needed to replace the properties in the script file. In my case ${settings.localRepository} is replaced by /Users/radcortez/.m3/repository/. Check the target/scripts folder.
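There you should find the filtered copy of the script with the placeholder resolved to a concrete path. Based on the repository location above, the module add command would hypothetically read:

module add \
    --name=org.postgre \
    --resources=/Users/radcortez/.m3/repository/org/postgresql/postgresql/9.3-1102-jdbc41/postgresql-9.3-1102-jdbc41.jar \
    --dependencies=javax.api,javax.transaction.api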
After running the command, you should see the following output in the Maven log:

{"outcome" => "success"}

And on the server:

INFO [org.jboss.as.connector.subsystems.datasources] (management-handler-thread - 4) JBAS010404: Deploying non-JDBC-compliant driver class org.postgresql.Driver (version 9.3)
INFO [org.jboss.as.connector.deployers.jdbc] (MSC service thread 1-4) JBAS010417: Started Driver service with driver-name = postgre

Remove Driver

The script to remove the driver from the application server, wildfly-remove-postgre-driver.cli:

# Connect to Wildfly instance
connect

if (outcome == success) of /subsystem=datasources/jdbc-driver=postgre:read-attribute(name=driver-name)

# Remove Driver
/subsystem=datasources/jdbc-driver=postgre:remove

end-if

# Remove PostgreSQL JDBC Driver Module
module remove --name=org.postgre

Execute mvn wildfly:execute-commands -P "remove-driver". You don't need process-resources if you already executed the command before, unless you have changed the scripts.

Add Datasource

The script with the commands to add a Datasource, wow-auctions-install.cli:

# Connect to Wildfly instance
connect

# Create Datasource
/subsystem=datasources/data-source=WowAuctionsDS: \
    add( \
        jndi-name="${datasource.jndi}", \
        driver-name=postgre, \
        connection-url="${datasource.connection}", \
        user-name="${datasource.user}", \
        password="${datasource.password}")

/subsystem=ee/service=default-bindings:write-attribute(name="datasource", value="${datasource.jndi}")

We also need a file to define the properties, configuration.properties:

datasource.jndi=java:/datasources/WowAuctionsDS
datasource.connection=jdbc:postgresql://localhost:5432/wowauctions
datasource.user=wowauctions
datasource.password=wowauctions

Default Java EE 7 Datasource

Java EE 7 specifies that the container should provide a default Datasource. Instead of referencing the JNDI name java:/datasources/WowAuctionsDS directly in the application, we point the default binding at our newly created datasource with /subsystem=ee/service=default-bindings:write-attribute(name="datasource", value="${datasource.jndi}"). This way, we don't need to change anything in the application (a minimal injection sketch follows below). Execute the script with mvn wildfly:execute-commands -P "install-wow-auctions". You should get the following Maven output:

org.jboss.as.cli.impl.CommandContextImpl printLine
INFO: {"outcome" => "success"}
{"outcome" => "success"}
org.jboss.as.cli.impl.CommandContextImpl printLine
INFO: {"outcome" => "success"}
{"outcome" => "success"}

And on the server:

INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-1) JBAS010400: Bound data source

Remove Datasource

The script to remove the Datasource and revert to the Java EE 7 default Datasource, wow-auctions-remove.cli:

# Connect to Wildfly instance
connect

# Remove Datasources
/subsystem=datasources/data-source=WowAuctionsDS:remove

/subsystem=ee/service=default-bindings:write-attribute(name="datasource", value="java:jboss/datasources/ExampleDS")

Run it by executing mvn wildfly:execute-commands -P "remove-wow-auctions".
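To illustrate why the application itself needs no changes, here is a minimal sketch of a container-managed component using the default datasource. This class is hypothetical (it is not part of the WoW Auctions code); it only shows that application code can stay on the spec-defined java:comp/DefaultDataSource name, which the CLI script above rebinds to WowAuctionsDS:

import java.sql.Connection;
import java.sql.SQLException;
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.sql.DataSource;

@Stateless
public class AuctionsHealthCheck {

    // No application-specific JNDI name here: java:comp/DefaultDataSource is
    // the Java EE 7 default datasource, now pointing at WowAuctionsDS.
    @Resource(lookup = "java:comp/DefaultDataSource")
    private DataSource dataSource;

    public boolean isDatabaseUp() {
        try (Connection connection = dataSource.getConnection()) {
            return connection.isValid(2); // two-second validation timeout
        } catch (SQLException e) {
            return false;
        }
    }
}

Switching databases then becomes a purely server-side operation: rerun the CLI scripts with different properties and the application code is untouched.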
Conclusion

This post demonstrated how to automate adding / removing drivers to Wildfly instances, as well as adding / removing Datasources. This is useful if you want to switch between databases or if you're configuring a server from the ground up; think about CI environments. These scripts are also easily adjustable to other drivers. You can get the code from the WoW Auctions Github repo, which uses this setup.

Enjoy!

Reference: Configure JBoss / Wildfly Datasource with Maven from our JCG partner Roberto Cortez at the Roberto Cortez Java Blog blog....

WebSocket Chat on WildFly and OpenShift

Chat is one of the most canonical samples used to explain WebSocket. It's a fairly commonly used interface and allows the fundamental WebSocket concepts to be explained very easily. Of course, Java EE 7 WebSocket has one too, available here! You can easily run it on WildFly using the following steps:

curl -O http://download.jboss.org/wildfly/8.1.0.Final/wildfly-8.1.0.Final.zip
unzip wildfly-8.1.0.Final.zip
./wildfly-8.1.0.Final/bin/standalone.sh
git clone https://github.com/javaee-samples/javaee7-samples.git
cd javaee7-samples
mvn -f websocket/chat/pom.xml wildfly:deploy

And then access it at http://localhost:8080/chat/. One of the biggest advantages of WebSocket is that it opens up a socket over the same port as HTTP, 8080 in this case. If you want to deploy this application to OpenShift, WebSocket is instead available on port 8000 for regular access, and on 8443 for secure access (this is explained in a figure in the original post).

If you want to run this Chat application on OpenShift, use the following steps:

1. Click here to provision a WildFly instance in OpenShift. Change the name to "chatserver" and leave everything else as default. Click on "Create Application" to create the application.

2. Clone the workspace:

git clone ssh://544f08a850044670df00009e@chatserver-milestogo.rhcloud.com/~/git/chatserver.git/

3. Edit the first line of "javaee7-samples/websocket/chat/src/main/webapp/websocket.js" from:

var wsUri = "ws://" + document.location.hostname + ":" + document.location.port + document.location.pathname + "chat";

to:

var wsUri = "ws://" + document.location.hostname + ":8000" + document.location.pathname + "chat";

4. Create the WAR file:

cd javaee7-samples
mvn -f websocket/chat/pom.xml

5. Copy the generated WAR file to the workspace cloned earlier:

cd ..
cp javaee7-samples/websocket/chat/target/chat.war chatserver/deployments/ROOT.war

6. Remove the existing files and add the WAR file to the git repository:

cd chatserver
git rm -rf src pom.xml
git add deployments/ROOT.war
git commit . -m "updating files"
git push

This shows output like:

Counting objects: 6, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 6.88 KiB | 0 bytes/s, done.
Total 4 (delta 1), reused 0 (delta 0)
remote: Stopping wildfly cart
remote: Sending SIGTERM to wildfly:285130 ...
remote: Building git ref 'master', commit 05a7978
remote: Preparing build for deployment
remote: Deployment id is 14bcec20
remote: Activating deployment
remote: Deploying WildFly
remote: Starting wildfly cart
remote: Found 127.2.87.1:8080 listening port
remote: Found 127.2.87.1:9990 listening port
remote: /var/lib/openshift/544f08a850044670df00009e/wildfly/standalone/deployments /var/lib/openshift/544f08a850044670df00009e/wildfly
remote: /var/lib/openshift/544f08a850044670df00009e/wildfly
remote: CLIENT_MESSAGE: Artifacts deployed: ./ROOT.war
remote: -------------------------
remote: Git Post-Receive Result: success
remote: Activation status: success
remote: Deployment completed with status: success
To ssh://544f08a850044670df00009e@chatserver-milestogo.rhcloud.com/~/git/chatserver.git/
   454bba9..05a7978  master -> master

And now your chat server is available at http://chatserver-milestogo.rhcloud.com. For a sense of the server side, a minimal endpoint along these lines is sketched below.
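The sketch below is illustrative rather than the sample's exact source; the "/chat" path matches the wsUri used above, but the class and method names are assumptions:

import java.io.IOException;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import javax.websocket.OnClose;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/chat")
public class ChatEndpoint {

    // One endpoint instance is created per client connection, so the
    // session registry must be static and thread-safe.
    private static final Set<Session> SESSIONS =
            Collections.synchronizedSet(new HashSet<Session>());

    @OnOpen
    public void opened(Session session) {
        SESSIONS.add(session);
    }

    @OnMessage
    public void message(String message, Session sender) throws IOException {
        // Broadcast every incoming message to all connected clients
        synchronized (SESSIONS) {
            for (Session session : SESSIONS) {
                if (session.isOpen()) {
                    session.getBasicRemote().sendText(message);
                }
            }
        }
    }

    @OnClose
    public void closed(Session session) {
        SESSIONS.remove(session);
    }
}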
Enjoy!

Reference: WebSocket Chat on WildFly and OpenShift from our JCG partner Arun Gupta at the Miles to go 2.0 … blog....

Securing WebSocket using wss and HTTPS/TLS

50th tip on this blog, yaay! Tech Tip #49 explained how to secure WebSockets using username / password and Servlet Security mechanisms. This Tech Tip will explain how to secure WebSockets using HTTPS/TLS on WildFly. Let's get started!

1. Create a new keystore:

keytool -genkey -alias websocket -keyalg RSA -keystore websocket.keystore -validity 10950
Enter keystore password:
Re-enter new password:
What is your first and last name?
  [Unknown]:  Arun Gupta
What is the name of your organizational unit?
  [Unknown]:  JBoss Middleware
What is the name of your organization?
  [Unknown]:  Red Hat
What is the name of your City or Locality?
  [Unknown]:  San Jose
What is the name of your State or Province?
  [Unknown]:  CA
What is the two-letter country code for this unit?
  [Unknown]:  US
Is CN=Arun Gupta, OU=JBoss Middleware, O=Red Hat, L=San Jose, ST=CA, C=US correct?
  [no]:  yes

Enter key password for <websocket>
  (RETURN if same as keystore password):
Re-enter new password:

"websocket" was used as the password, for convenience.

2. Download WildFly 8.1, unzip it, and copy the "websocket.keystore" file into the standalone/configuration directory.

3. Start WildFly:

./bin/standalone.sh

4. Connect to it using jboss-cli:

./bin/jboss-cli.sh -c

5. Add a new security realm:

[standalone@localhost:9990 /] /core-service=management/security-realm=WebSocketRealm:add()
{"outcome" => "success"}

6. And configure it:

[standalone@localhost:9990 /] /core-service=management/security-realm=WebSocketRealm/server-identity=ssl:add(keystore-path=websocket.keystore, keystore-relative-to=jboss.server.config.dir, keystore-password=websocket)
{
    "outcome" => "success",
    "response-headers" => {
        "operation-requires-reload" => true,
        "process-state" => "reload-required"
    }
}

7. Add a new HTTPS listener:

[standalone@localhost:9990 /] /subsystem=undertow/server=default-server/https-listener=https:add(socket-binding=https, security-realm=WebSocketRealm)
{
    "outcome" => "success",
    "response-headers" => {"process-state" => "reload-required"}
}

A simple sample showing TLS-based security for WebSocket is available at github.com/javaee-samples/javaee7-samples/tree/master/websocket/endpoint-wss. Clone the workspace and change directory to "websocket/endpoint-wss". The sample's deployment descriptor has:

<security-constraint>
    <web-resource-collection>
        <web-resource-name>Secure WebSocket</web-resource-name>
        <url-pattern>/*</url-pattern>
    </web-resource-collection>
    <user-data-constraint>
        <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
</security-constraint>

This ensures that any request coming to this application is automatically redirected to an HTTPS URL. Deploy the sample with:

mvn wildfly:deploy

Now accessing http://localhost:8080/endpoint-wss redirects to https://localhost:8443/endpoint-wss. The browsers may complain about the self-signed certificate; Chrome and Safari, for example, both show warning pages. In either case, click on "Proceed to localhost" or "Continue" to proceed further, and then a secure WebSocket connection is established. Another relevant point to understand is that a non-secure WebSocket connection cannot be made from an https-protected page.
For example, the following code in our sample:

new WebSocket("ws://localhost:8080/endpoint-wss/websocket");

will throw the following exception in Chrome Developer Tools:

[blocked] The page at 'https://localhost:8443/endpoint-wss/index.jsp' was loaded over HTTPS, but ran insecure content from 'ws://localhost:8080/endpoint-wss/websocket': this content should also be loaded over HTTPS.
Uncaught SecurityError: Failed to construct 'WebSocket': An insecure WebSocket connection may not be initiated from a page loaded over HTTPS.

The secured endpoint can also be exercised outside a browser; a minimal client sketch follows below.
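Here is a minimal, hypothetical sketch of connecting over wss with the JSR 356 client API, assuming a client implementation such as Tyrus is on the classpath. The class name and the assumption that the endpoint accepts plain text messages are mine, not the sample's, and a self-signed certificate will usually require extra trust-store configuration on the client side:

import java.net.URI;
import javax.websocket.ClientEndpoint;
import javax.websocket.ContainerProvider;
import javax.websocket.OnMessage;
import javax.websocket.Session;
import javax.websocket.WebSocketContainer;

@ClientEndpoint
public class WssClient {

    @OnMessage
    public void onMessage(String message) {
        System.out.println("Received: " + message);
    }

    public static void main(String[] args) throws Exception {
        WebSocketContainer container = ContainerProvider.getWebSocketContainer();
        // wss:// targets the TLS listener configured above (port 8443)
        Session session = container.connectToServer(WssClient.class,
                URI.create("wss://localhost:8443/endpoint-wss/websocket"));
        session.getBasicRemote().sendText("Hello over TLS");
        Thread.sleep(1000); // crude wait for any server response; demo only
        session.close();
    }
}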
Enjoy!

Reference: Securing WebSocket using wss and HTTPS/TLS from our JCG partner Arun Gupta at the Miles to go 2.0 … blog....