

Write SQL in Java with jOOQ

jOOQ is a “database first”, type safe SQL API that allows you to write SQL in Java intuitively, as if the SQL language were supported natively by the Java compiler. All database schemas, tables, columns, procedures, and other objects are made available as Java objects that can be used directly in the jOOQ SQL API. Let’s see how it works.

For instance, let’s assume your database consists of this table:

CREATE TABLE CUSTOMER (
  ID INT,
  FIRST_NAME VARCHAR(50),
  LAST_NAME VARCHAR(50),
  AGE INT
);

When you run jOOQ’s code generator against it, you will be able to interact with your database as follows:

dsl.select(CUSTOMER.FIRST_NAME, CUSTOMER.LAST_NAME)
   .from(CUSTOMER)
   .where(CUSTOMER.AGE.gt(20))
   .and(CUSTOMER.LAST_NAME.like("S%"))
   .fetch();

jOOQ’s main features are:

Database first: Your database holds your most important asset – your data. You want to be in control of your SQL.
Typesafe SQL: Use your IDE to write SQL efficiently in Java.
Code generation: Your Java compiler will detect errors early.
Active Records: Don’t write repetitive CRUD; just easily store modified records.

But jOOQ also ships with a variety of secondary features:

Multi-tenancy: Configure schema and table names at runtime, and implement row-level security.
Standardization: Write SQL that works on all your databases without wasting time on concrete syntax.
Query Lifecycle: Hook into the SQL code generation lifecycle for logging, transaction handling, ID generation, SQL transformation and much more.
Stored Procedures: Calling them or embedding them in your SQL is a one-liner. Don’t waste time with JDBC.

Curious? Get started with the FREE JCG Academy Course on jOOQ!

Apache Commons IO Tutorial: A beginner’s guide

Apache Commons IO is a Java library created and maintained by the Apache Foundation. It provides a multitude of classes that enable developers to do common tasks easily, with much less of the boilerplate code that would otherwise need to be written over and over again for every single project. Libraries like this are hugely important, because they are mature and maintained by experienced developers who have thought of every possible edge case and fixed the various bugs as they appeared.

In this example, we are going to present some methods with varying functionality, depending on the package of org.apache.commons.io that they belong to. We are not going to delve too deep into the library, as it is enormous, but we are going to provide examples of some common usage that can definitely come in handy for every developer, beginner or not.

1. Apache Commons IO Example

The code for this example will be broken into several classes, each of them representative of a particular area that Apache Commons IO covers. These areas are:

Utility classes
Input
Output
Filters
Comparators
File Monitor

To make things even clearer, we are going to break down the output in chunks, one for each of the classes that we have created. We have also created a directory inside the project folder (named ExampleFolder) which will contain the various files that will be used in this example to show the functionality of the various classes.

NOTE: In order to use org.apache.commons.io, you need to download the jar files (found here) and add them to the build path of your Eclipse project, by right clicking on the project folder -> Build Path -> Add external archives.
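If you build with Maven instead of adding the jars by hand, the library can also be declared as a dependency. The version below is an assumption (pick whatever release is current for your project):

```xml
<dependency>
    <groupId>commons-io</groupId>
    <artifactId>commons-io</artifactId>
    <version>2.4</version>
</dependency>
```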
ApacheCommonsExampleMain.java

import java.io.IOException;

public class ApacheCommonsExampleMain {

    public static void main(String[] args) throws IOException {
        UtilityExample.runExample();
        FileMonitorExample.runExample();
        FiltersExample.runExample();
        InputExample.runExample();
        OutputExample.runExample();
        ComparatorExample.runExample();
    }
}

This is the main class that will be used to run the methods from the other classes of our example. You can comment out certain calls in order to see only the output that you want.

1.1 Utility Classes

There are various utility classes inside the package org.apache.commons.io, most of which have to do with file manipulation and String comparison. We have used some of the most important ones here:

FilenameUtils: This class has methods that work with file names, and the main point is to make life easier in every OS (it works equally well on Unix and Windows systems).
FileUtils: It provides methods for file manipulation (moving, opening and reading a file, checking if a file exists, etc.).
IOCase: String manipulation and comparison methods.
FileSystemUtils: Its methods return the free space of a designated drive.

UtilityExample.java

import java.io.File;
import java.io.IOException;

import org.apache.commons.io.FileSystemUtils;
import org.apache.commons.io.FileUtils;
import org.apache.commons.io.FilenameUtils;
import org.apache.commons.io.LineIterator;
import org.apache.commons.io.IOCase;

public final class UtilityExample {

    // We are using the file exampleTxt.txt in the folder ExampleFolder,
    // and we need to provide the full path to the Utility classes.
    private static final String EXAMPLE_TXT_PATH =
            "C:\\Users\\Lilykos\\workspace\\ApacheCommonsExample\\ExampleFolder\\exampleTxt.txt";

    private static final String PARENT_DIR =
            "C:\\Users\\Lilykos\\workspace\\ApacheCommonsExample";

    public static void runExample() throws IOException {
        System.out.println("Utility Classes example...");

        // FilenameUtils
        System.out.println("Full path of exampleTxt: " +
                FilenameUtils.getFullPath(EXAMPLE_TXT_PATH));
        System.out.println("Full name of exampleTxt: " +
                FilenameUtils.getName(EXAMPLE_TXT_PATH));
        System.out.println("Extension of exampleTxt: " +
                FilenameUtils.getExtension(EXAMPLE_TXT_PATH));
        System.out.println("Base name of exampleTxt: " +
                FilenameUtils.getBaseName(EXAMPLE_TXT_PATH));

        // FileUtils
        // We can create a new File object using FileUtils.getFile(String)
        // and then use this object to get information from the file.
        File exampleFile = FileUtils.getFile(EXAMPLE_TXT_PATH);
        LineIterator iter = FileUtils.lineIterator(exampleFile);

        System.out.println("Contents of exampleTxt...");
        while (iter.hasNext()) {
            System.out.println("\t" + iter.next());
        }
        iter.close();

        // We can check if a file exists somewhere inside a certain directory.
        File parent = FileUtils.getFile(PARENT_DIR);
        System.out.println("Parent directory contains exampleTxt file: " +
                FileUtils.directoryContains(parent, exampleFile));

        // IOCase
        String str1 = "This is a new String.";
        String str2 = "This is another new String, yes!";

        System.out.println("Ends with string (case sensitive): " +
                IOCase.SENSITIVE.checkEndsWith(str1, "string."));
        System.out.println("Ends with string (case insensitive): " +
                IOCase.INSENSITIVE.checkEndsWith(str1, "string."));
        System.out.println("String equality: " +
                IOCase.SENSITIVE.checkEquals(str1, str2));

        // FileSystemUtils
        System.out.println("Free disk space (in KB): " + FileSystemUtils.freeSpaceKb("C:"));
        System.out.println("Free disk space (in MB): " + FileSystemUtils.freeSpaceKb("C:") / 1024);
    }
}

Output

Utility Classes example...
Full path of exampleTxt: C:\Users\Lilykos\workspace\ApacheCommonsExample\ExampleFolder\
Full name of exampleTxt: exampleTxt.txt
Extension of exampleTxt: txt
Base name of exampleTxt: exampleTxt
Contents of exampleTxt...
	This is an example text file.
	We will use it for experimenting with Apache Commons IO.
Parent directory contains exampleTxt file: true
Ends with string (case sensitive): false
Ends with string (case insensitive): true
String equality: false
Free disk space (in KB): 32149292
Free disk space (in MB): 31395

1.2 File Monitor

The org.apache.commons.io.monitor package contains methods that can get specific information about a File, but more importantly, it can create handlers that can be used to track changes in a specific file or folder and take action depending on the changes. Let’s take a look at the code:

FileMonitorExample.java

import java.io.File;
import java.io.IOException;

import org.apache.commons.io.FileDeleteStrategy;
import org.apache.commons.io.FileUtils;
import org.apache.commons.io.monitor.FileAlterationListenerAdaptor;
import org.apache.commons.io.monitor.FileAlterationMonitor;
import org.apache.commons.io.monitor.FileAlterationObserver;
import org.apache.commons.io.monitor.FileEntry;

public final class FileMonitorExample {

    private static final String EXAMPLE_PATH =
            "C:\\Users\\Lilykos\\workspace\\ApacheCommonsExample\\ExampleFolder\\exampleFileEntry.txt";

    private static final String PARENT_DIR =
            "C:\\Users\\Lilykos\\workspace\\ApacheCommonsExample\\ExampleFolder";

    private static final String NEW_DIR =
            "C:\\Users\\Lilykos\\workspace\\ApacheCommonsExample\\ExampleFolder\\newDir";

    private static final String NEW_FILE =
            "C:\\Users\\Lilykos\\workspace\\ApacheCommonsExample\\ExampleFolder\\newFile.txt";

    public static void runExample() {
        System.out.println("File Monitor example...");

        // FileEntry
        // We can monitor changes and get information about files
        // using the methods of this class.
        FileEntry entry = new FileEntry(FileUtils.getFile(EXAMPLE_PATH));
        System.out.println("File monitored: " + entry.getFile());
        System.out.println("File name: " + entry.getName());
        System.out.println("Is the file a directory?: " + entry.isDirectory());

        // File Monitoring
        // Create a new observer for the folder and add a listener
        // that will handle the events in a specific directory and take action.
        File parentDir = FileUtils.getFile(PARENT_DIR);

        FileAlterationObserver observer = new FileAlterationObserver(parentDir);
        observer.addListener(new FileAlterationListenerAdaptor() {

            @Override
            public void onFileCreate(File file) {
                System.out.println("File created: " + file.getName());
            }

            @Override
            public void onFileDelete(File file) {
                System.out.println("File deleted: " + file.getName());
            }

            @Override
            public void onDirectoryCreate(File dir) {
                System.out.println("Directory created: " + dir.getName());
            }

            @Override
            public void onDirectoryDelete(File dir) {
                System.out.println("Directory deleted: " + dir.getName());
            }
        });

        // Add a monitor that will check for events every x ms,
        // and attach all the different observers that we want.
        FileAlterationMonitor monitor = new FileAlterationMonitor(500, observer);
        try {
            monitor.start();

            // After we attached the monitor, we can create some files and directories
            // and see what happens!
            File newDir = new File(NEW_DIR);
            File newFile = new File(NEW_FILE);

            newDir.mkdirs();
            newFile.createNewFile();

            Thread.sleep(1000);

            FileDeleteStrategy.NORMAL.delete(newDir);
            FileDeleteStrategy.NORMAL.delete(newFile);

            Thread.sleep(1000);

            monitor.stop();
        } catch (IOException e) {
            e.printStackTrace();
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Output

File Monitor example...
File monitored: C:\Users\Lilykos\workspace\ApacheCommonsExample\ExampleFolder\exampleFileEntry.txt
File name: exampleFileEntry.txt
Is the file a directory?: false
Directory created: newDir
File created: newFile.txt
Directory deleted: newDir
File deleted: newFile.txt

Let’s take a look at what happened here. We used some classes of the org.apache.commons.io.monitor package, which enable us to create handlers that listen to specific events (in our case, everything that has to do with files, folders and directories). In order to achieve that, there are certain steps that need to be taken:

Create a File object that is a reference to the directory that we want to listen to for changes.
Create a FileAlterationObserver object that will observe for those changes.
Add a FileAlterationListenerAdaptor to the observer using the addListener() method. You can create the adaptor in various ways, but in our example we used a nested class that implements only some of the methods (the ones we need for the example requirements).
Create a FileAlterationMonitor and add the observers that you have, as well as the interval (in ms).
Start the monitor using the start() method and stop it when necessary using the stop() method.

1.3 Filters

Filters can be used in a variety of combinations and ways. Their job is to allow us to easily make distinctions between files and get the ones that satisfy certain criteria. We can also combine filters to perform logical comparisons and get our files much more precisely, without tedious String comparisons afterwards.
FiltersExample.java

import java.io.File;

import org.apache.commons.io.FileUtils;
import org.apache.commons.io.IOCase;
import org.apache.commons.io.filefilter.AndFileFilter;
import org.apache.commons.io.filefilter.NameFileFilter;
import org.apache.commons.io.filefilter.NotFileFilter;
import org.apache.commons.io.filefilter.OrFileFilter;
import org.apache.commons.io.filefilter.PrefixFileFilter;
import org.apache.commons.io.filefilter.SuffixFileFilter;
import org.apache.commons.io.filefilter.WildcardFileFilter;

public final class FiltersExample {

    private static final String PARENT_DIR =
            "C:\\Users\\Lilykos\\workspace\\ApacheCommonsExample\\ExampleFolder";

    public static void runExample() {
        System.out.println("File Filter example...");

        // NameFileFilter
        // Right now, in the parent directory we have 3 files:
        //      directory example
        //      file exampleEntry.txt
        //      file exampleTxt.txt

        // Get all the files in the specified directory
        // that are named "example".
        File dir = FileUtils.getFile(PARENT_DIR);
        String[] acceptedNames = {"example", "exampleTxt.txt"};

        for (String file: dir.list(new NameFileFilter(acceptedNames, IOCase.INSENSITIVE))) {
            System.out.println("File found, named: " + file);
        }

        // WildcardFileFilter
        // We can use wildcards in order to get less specific results:
        //      ? used for 1 missing char
        //      * used for multiple missing chars
        for (String file: dir.list(new WildcardFileFilter("*ample*"))) {
            System.out.println("Wildcard file found, named: " + file);
        }

        // PrefixFileFilter
        // We can also use the equivalent of startsWith
        // for filtering files.
        for (String file: dir.list(new PrefixFileFilter("example"))) {
            System.out.println("Prefix file found, named: " + file);
        }

        // SuffixFileFilter
        // We can also use the equivalent of endsWith
        // for filtering files.
        for (String file: dir.list(new SuffixFileFilter(".txt"))) {
            System.out.println("Suffix file found, named: " + file);
        }

        // OrFileFilter
        // We can also combine filters;
        // in this case, we use a filter to apply a logical
        // OR between our filters.
        for (String file: dir.list(new OrFileFilter(
                new WildcardFileFilter("*ample*"), new SuffixFileFilter(".txt")))) {
            System.out.println("Or file found, named: " + file);
        }

        // And this can become very detailed.
        // E.g., get all the files that have "ample" in their name,
        // but are not text files (so they have no ".txt" extension).
        for (String file: dir.list(new AndFileFilter( // we will match 2 filters...
                new WildcardFileFilter("*ample*"), // ...the 1st is a wildcard...
                new NotFileFilter(new SuffixFileFilter(".txt"))))) { // ...and the 2nd is NOT .txt.
            System.out.println("And/Not file found, named: " + file);
        }
    }
}

Output

File Filter example...
File found, named: example
File found, named: exampleTxt.txt
Wildcard file found, named: example
Wildcard file found, named: exampleFileEntry.txt
Wildcard file found, named: exampleTxt.txt
Prefix file found, named: example
Prefix file found, named: exampleFileEntry.txt
Prefix file found, named: exampleTxt.txt
Suffix file found, named: exampleFileEntry.txt
Suffix file found, named: exampleTxt.txt
Or file found, named: example
Or file found, named: exampleFileEntry.txt
Or file found, named: exampleTxt.txt
And/Not file found, named: example

1.4 Comparators

The org.apache.commons.io.comparator package contains classes that allow us to easily compare and sort files and directories. We just need to provide a list of files and, depending on the class, compare them in various ways.
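Before the Commons IO comparators below, the underlying idea can be sketched with the plain java.util.Comparator API. This stdlib-only snippet is our own illustration (not part of the original example) and sorts names rather than File objects:

```java
import java.util.Arrays;
import java.util.Comparator;

public class PlainComparatorSketch {
    public static void main(String[] args) {
        // Sort names case-insensitively, the way NameFileComparator
        // with IOCase.INSENSITIVE would sort file names.
        String[] names = {"exampleTxt.txt", "Example", "comparator1.txt"};
        Arrays.sort(names, String.CASE_INSENSITIVE_ORDER);
        System.out.println(Arrays.toString(names));
        // -> [comparator1.txt, Example, exampleTxt.txt]

        // A comparator by length, analogous in spirit to SizeFileComparator.
        Comparator<String> bySize = Comparator.comparingInt(String::length);
        Arrays.sort(names, bySize);
        System.out.println(names[0]); // shortest name first
    }
}
```

The Commons IO comparators add file-specific conveniences on top of this (like the sort() helper used below), but they follow the same Comparator contract.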
ComparatorExample.java

import java.io.File;
import java.util.Date;

import org.apache.commons.io.FileUtils;
import org.apache.commons.io.IOCase;
import org.apache.commons.io.comparator.LastModifiedFileComparator;
import org.apache.commons.io.comparator.NameFileComparator;
import org.apache.commons.io.comparator.SizeFileComparator;

public final class ComparatorExample {

    private static final String PARENT_DIR =
            "C:\\Users\\Lilykos\\workspace\\ApacheCommonsExample\\ExampleFolder";

    private static final String FILE_1 =
            "C:\\Users\\Lilykos\\workspace\\ApacheCommonsExample\\ExampleFolder\\example";

    private static final String FILE_2 =
            "C:\\Users\\Lilykos\\workspace\\ApacheCommonsExample\\ExampleFolder\\exampleTxt.txt";

    public static void runExample() {
        System.out.println("Comparator example...");

        // NameFileComparator
        // Let's get a directory as a File object
        // and sort all its files.
        File parentDir = FileUtils.getFile(PARENT_DIR);
        NameFileComparator comparator = new NameFileComparator(IOCase.SENSITIVE);
        File[] sortedFiles = comparator.sort(parentDir.listFiles());

        System.out.println("Sorted by name files in parent directory: ");
        for (File file: sortedFiles) {
            System.out.println("\t" + file.getAbsolutePath());
        }

        // SizeFileComparator
        // We can compare files based on their size.
        // The boolean in the constructor is about the directories:
        //      true: directory's contents count to the size.
        //      false: directory is considered zero size.
        SizeFileComparator sizeComparator = new SizeFileComparator(true);
        File[] sizeFiles = sizeComparator.sort(parentDir.listFiles());

        System.out.println("Sorted by size files in parent directory: ");
        for (File file: sizeFiles) {
            System.out.println("\t" + file.getName() + " with size (kb): " + file.length());
        }

        // LastModifiedFileComparator
        // We can use this class to find which file was more recently modified.
        LastModifiedFileComparator lastModified = new LastModifiedFileComparator();
        File[] lastModifiedFiles = lastModified.sort(parentDir.listFiles());

        System.out.println("Sorted by last modified files in parent directory: ");
        for (File file: lastModifiedFiles) {
            Date modified = new Date(file.lastModified());
            System.out.println("\t" + file.getName() + " last modified on: " + modified);
        }

        // Or, we can also compare 2 specific files and find which one was last modified:
        //      returns > 0 if the first file was last modified
        //      returns < 0 if the second file was last modified
        File file1 = FileUtils.getFile(FILE_1);
        File file2 = FileUtils.getFile(FILE_2);

        if (lastModified.compare(file1, file2) > 0)
            System.out.println("File " + file1.getName() + " was modified last because...");
        else
            System.out.println("File " + file2.getName() + " was modified last because...");

        System.out.println("\t" + file1.getName() + " last modified on: " +
                new Date(file1.lastModified()));
        System.out.println("\t" + file2.getName() + " last modified on: " +
                new Date(file2.lastModified()));
    }
}

Output

Comparator example...
Sorted by name files in parent directory: 
	C:\Users\Lilykos\workspace\ApacheCommonsExample\ExampleFolder\comparator1.txt
	C:\Users\Lilykos\workspace\ApacheCommonsExample\ExampleFolder\comperator2.txt
	C:\Users\Lilykos\workspace\ApacheCommonsExample\ExampleFolder\example
	C:\Users\Lilykos\workspace\ApacheCommonsExample\ExampleFolder\exampleFileEntry.txt
	C:\Users\Lilykos\workspace\ApacheCommonsExample\ExampleFolder\exampleTxt.txt
Sorted by size files in parent directory: 
	example with size (kb): 0
	exampleTxt.txt with size (kb): 87
	exampleFileEntry.txt with size (kb): 503
	comperator2.txt with size (kb): 1458
	comparator1.txt with size (kb): 4436
Sorted by last modified files in parent directory: 
	exampleTxt.txt last modified on: Sun Oct 26 14:02:22 EET 2014
	example last modified on: Sun Oct 26 23:42:55 EET 2014
	comparator1.txt last modified on: Tue Oct 28 14:48:28 EET 2014
	comperator2.txt last modified on: Tue Oct 28 14:48:52 EET 2014
	exampleFileEntry.txt last modified on: Tue Oct 28 14:53:50 EET 2014
File example was modified last because...
	example last modified on: Sun Oct 26 23:42:55 EET 2014
	exampleTxt.txt last modified on: Sun Oct 26 14:02:22 EET 2014

Let’s see what classes were used here:

NameFileComparator: Compares files according to their name.
SizeFileComparator: Compares files according to their size.
LastModifiedFileComparator: Compares files according to the date they were last modified.

You should also notice here that the comparisons can happen either on whole directories (where the files are sorted using the sort() method), or separately for 2 specific files (using compare()).

1.5 Input

There are various implementations of InputStream in the org.apache.commons.io.input package. We are going to examine one of the most useful, TeeInputStream, which takes as arguments both an InputStream and an OutputStream, and automatically copies the bytes read from the input to the output. Moreover, by passing true as a third, boolean argument, closing just the TeeInputStream at the end closes the two underlying streams as well.

InputExample.java

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.IOException;

import org.apache.commons.io.FileUtils;
import org.apache.commons.io.input.TeeInputStream;
import org.apache.commons.io.input.XmlStreamReader;

public final class InputExample {

    private static final String XML_PATH =
            "C:\\Users\\Lilykos\\workspace\\ApacheCommonsExample\\InputOutputExampleFolder\\web.xml";

    private static final String INPUT = "This should go to the output.";

    public static void runExample() {
        System.out.println("Input example...");
        XmlStreamReader xmlReader = null;
        TeeInputStream tee = null;

        try {
            // XmlStreamReader
            // We can read an xml file and get its encoding.
            File xml = FileUtils.getFile(XML_PATH);

            xmlReader = new XmlStreamReader(xml);
            System.out.println("XML encoding: " + xmlReader.getEncoding());

            // TeeInputStream
            // This very useful class copies an input stream to an output stream
            // and closes both using only one close() method (by defining the 3rd
            // constructor parameter as true).
            ByteArrayInputStream in = new ByteArrayInputStream(INPUT.getBytes("US-ASCII"));
            ByteArrayOutputStream out = new ByteArrayOutputStream();

            tee = new TeeInputStream(in, out, true);
            tee.read(new byte[INPUT.length()]);

            System.out.println("Output stream: " + out.toString());
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try { xmlReader.close(); } catch (IOException e) { e.printStackTrace(); }
            try { tee.close(); } catch (IOException e) { e.printStackTrace(); }
        }
    }
}

Output

Input example...
XML encoding: UTF-8
Output stream: This should go to the output.

1.6 Output

Similar to org.apache.commons.io.input, org.apache.commons.io.output has implementations of OutputStream that can be used in many situations. A very interesting one is TeeOutputStream, which allows an output stream to be branched; in other words, we can send an input stream to 2 different outputs.
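The branching idea itself needs nothing beyond the JDK. As a stdlib-only sketch of what a tee output stream does (our own illustration, not the Commons IO class), every write can simply be forwarded to two underlying streams:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Minimal tee: duplicates every write to two sinks.
class SimpleTeeOutputStream extends OutputStream {
    private final OutputStream first;
    private final OutputStream second;

    SimpleTeeOutputStream(OutputStream first, OutputStream second) {
        this.first = first;
        this.second = second;
    }

    @Override
    public void write(int b) throws IOException {
        first.write(b);
        second.write(b);
    }
}

public class TeeSketch {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream out1 = new ByteArrayOutputStream();
        ByteArrayOutputStream out2 = new ByteArrayOutputStream();
        OutputStream tee = new SimpleTeeOutputStream(out1, out2);

        tee.write("This should go to the output.".getBytes("US-ASCII"));
        System.out.println("Output stream 1: " + out1.toString());
        System.out.println("Output stream 2: " + out2.toString());
    }
}
```

The real TeeOutputStream used below does the same thing, with buffered array writes and close/flush handling taken care of.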
OutputExample.java

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

import org.apache.commons.io.input.TeeInputStream;
import org.apache.commons.io.output.TeeOutputStream;

public final class OutputExample {

    private static final String INPUT = "This should go to the output.";

    public static void runExample() {
        System.out.println("Output example...");
        TeeInputStream teeIn = null;
        TeeOutputStream teeOut = null;

        try {
            // TeeOutputStream
            ByteArrayInputStream in = new ByteArrayInputStream(INPUT.getBytes("US-ASCII"));
            ByteArrayOutputStream out1 = new ByteArrayOutputStream();
            ByteArrayOutputStream out2 = new ByteArrayOutputStream();

            teeOut = new TeeOutputStream(out1, out2);
            teeIn = new TeeInputStream(in, teeOut, true);
            teeIn.read(new byte[INPUT.length()]);

            System.out.println("Output stream 1: " + out1.toString());
            System.out.println("Output stream 2: " + out2.toString());
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            // No need to close teeOut. When teeIn closes, it will also close its
            // output stream (which is teeOut), which will in turn close the 2
            // branches (out1, out2).
            try { teeIn.close(); } catch (IOException e) { e.printStackTrace(); }
        }
    }
}

Output

Output example...
Output stream 1: This should go to the output.
Output stream 2: This should go to the output.

2. Download the Complete Example

This was an introduction to Apache Commons IO, covering most of the important classes that provide easy solutions to developers. There are many other capabilities in this vast package, but using this intro you get the general idea and a handful of useful tools for your future projects!

Download: You can download the full source code of this example here: ApacheCommonsIOExample.rar

Java Code Geeks and ZK are giving away FREE ZK Charts Licenses (worth over $2700)!

Struggling with the creation of fancy charts for your Java based UI? Then we have something especially for you! We are partnering with ZK, creator of cool Java tools, and we are running a contest giving away FREE perpetual licenses for the kick-ass ZK Charts library.

ZK Charts is an interactive charting library for your Java Web Applications. ZK Charts is built upon and works with the ZK Framework, which is one of the most established and recognizable Java Web Frameworks on the market. Since its first release in 2005, ZK has amassed well over 1.5 million downloads and is deployed in a large number of Fortune 500 companies including Barclays, eBay, Roche, Deutsche Bank, Sony and Audi!

ZK Charts provides a comprehensive Java API for controlling your charts from the server-side, with the communication between the client and server being taken care of transparently by the library. Additionally, ZK Charts is powered by models, which makes creating and updating charts intuitive and effortless for developers. ZK’s model based system is used throughout our components to provide the best development experience available to all our users.

ZK Charts comes with an extensive chart set, including but not limited to:

Line charts
Area charts
Column & bar charts
Pie charts
Scatter charts
Bubble charts
Master-detail chart
Angular gauge chart
Dual axes gauge chart
Spider web chart
Polar chart
Waterfall chart
Funnel chart

and many more, along with the ability to combine them.

Enter the contest now to win your very own FREE ZK Charts Perpetual License. There will be a total of 3 winners! In addition, we will send you free tips and the latest news from the Java community to master your technical knowledge (you can unsubscribe at any time). In order to increase your chances of winning, don’t forget to refer as many of your friends as possible! You will get 3 more entries for every friend you refer, that is 3 times more chances! Make sure to use your lucky URL to spread the word!
You can share it on your social media channels, or even mention it in a blog post if you are a blogger! Good luck and may the force be with you!

Join the Contest!

The Drools and jBPM KIE Apps platform

With the Drools and jBPM (KIE) 6 series came a new workbench, with the promise of eventual end user extensibility. I finally have some teaser videos to show this working and what’s in store. Make sure you select 1080p and go full screen to see them at their best.

What you see in these videos is the same workbench available on the Drools videos page. Once this stuff is released, you’ll be able to extend an existing Drools or jBPM (KIE) installation or make a new one from scratch that doesn’t have Drools or jBPM in it – i.e. the workbench and its extension tooling are available standalone, and you get to choose which plugins you do or don’t want.

Here is a demo showing the new Bootstrap dynamic grid view builder used to build a perspective, which now doubles as an app. It uses the new RAD, JSFiddle inspired, environment to author a simple AngularJS plugin extension. This all writes to a Git backend, so you could author these with IntelliJ or Eclipse and just push them back into the Git repo. It then demonstrates the creation of a dynamic menu and registers our app there. It also demonstrates the new app directory. Apps are given labels and can then be discovered in the apps directory – instead of, or as well as, top menu entries. Over 2015 we’ll be building a case management system which will complement this perfectly as the domain front end – all creating a fantastic Self Service Software platform.

http://youtu.be/KoJ5A5g7y4E

Here is a slightly early video showing our app builder working with DashBuilder: http://youtu.be/Yhg31m4kRsM

Other components such as our Human Tasks and Forms will be available too.
We also have some cool infrastructure coming for event publication, capture and timeline reporting, so you can visualise social activity within your organization – you’ll be able to place the timeline components you see in this blog post on your app pages: http://blog.athico.com/2014/09/activity-insight-coming-in-drools-jbpm.html

All this is driven by our new project, UberFire, which provides the workbench infrastructure for all of this. The project is not yet announced or released, but we will do so soon – the website is currently just a placeholder, and we’ll blog as soon as there is something to see!

Reference: The Drools and jBPM KIE Apps platform from our JCG partner Mark Proctor at the Drools & jBPM blog.

Beginner’s Guide To Hazelcast Part 1

Introduction

I am going to be doing a series on Hazelcast. I learned about this product from Twitter: they decided to follow me, and after some research into what they do, I decided to follow them. I tweeted that Hazelcast would be a great backbone for a distributed password cracker. This got some interest, and I decided to go make one. A vice president of Hazelcast started corresponding with me, and we decided that while a cracker was a good project, the community (and I) would benefit from a series of posts for beginners. I have been getting a lot of good information from the book preview The Book of Hazelcast, found on www.hazelcast.com.

What is Hazelcast?

Hazelcast is a distributed, in-memory database. There are projects all over the world using Hazelcast. The code is open source under the Apache License 2.0.

Features

There are a lot of features already built into Hazelcast. Here are some of them:

Auto discovery of nodes on a network
High Availability
In-memory backups
The ability to cache data
Distributed thread pools (Distributed Executor Service)
The ability to have data in different partitions
The ability to persist data asynchronously or synchronously
Transactions
SSL support
Structures to store data: IList, IMap, MultiMap, ISet
Structures for communication among different processes: IQueue, ITopic
Atomic operations: IAtomicLong
Id generation: IdGenerator
Locking: ISemaphore, ICondition, ILock, ICountDownLatch

Working with Hazelcast

Just playing around with Hazelcast and reading has taught me to assume these things:

The data will be stored as an array of bytes. (This is not an assumption; I got this directly from the book.)
The data will go over the network.
The data is remote.
If the data is not in memory, it doesn’t exist.

Let me explain these assumptions:

The data will be stored as an array of bytes

I got this information from The Book of Hazelcast, so it is really not an assumption. This is important because not only is the data stored that way, so is the key.
This makes life very interesting if one uses something other than a primitive or a String as a key. The developer of hashCode() and equals() must think about them in terms of the key as an array of bytes instead of as a class.

The data will go over the network

This is a distributed database, and so parts of the data will be stored in other nodes. There are also backups and caching that happen too. There are techniques and settings to reduce transferring data over the network, but if one wants high availability, backups must be done.

The data is remote

This is a distributed database, and so parts of the database will be stored on other nodes. I put in this assumption not to resign to the fact that the data is remote, but to motivate designs that make sure operations are performed where most of the data is located. If the developer is skilled enough, this can be kept to a minimum.

If the data is not in memory, it doesn’t exist

Do not forget that this is an in-memory database. If it doesn’t get loaded into memory, the database will not know that data is stored somewhere else. This database doesn’t persist data to bring it up later; it persists because the data is important. There is no bringing it back from disk once it is out of memory, like a conventional database (MySQL) would do.

Data Storage

Java developers will be happy to know that Hazelcast’s data storage containers, all but one, are extensions of the java.util collection interfaces. For example, an IList follows the same method contracts as java.util.List. Here is a list of the different data storage types:

IList – This keeps a number of objects in the order they were put in.
IQueue – This follows BlockingQueue and can be used as an alternative to a Message Queue in JMS. This can be persisted via a QueueStore.
IMap – This extends ConcurrentMap. It can also be persisted by a MapStore. It also has a number of other features that I will talk about in another post.
- ISet – This keeps a set of unique elements where order is not guaranteed.
- MultiMap – This does not follow a typical map, as there can be multiple values per key.

Example Setup For all the features that Hazelcast contains, the initial setup steps are really easy:
1. Download the Hazelcast zip file at www.hazelcast.org and extract the contents.
2. Add the jar files found in the lib directory to one's classpath.
3. Create a file named hazelcast.xml and put the following into the file:

<?xml version="1.0" encoding="UTF-8"?>
<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config http://www.hazelcast.com/schema/config/hazelcast-config-3.0.xsd"
           xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

    <network>
        <join><multicast enabled="true"/></join>
    </network>

    <map name="a"></map>
</hazelcast>

Hazelcast looks in a few places for a configuration file:
- The path defined by the property hazelcast.config
- hazelcast.xml in the classpath, if classpath is included in the hazelcast.config
- The working directory
- If all else fails, hazelcast-default.xml is loaded, which is in the hazelcast.jar

If one does not want to deal with a configuration file at all, the configuration can be done programmatically. The configuration example here defines multicast for joining nodes together. It also defines the IMap "a". A Warning About Configuration Hazelcast does not copy configurations to each node. So if one wants to be able to share a data structure, it needs to be defined in every node exactly the same. Code This code brings up two nodes and places values in instance's IMap using an IdGenerator to generate keys and reads the data from instance2.
package hazelcastsimpleapp;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IdGenerator;
import java.util.Map;

/**
 * @author Daryl
 */
public class HazelcastSimpleApp {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        HazelcastInstance instance2 = Hazelcast.newHazelcastInstance();
        Map map = instance.getMap("a");
        IdGenerator gen = instance.getIdGenerator("gen");
        for (int i = 0; i < 10; i++) {
            map.put(gen.newId(), "stuff " + i);
        }
        Map map2 = instance2.getMap("a");
        for (Map.Entry entry : map2.entrySet()) {
            System.out.printf("entry: %d; %s\n", entry.getKey(), entry.getValue());
        }
        System.exit(0);
    }
}

Amazingly simple, isn't it? Notice that I didn't even use the IMap interface when I retrieved an instance of the map; I just used the java.util.Map interface. This isn't good for using the distributed features of Hazelcast, but for this example it works fine. One can observe the assumptions at work here. The first assumption is storing the information as an array of bytes. Notice the data and keys are serializable; this is important because that is needed to store the data. The second and third assumptions hold true with the data being accessed by the instance2 node. The fourth assumption holds true because every value that was put into the "a" map was displayed when read. All of this example can be found at http://darylmathisonblog.googlecode.com/svn/trunk/HazelcastSimpleApp using Subversion. The project was made using NetBeans 8.0. Conclusion A quick overview of the numerous features of Hazelcast was given, along with a simple example showing IMap and IdGenerator. A list of assumptions was discussed that applies when developing in a distributed, in-memory database environment. References The Book of Hazelcast.
Download from http://www.hazelcast.com

Reference: Beginner’s Guide To Hazelcast Part 1 from our JCG partner Daryl Mathison at the Daryl Mathison’s Java Blog blog....
software-development-2-logo

5 Ways Software Developers Can Become Better at Estimation

In my last post, I detailed four of the biggest reasons why software developers suck at estimation, but I didn’t talk about how to solve any of the problems I presented. While estimation will always be inherently difficult for software developers, all hope is not lost. In this post, I am going to give you five real tips you can utilize to become better at estimation–even for complex software development tasks.       Tip 1: Break Things Down Smaller In my last post, I talked about how lengthy time periods, which are so common with software development projects, tend to make estimation very difficult and inaccurate. If you are asked to estimate something that will take you five minutes, you are much more likely to be accurate than if you are asked to estimate something that will take you five months. So, how can we solve this problem? There is actually a relatively simple fix: Break things down into smaller chunks and estimate those smaller chunks. Yes, I know this seems simple and obvious–and I know that this approach is often met with skepticism. There are plenty of excuses you can make about why you can’t break things down into smaller pieces, but the truth is, most things can be broken down–if you are willing to put forth the effort. I’ve actually talked about why smaller is better and how to break down a backlog in the past, so I won’t rehash all the details again here. The key point to realize is that you are never likely to get good at estimating large things. Well, let me rephrase that: The only way you are going to get good at estimating large things is to learn how to break them down into many smaller things. If you really need to accurately estimate something, it is well worth the effort to spend the time breaking down what you are estimating into much smaller pieces. For example, suppose I was going to estimate how long it will take me to write a blog post. It’s not a very large task, but it’s big enough that estimates can be a bit inaccurate.
If I want to be more accurate, I can break down the task into smaller pieces. Consider the difference between trying to estimate:
- Write and publish a blog post

And:
- Research blog post and brainstorm
- Outline blog post
- Write first draft of blog post
- Add images, links and call-outs
- Schedule post for publishing

By breaking things down into smaller pieces, I can more accurately estimate each piece. In fact–here is a little trick–when things are broken down this small, I can actually time-box certain parts of the process–which effectively ensures my estimate is accurate (but we are jumping ahead; we’ll talk more about time-boxing in a little bit). The next time you are asked to implement some feature, instead of estimating how long you think it will take you to do it as a whole, try breaking down the task into very small pieces and estimating each piece individually. You can always add up the smaller estimates to give a more accurate estimate of the whole. But wait! I know exactly what you are going to say is wrong with this kind of estimation. Sure, each individual piece’s estimation may be more accurate, but when you add them back together, in aggregate, you’ll still get the same level of error as you would from estimating one large thing. All I can say to that argument is “try it.” To some degree you are right: the smaller errors in the smaller pieces will add up and cause the whole to be off by more in aggregate, but the smaller pieces also tend to average out. So, some take less time and some take more, which means that overall, you end up a lot more accurate than estimating one large thing with a large margin of error. Tip 2: Taking time to research Why do you suck at estimation? Because you don’t know enough about what you are estimating.
In the previous post, I talked about how the unknown unknowns, which plague many software development projects, make estimation extremely difficult, but I didn’t really talk about how to deal with these things that we don’t know that we don’t know. Again, the answer is really quite simple: research. The best way to get rid of an unknown unknown is to know about it. Whenever you are tasked with estimating something, your first instinct should be to want to do some research–to try and discover what it is that you don’t know that you don’t know yet. Unfortunately, most software developers don’t immediately think about doing research when trying to estimate something. Instead, they rely on past experience. If they’ve done something in the past that they deem similar enough, they will confidently estimate it–ignoring the possible pitfalls of the unknown unknowns. If they’ve never done something similar to what they are being asked to estimate, they’ll assume there are unknown unknowns everywhere and come up with estimates full of padding. Neither approach is good. Instead, you should first try and estimate how long it will take you to research a task before giving an estimate of how long the actual task will take. I’ve found that most software developers are pretty good at estimating how long it will take to research a task, even though they may be very bad at estimating how long it will take to complete the task itself. Once you are armed with research, you should have fewer unknown unknowns to deal with. You may still have some unknowns, but at least you’ll know about them. But, how does this look in reality? How do you actually research tasks that you are supposed to be estimating? Well, sometimes it involves pushing back and planning things out a bit ahead of time. I’ll give you an example of how this might work on a Scrum or Agile team. Suppose you want to start improving your estimates by doing research before estimating tasks.
The problem is that when you are working on an Agile project, you usually need to estimate the tasks in an iteration and don’t really have the time to research each and every task before you estimate it–especially the big ones. I’ve found the best thing to do in this scenario is, instead of estimating the big tasks right up front, to push those tasks back one iteration and estimate how long it will take to research each big task. So, you might have in your iteration any number of small research tasks which only have the purpose of getting you enough information to have a more accurate estimate for the big task in the next iteration. During these research tasks, you can also break down large tasks into smaller ones as you learn more about them. Wait… wait… wait… I know what you are thinking. I can’t just push a task into the next iteration. My boss and the business people will not like that. They want it done this iteration. Right you are, so how do you deal with this problem? Simple. You just start planning the bigger tasks one iteration in advance of when they need to be done. If you are working on an Agile team, you should adopt the habit of looking ahead and picking up research tasks for large tasks that will be coming up in future iterations. By always looking forward and doing research before estimating anything substantial, you’ll get into the habit of producing much more accurate estimates. This technique can also be applied to smaller tasks, by sometimes taking just five or ten minutes to do a minor amount of research on a task before giving an estimation. The next time you are trying to estimate a task, devote some time upfront to doing some research. You’ll be amazed at how much more accurate your estimates become. Tip 3: Track your time One of the big problems we have with estimating things is that we don’t have an accurate sense of time.
My memory of how long past projects took tends to be skewed based on factors like how much I was enjoying the work and how hungry I was. This skewed time in our heads can result in some pretty faulty estimations. For this reason, it is important to track the actual time things take you. It is a very good idea to get into the habit of always tracking your time on whatever task you are doing. Right now, as I am writing this blog post, my Pomodoro timer is ticking down, tracking my time, so that I’ll have a better idea of how long blog posts take me to write. I’ll also have an idea if I am spending too much time on part of the process. Once you get into the habit of tracking your time, you’ll have a better idea of how long things actually take you and where you are spending your time. It’s crazy to think that you’ll be good at estimating things that haven’t happened yet if you can’t even accurately say how long things that have happened took. Seriously, think about that for a minute. No, really. I want you to think about how absurd it is to believe that you can be good at estimating anything when you don’t have an accurate idea of how long past things you have done have taken. Many people argue that software development is unlike other work and it can’t be accurately estimated. While I agree that software development is more difficult to estimate than installing carpets or re-roofing houses, I think that many software developers suck at estimation because they have no idea how long things actually take. Do yourself a favor and start tracking your time. There are a ton of good tools for doing this, like:
- RescueTime
- Toggl
- PayMo

If you are curious about how I track my time and plan my week, check out this video I did explaining the process I developed. By the way, following this process has caused me to become extremely good at estimating. I can usually estimate an entire week’s worth of work within one to two hours of accuracy.
And I know this for a fact, because I track it. Tip 4: Time-box things I said I’d get back to this one, and here it is. One big secret to becoming a software developer who is better at estimating tasks is to time-box those tasks. It’s almost like cheating. When you time-box a task, you ensure it will take exactly as long as you have planned for it to take. You might think that most software development tasks can’t be time-boxed, but you are wrong. I use the technique very frequently, and I have found that many tasks we do tend to be quite variable in the time it takes us to do them. I’ve found that if you give a certain amount of time to a task–and only that amount of time–you can work in a way to make sure the work gets done in that amount of time. Consider the example of writing unit tests: For most software developers, writing unit tests is a very subjective thing. Unless you are going for 100% code coverage, you usually just write unit tests until you feel that you have adequately tested the code you are trying to test. (If you do test-driven development, TDD, that might not be true either.) If you set a time-box for how long you are going to spend on writing unit tests, you can force yourself to work on the most important unit tests first and operate on the 80/20 principle to ensure you are getting the biggest bang for your buck. For many tasks, you can end up spending hours of extra time working on minor details that don’t really make that much of a difference. Time-boxing forces you to work on what is important first and to avoid doing things like premature optimization or obsessively worrying about variable names. Sure, sometimes you’ll have to run over the time-box you set for a task, but many times you’ll find that you actually got done what needed to be done, and you can always come back and gold-plate things later if there is time for it.
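As a toy illustration, a time-box can even be enforced in code: run the work on a worker thread and cancel it when the budget runs out. The TimeBox helper below is hypothetical, purely to make the idea concrete, not a tool from this article:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeBox {

    // Runs the task but gives it only a fixed time budget.
    // Returns true if the task ended inside the box, false if the box expired.
    public static boolean runWithBudget(Runnable task, long budgetMillis) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<?> future = pool.submit(task);
        try {
            future.get(budgetMillis, TimeUnit.MILLISECONDS);
            return true;                 // finished inside the box
        } catch (TimeoutException e) {
            future.cancel(true);         // box expired: interrupt the task
            return false;
        } catch (ExecutionException e) {
            return true;                 // task ended (by failing) inside the box
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            future.cancel(true);
            return false;
        } finally {
            pool.shutdownNow();
        }
    }
}
```

The human version of a time-box works the same way: when the timer expires, you stop, ship what you have, and gold-plate later if there is time.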
Again, just like tracking your time, time-boxing is a habit you have to develop, but once you get used to it, you’ll be able to use it as a cheat to become more accurate at estimates than you ever imagined possible. You may want to get yourself a Pomodoro or kitchen timer to help you track your time and time-box tasks. Sometimes it is nice to have a physical timer. Tip 5: Revise your estimates Here is a little secret: You don’t have to get it right on the first go. Instead, you can actually revise your estimates as you progress through a task. Yes, I know that your boss wants you to give an accurate estimate right now, not as you get closer to being done, but you can always give your best estimate right now and revise it as you progress through the task. I can’t imagine any situation where giving more up-to-date information is not appreciated. Use the other four tips to make sure your original estimate is as accurate as possible, but every so often, you should take a moment to reevaluate what the actual current estimate is. Think about it this way: You know when you download a file and it tells you how long it will take? Would you prefer that it calculated that duration just at the beginning of the download process and never updated it? Of course not. Instead, most download managers show a constantly updated estimate of how much time is left. Just going through this process can make you better at estimations in general. When you are constantly updating and revising your estimates, you are forced to face the reasons why your original estimates were off. What about you? These are just a few of the most useful tips that I use to improve the accuracy of my estimates, but what about you? Is there something I am leaving out here? Let me know in the comments below.

Reference: 5 Ways Software Developers Can Become Better at Estimation from our JCG partner John Sonmez at the Making the Complex Simple blog....
apache-maven-logo

Configure JBoss / Wildfly Datasource with Maven

Most Java EE applications use database access in their business logic, so developers are often faced with the need to configure drivers and database connection properties in the application server. In this post, we are going to automate that task for JBoss / Wildfly and a PostgreSQL database using Maven. The work is based on my World of Warcraft Auctions Batch application from the previous post.

Maven Configuration Let’s start by adding the following to our pom.xml:

Wildfly Maven Plugin

<plugin>
    <groupId>org.wildfly.plugins</groupId>
    <artifactId>wildfly-maven-plugin</artifactId>
    <version>1.0.2.Final</version>
    <configuration>
        <executeCommands>
            <batch>false</batch>
            <scripts>
                <script>target/scripts/${cli.file}</script>
            </scripts>
        </executeCommands>
    </configuration>
    <dependencies>
        <dependency>
            <groupId>org.postgresql</groupId>
            <artifactId>postgresql</artifactId>
            <version>9.3-1102-jdbc41</version>
        </dependency>
    </dependencies>
</plugin>

We are going to use the Wildfly Maven Plugin to execute scripts with commands in the application server. Note that we also added a dependency on the PostgreSQL driver. This is for Maven to download the dependency, because we are going to need it later to add it to the server. There is also a ${cli.file} property that is going to be assigned in a profile. This is to indicate which script we want to execute. Let’s also add the following to the pom.xml:

Maven Resources Plugin

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-resources-plugin</artifactId>
    <version>2.6</version>
    <executions>
        <execution>
            <id>copy-resources</id>
            <phase>process-resources</phase>
            <goals>
                <goal>copy-resources</goal>
            </goals>
            <configuration>
                <outputDirectory>${basedir}/target/scripts</outputDirectory>
                <resources>
                    <resource>
                        <directory>src/main/resources/scripts</directory>
                        <filtering>true</filtering>
                    </resource>
                </resources>
                <filters>
                    <filter>${basedir}/src/main/resources/configuration.properties</filter>
                </filters>
            </configuration>
        </execution>
    </executions>
</plugin>

With the Resources Maven Plugin we are going to filter the script files contained in src/main/resources/scripts, replacing their placeholders with the properties contained in the ${basedir}/src/main/resources/configuration.properties file.
Finally, let’s add a few Maven profiles to the pom.xml, with the scripts that we want to run:

Maven Profiles

<profiles>
    <profile>
        <id>install-driver</id>
        <properties>
            <cli.file>wildfly-install-postgre-driver.cli</cli.file>
        </properties>
    </profile>
    <profile>
        <id>remove-driver</id>
        <properties>
            <cli.file>wildfly-remove-postgre-driver.cli</cli.file>
        </properties>
    </profile>
    <profile>
        <id>install-wow-auctions</id>
        <properties>
            <cli.file>wow-auctions-install.cli</cli.file>
        </properties>
    </profile>
    <profile>
        <id>remove-wow-auctions</id>
        <properties>
            <cli.file>wow-auctions-remove.cli</cli.file>
        </properties>
    </profile>
</profiles>

Wildfly Script Files Add Driver The script with the commands to add the driver:

wildfly-install-postgre-driver.cli

# Connect to Wildfly instance
connect

# Create PostgreSQL JDBC Driver Module
# If the module already exists, Wildfly will output a message saying that the module already exists and the script exits.
module add \
    --name=org.postgre \
    --resources=${settings.localRepository}/org/postgresql/postgresql/9.3-1102-jdbc41/postgresql-9.3-1102-jdbc41.jar \
    --dependencies=javax.api,javax.transaction.api

# Add Driver Properties
/subsystem=datasources/jdbc-driver=postgre: \
    add( \
        driver-name="postgre", \
        driver-module-name="org.postgre")

Database drivers are added to Wildfly as a module. In this way, the driver is widely available to all the applications deployed in the server. With ${settings.localRepository} we are pointing to the database driver jar downloaded into your local Maven repository. Remember the dependency that we added to the Wildfly Maven Plugin? It’s there to download the driver when you run the plugin and add it to the server. Now, to run the script we execute (you need to have the application server running):

mvn process-resources wildfly:execute-commands -P "install-driver"

The process-resources phase is needed to replace the properties in the script file. In my case ${settings.localRepository} is replaced by /Users/radcortez/.m3/repository/. Check the target/scripts folder.
After running the command, you should see the following output in the Maven log:

{"outcome" => "success"}

And on the server:

INFO [org.jboss.as.connector.subsystems.datasources] (management-handler-thread - 4) JBAS010404: Deploying non-JDBC-compliant driver class org.postgresql.Driver (version 9.3)
INFO [org.jboss.as.connector.deployers.jdbc] (MSC service thread 1-4) JBAS010417: Started Driver service with driver-name = postgre

wildfly-remove-postgre-driver.cli

# Connect to Wildfly instance
connect

if (outcome == success) of /subsystem=datasources/jdbc-driver=postgre:read-attribute(name=driver-name)

# Remove Driver
/subsystem=datasources/jdbc-driver=postgre:remove

end-if

# Remove PostgreSQL JDBC Driver Module
module remove --name=org.postgre

This script removes the driver from the application server. Execute mvn wildfly:execute-commands -P "remove-driver". You don’t need process-resources if you already executed the command before, unless you change the scripts.

Add Datasource The script with the commands to add a Datasource:

wow-auctions-install.cli

# Connect to Wildfly instance
connect

# Create Datasource
/subsystem=datasources/data-source=WowAuctionsDS: \
    add( \
        jndi-name="${datasource.jndi}", \
        driver-name=postgre, \
        connection-url="${datasource.connection}", \
        user-name="${datasource.user}", \
        password="${datasource.password}")

/subsystem=ee/service=default-bindings:write-attribute(name="datasource", value="${datasource.jndi}")

We also need a file to define the properties:

configuration.properties

datasource.jndi=java:/datasources/WowAuctionsDS
datasource.connection=jdbc:postgresql://localhost:5432/wowauctions
datasource.user=wowauctions
datasource.password=wowauctions

Default Java EE 7 Datasource Java EE 7 specifies that the container should provide a default Datasource.
Instead of defining a Datasource with the JNDI name java:/datasources/WowAuctionsDS in the application, we are going to point the default datasource binding to our newly created one with /subsystem=ee/service=default-bindings:write-attribute(name="datasource", value="${datasource.jndi}"). This way, we don’t need to change anything in the application. Execute the script with mvn wildfly:execute-commands -P "install-wow-auctions". You should get the following Maven output:

org.jboss.as.cli.impl.CommandContextImpl printLine
INFO: {"outcome" => "success"}
{"outcome" => "success"}
org.jboss.as.cli.impl.CommandContextImpl printLine
INFO: {"outcome" => "success"}
{"outcome" => "success"}

And on the server:

INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-1) JBAS010400: Bound data source

wow-auctions-remove.cli

# Connect to Wildfly instance
connect

# Remove Datasources
/subsystem=datasources/data-source=WowAuctionsDS:remove

/subsystem=ee/service=default-bindings:write-attribute(name="datasource", value="java:jboss/datasources/ExampleDS")

This is the script to remove the Datasource and revert to the Java EE 7 default Datasource. Run it by executing mvn wildfly:execute-commands -P "remove-wow-auctions".

Conclusion This post demonstrated how to automate adding and removing drivers to Wildfly instances, as well as adding and removing Datasources. This is useful if you want to switch between databases or if you’re configuring a server from the ground up. Think about CI environments. These scripts are also easily adjustable to other drivers. You can get the code from the WoW Auctions Github repo, which uses this setup. Enjoy!

Reference: Configure JBoss / Wildfly Datasource with Maven from our JCG partner Roberto Cortez at the Roberto Cortez Java Blog blog....
jboss-wildfly-logo

WebSocket Chat on WildFly and OpenShift

Chat is one of the most canonical samples used to explain WebSocket. It’s a fairly commonly used interface and makes it easy to explain the fundamental WebSocket concepts. Of course, Java EE 7 WebSocket has one too, available here! You can easily run it on WildFly using the following steps:

curl -O http://download.jboss.org/wildfly/8.1.0.Final/wildfly-8.1.0.Final.zip
unzip wildfly-8.1.0.Final.zip
./wildfly-8.1.0.Final/bin/standalone.sh
git clone https://github.com/javaee-samples/javaee7-samples.git
cd javaee7-samples
mvn -f websocket/chat/pom.xml wildfly:deploy

And then access it at http://localhost:8080/chat/. One of the biggest advantages of WebSocket is how it opens up a socket over the same port as HTTP, 8080 in this case. If you want to deploy this application to OpenShift, then WebSocket is available on port 8000 for regular access, and 8443 for secure access. This is explained in the figure below.

If you want to run this Chat application on OpenShift, then use the following steps:

1. Click here to provision a WildFly instance in OpenShift. Change the name to “chatserver” and everything else as default. Click on “Create Application” to create the application.
2. Clone the workspace: git clone ssh://544f08a850044670df00009e@chatserver-milestogo.rhcloud.com/~/git/chatserver.git/
3. Edit the first line of “javaee7-samples/websocket/chat/src/main/webapp/websocket.js” from: var wsUri = "ws://" + document.location.hostname + ":" + document.location.port + document.location.pathname + "chat"; to: var wsUri = "ws://" + document.location.hostname + ":8000" + document.location.pathname + "chat";
4. Create the WAR file: cd javaee7-samples mvn -f websocket/chat/pom.xml
5. Copy the generated WAR file to the workspace cloned earlier: cd .. cp javaee7-samples/websocket/chat/target/chat.war chatserver/deployments/ROOT.war
6. Remove existing files and add the WAR file to the git repository: cd chatserver git rm -rf src pom.xml git add deployments/ROOT.war git commit .
-m"updating files"
git push

And this shows the output as:

Counting objects: 6, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 6.88 KiB | 0 bytes/s, done.
Total 4 (delta 1), reused 0 (delta 0)
remote: Stopping wildfly cart
remote: Sending SIGTERM to wildfly:285130 ...
remote: Building git ref 'master', commit 05a7978
remote: Preparing build for deployment
remote: Deployment id is 14bcec20
remote: Activating deployment
remote: Deploying WildFly
remote: Starting wildfly cart
remote: Found 127.2.87.1:8080 listening port
remote: Found 127.2.87.1:9990 listening port
remote: /var/lib/openshift/544f08a850044670df00009e/wildfly/standalone/deployments /var/lib/openshift/544f08a850044670df00009e/wildfly
remote: /var/lib/openshift/544f08a850044670df00009e/wildfly
remote: CLIENT_MESSAGE: Artifacts deployed: ./ROOT.war
remote: -------------------------
remote: Git Post-Receive Result: success
remote: Activation status: success
remote: Deployment completed with status: success
To ssh://544f08a850044670df00009e@chatserver-milestogo.rhcloud.com/~/git/chatserver.git/
454bba9..05a7978  master -> master

And now your chat server is available at http://chatserver-milestogo.rhcloud.com and looks like:

Enjoy!

Reference: WebSocket Chat on WildFly and OpenShift from our JCG partner Arun Gupta at the Miles to go 2.0 … blog....
java-interview-questions-answers

Securing WebSocket using wss and HTTPS/TLS

50th tip on this blog, yaay! Tech Tip #49 explained how to secure WebSockets using username/password and Servlet Security mechanisms. This Tech Tip will explain how to secure WebSockets using HTTPS/TLS on WildFly. Let’s get started!

Create a new keystore:

keytool -genkey -alias websocket -keyalg RSA -keystore websocket.keystore -validity 10950
Enter keystore password:
Re-enter new password:
What is your first and last name?
  [Unknown]: Arun Gupta
What is the name of your organizational unit?
  [Unknown]: JBoss Middleware
What is the name of your organization?
  [Unknown]: Red Hat
What is the name of your City or Locality?
  [Unknown]: San Jose
What is the name of your State or Province?
  [Unknown]: CA
What is the two-letter country code for this unit?
  [Unknown]: US
Is CN=Arun Gupta, OU=JBoss Middleware, O=Red Hat, L=San Jose, ST=CA, C=US correct?
  [no]: yes

Enter key password for <websocket>
  (RETURN if same as keystore password):
Re-enter new password:

I used “websocket” as the password for convenience. Download WildFly 8.1, unzip, and copy the “websocket.keystore” file into the standalone/configuration directory.
Start WildFly as: ./bin/standalone.sh Connect to it using jboss-cli as: ./bin/jboss-cli.sh -c Add a new security realm as:

[standalone@localhost:9990 /] /core-service=management/security-realm=WebSocketRealm:add()
{"outcome" => "success"}

And configure it:

[standalone@localhost:9990 /] /core-service=management/security-realm=WebSocketRealm/server-identity=ssl:add(keystore-path=websocket.keystore, keystore-relative-to=jboss.server.config.dir, keystore-password=websocket)
{
    "outcome" => "success",
    "response-headers" => {
        "operation-requires-reload" => true,
        "process-state" => "reload-required"
    }
}

Add a new HTTPS listener as:

[standalone@localhost:9990 /] /subsystem=undertow/server=default-server/https-listener=https:add(socket-binding=https, security-realm=WebSocketRealm)
{
    "outcome" => "success",
    "response-headers" => {"process-state" => "reload-required"}
}

A simple sample to show TLS-based security for WebSocket is available at github.com/javaee-samples/javaee7-samples/tree/master/websocket/endpoint-wss. Clone the workspace and change directory to “websocket/endpoint-wss”. The sample’s deployment descriptor has:

<security-constraint>
    <web-resource-collection>
        <web-resource-name>Secure WebSocket</web-resource-name>
        <url-pattern>/*</url-pattern>
    </web-resource-collection>
    <user-data-constraint>
        <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
</security-constraint>

This ensures that any request coming to this application will be auto-directed to an HTTPS URL. Deploy the sample by giving the command: mvn wildfly:deploy

Now accessing http://localhost:8080/endpoint-wss redirects to https://localhost:8443/endpoint-wss. The browsers may complain about the self-signed certificate. For example, Chrome shows the following warning:And Safari shows the following warning:In either case, click on “Proceed to localhost” or “Continue” to proceed further. And then a secure WebSocket connection is established.
Another relevant point to understand is that a non-secure WebSocket connection cannot be made from an https-protected page. For example, the following code in our sample:

new WebSocket("ws://localhost:8080/endpoint-wss/websocket");

will throw the following exception in Chrome Developer Tools:

[blocked] The page at 'https://localhost:8443/endpoint-wss/index.jsp' was loaded over HTTPS, but ran insecure content from 'ws://localhost:8080/endpoint-wss/websocket': this content should also be loaded over HTTPS.
Uncaught SecurityError: Failed to construct 'WebSocket': An insecure WebSocket connection may not be initiated from a page loaded over HTTPS.

Enjoy!

Reference: Securing WebSocket using wss and HTTPS/TLS from our JCG partner Arun Gupta at the Miles to go 2.0 … blog....
java-interview-questions-answers

The JAXB Well Known Secret

Introduction I rediscovered a library that Java offers to the masses. When I first read the specification, I was confused and thought I needed all these special tools to implement it. I found recently that all that was needed was some annotations and a POJO. JAXB JAXB stands for Java Architecture for XML Binding. This architecture allows a developer to turn the data from a class into an XML representation. This is called marshalling. The architecture also allows a developer to reverse the process, turning an XML representation into a class. This is called unmarshalling. There is a tool, called xjc, that can create Java classes from XML Schema files. Another tool, called schemagen, creates xsd files from classes. Marshalling Marshalling and unmarshalling happen in several places in Java. The first place I was exposed to this was RMI. Objects are sent over the wire when used as parameters for remote method calls, hence the name Remote Method Invocation (RMI). Another place it happens is writing objects to a stream. The streams that implement this are ObjectOutputStream and ObjectInputStream. Another place that it happens is in ORM classes. Another way, of course, is writing an XML representation of an instance. Classes that want to be marshalled need to implement Serializable, and all of their member attributes need to implement Serializable too, with the exception of classes going through JAXB. Serializable is a marker interface. It has no methods to implement, but it shows that a class can be serialized or marshalled. An object that has been marshalled has had its data put into some persistent form. Unmarshalled objects have had their data read from a persistent state and joined with a class. This makes classpaths very important. For a fun fact, a valid entry in a classpath is http://ip:port/path/to/jar. I imagine some organizations make use of this by centralizing their jar files and the latest version is just a download away.
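The stream-based marshalling described above can be sketched in a few lines of plain JDK code; the helper class name below is illustrative, not from the article:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.io.UncheckedIOException;

public class SerializationRoundTrip {

    // Marshalling: object state -> array of bytes
    public static byte[] marshal(Serializable obj) {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(obj);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bytes.toByteArray();
    }

    // Unmarshalling: array of bytes -> object state, rejoined with its class
    public static Object unmarshal(byte[] data) {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data))) {
            return in.readObject();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        } catch (ClassNotFoundException e) {
            throw new IllegalStateException("Class not on the classpath", e);
        }
    }
}
```

Unmarshalling only works if the class of the serialized object is on the classpath, which is exactly why classpaths matter so much here.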
Example
I used Maven and Spring to do this example. The reason was not to make it more complicated, but to make the code cleaner to read and to focus more on using the technology I am showing. The dependencies in the pom.xml file are below:

<dependencies>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.11</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>com.sun.xml.bind</groupId>
        <artifactId>jaxb-impl</artifactId>
        <version>2.2.8-b01</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-core</artifactId>
        <version>3.2.3.RELEASE</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-context</artifactId>
        <version>3.2.3.RELEASE</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-beans</artifactId>
        <version>3.2.3.RELEASE</version>
    </dependency>
</dependencies>

The wonderful thing about JAXB is that it uses POJOs. Contact.java is the central POJO class in the collection of three.
package org.mathison.jaxb.beans;

import java.util.List;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlElementWrapper;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
public class Contact {

    private String lastName;
    private String firstName;
    private String middleName;
    private String jobTitle;

    @XmlElementWrapper(name = "addresses")
    @XmlElement(name = "address")
    private List<Address> addresses;

    @XmlElementWrapper(name = "phone-numbers")
    @XmlElement(name = "phone-number")
    private List<PhoneNumber> numbers;

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getMiddleName() {
        return middleName;
    }

    public void setMiddleName(String middleName) {
        this.middleName = middleName;
    }

    public String getJobTitle() {
        return jobTitle;
    }

    public void setJobTitle(String jobTitle) {
        this.jobTitle = jobTitle;
    }

    public List<Address> getAddresses() {
        return addresses;
    }

    public void setAddresses(List<Address> addresses) {
        this.addresses = addresses;
    }

    public List<PhoneNumber> getNumbers() {
        return numbers;
    }

    public void setNumbers(List<PhoneNumber> numbers) {
        this.numbers = numbers;
    }

    @Override
    public String toString() {
        return "Contact{" + "lastName=" + lastName + ", firstName=" + firstName
            + ", middleName=" + middleName + ", jobTitle=" + jobTitle
            + ", addresses=" + addresses + ", numbers=" + numbers + '}';
    }

    @Override
    public int hashCode() {
        int hash = 3;
        hash = 23 * hash + (this.lastName != null ? this.lastName.hashCode() : 0);
        hash = 23 * hash + (this.firstName != null ? this.firstName.hashCode() : 0);
        hash = 23 * hash + (this.middleName != null ? this.middleName.hashCode() : 0);
        hash = 23 * hash + (this.jobTitle != null ? this.jobTitle.hashCode() : 0);
        hash = 23 * hash + (this.addresses != null ? this.addresses.hashCode() : 0);
        hash = 23 * hash + (this.numbers != null ? this.numbers.hashCode() : 0);
        return hash;
    }

    @Override
    public boolean equals(Object obj) {
        if (obj == null) {
            return false;
        }
        if (getClass() != obj.getClass()) {
            return false;
        }
        final Contact other = (Contact) obj;
        if ((this.lastName == null) ? (other.lastName != null) : !this.lastName.equals(other.lastName)) {
            return false;
        }
        if ((this.firstName == null) ? (other.firstName != null) : !this.firstName.equals(other.firstName)) {
            return false;
        }
        if ((this.middleName == null) ? (other.middleName != null) : !this.middleName.equals(other.middleName)) {
            return false;
        }
        if ((this.jobTitle == null) ? (other.jobTitle != null) : !this.jobTitle.equals(other.jobTitle)) {
            return false;
        }
        if (!listEquals(this.addresses, other.addresses)) {
            return false;
        }
        if (!listEquals(this.numbers, other.numbers)) {
            return false;
        }
        return true;
    }

    // Null- and size-safe comparison; the size check guards against one
    // list merely containing a subset of the other.
    private boolean listEquals(List first, List second) {
        if (first == null || second == null) {
            return first == second;
        }
        if (first.size() != second.size()) {
            return false;
        }
        for (Object o : first) {
            if (!second.contains(o)) {
                return false;
            }
        }
        return true;
    }
}

The main part to look at is the annotations. @XmlRootElement defines that this class is the root element. @XmlAccessorType(XmlAccessType.FIELD) tells the architecture that the fields will be used to define the elements in the XML. The annotations can be put on the getters as well; if the annotation is not used, JAXB gets confused as to which to use. For instances where a list is present, @XmlElementWrapper is used to tell JAXB what the outer tag will be. For example, there is a list of addresses. The wrapper takes a parameter named "name", and it is filled with "addresses". When the XML is rendered, the tag "addresses" will be wrapped around the collection of addresses. The @XmlElement annotation is used when one wants to change the tag of a property.
To come back to our address list, the annotation has redefined the addresses list to "address". This causes each address object to have a tag of "address" instead of "addresses", which is already taken. The same pattern is used for numbers. The rest of the properties will have tags that match their names. For example, lastName will be turned into the tag "lastName". The other two POJOs, PhoneNumber.java and Address.java, have public enum classes. Here is PhoneNumber.java:

package org.mathison.jaxb.beans;

import javax.xml.bind.annotation.XmlRootElement;
import javax.xml.bind.annotation.XmlType;

@XmlRootElement
public class PhoneNumber {

    @XmlType(name = "phone-type")
    public enum Type {
        HOME, WORK, HOME_FAX, WORK_FAX;
    }

    private Type type;
    private String number;

    public Type getType() {
        return type;
    }

    public void setType(Type type) {
        this.type = type;
    }

    public String getNumber() {
        return number;
    }

    public void setNumber(String number) {
        this.number = number;
    }

    @Override
    public String toString() {
        return "PhoneNumber{" + "type=" + type + ", number=" + number + '}';
    }

    @Override
    public int hashCode() {
        int hash = 7;
        hash = 37 * hash + (this.type != null ? this.type.hashCode() : 0);
        hash = 37 * hash + (this.number != null ? this.number.hashCode() : 0);
        return hash;
    }

    @Override
    public boolean equals(Object obj) {
        if (obj == null) {
            return false;
        }
        if (getClass() != obj.getClass()) {
            return false;
        }
        final PhoneNumber other = (PhoneNumber) obj;
        if (this.type != other.type) {
            return false;
        }
        if ((this.number == null) ? (other.number != null) : !this.number.equals(other.number)) {
            return false;
        }
        return true;
    }
}

The annotation of note is @XmlType. It tells JAXB that this is a class with a limited number of values, and it takes a name parameter. The last POJO also uses @XmlType to define its public enum class. It can be found in Address.java.

Putting It All Together
With all of this annotation and class definition in place, it is time to pull it all together into one main class.
Here is App.java, the main class:

package org.mathison.jaxb.app;

import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.Unmarshaller;
import org.mathison.jaxb.beans.Contact;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.GenericXmlApplicationContext;

public class App {

    public static void main(String[] args) {
        ApplicationContext cxt = new GenericXmlApplicationContext("jaxb.xml");
        Contact contact = cxt.getBean("contact", Contact.class);
        StringWriter writer = new StringWriter();

        try {
            JAXBContext context = JAXBContext.newInstance(Contact.class);

            // create XML from an instance of Contact
            Marshaller m = context.createMarshaller();
            m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
            m.marshal(contact, writer);

            String xml = writer.getBuffer().toString();
            System.out.println(xml);

            // take the XML back to a Contact
            StringReader reader = new StringReader(xml);
            Unmarshaller u = context.createUnmarshaller();
            Contact fromXml = (Contact) u.unmarshal(reader);

            System.out.println("Are the instances equivalent: " + contact.equals(fromXml));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

First, the instance of Contact is retrieved from the ApplicationContext. Second, an instance of JAXBContext is created with the Contact class as the root class. The context analyzes the class structure and creates a context that can marshal or unmarshal the Contact, Address and PhoneNumber classes. In the next section, a Marshaller is created from the JAXBContext. The property Marshaller.JAXB_FORMATTED_OUTPUT is set to true; this produces XML output that is formatted. If the property were not set, the XML would come out as one line of text. Then the marshaller is called to marshal contact, writing the result to a StringWriter. The XML is then printed to System.out.
The output should look like the following:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<contact>
    <lastName>Mathison</lastName>
    <firstName>Daryl</firstName>
    <middleName>Bob</middleName>
    <jobTitle>Developer</jobTitle>
    <addresses>
        <address>
            <addressLine>123 Willow View</addressLine>
            <city>Cibolo</city>
            <state>TX</state>
            <type>HOME</type>
            <zipCode>78228</zipCode>
        </address>
        <address>
            <addressLine>411 Grieg</addressLine>
            <city>San Antonio</city>
            <state>TX</state>
            <type>WORK</type>
            <zipCode>78228</zipCode>
        </address>
    </addresses>
    <phone-numbers>
        <phone-number>
            <number>210-123-4567</number>
            <type>WORK</type>
        </phone-number>
        <phone-number>
            <number>210-345-1111</number>
            <type>HOME</type>
        </phone-number>
    </phone-numbers>
</contact>

In the next section, the XML is unmarshalled back into an instance of Contact with its data. An Unmarshaller is created by the JAXBContext. Next, the unmarshaller is passed a StringReader with the just-created XML as its contents. The Unmarshaller returns an Object that gets cast to a Contact. The original instance of Contact is tested against the new Contact instance to see if they are equivalent. The output should show: Are the instances equivalent: true.

Summary
In this example, an instance of Contact was turned into XML, and the resulting XML was turned back into a Contact instance with the help of JAXB. JAXB is an architecture that maps the state of an object into XML and maps XML back into an object.

References
http://www.techrepublic.com/blog/programming-and-development/jaxb-20-offers-improved-xml-binding-in-java/498
http://www.vogella.com/articles/JAXB/article.html
http://en.wikipedia.org/wiki/JAXB

Reference: The JAXB Well Known Secret from our JCG partner Daryl Mathison at the Daryl Mathison’s Java Blog blog....

Adaptive heap sizing

While enhancing our test bed to improve the Plumbr GC problem detector, I ended up writing a small test case I thought might be interesting for a wider audience. The goal I was chasing was to test the JVM’s self-adaptiveness with regard to how the heap is segmented between eden, survivor and tenured spaces. The test itself generates objects in batches. Batches are generated once per second, and each batch is 500KB in size. The objects are referenced for five seconds; after this the references are removed and the objects from that particular batch become eligible for garbage collection. The test was run with an Oracle Hotspot 7 JVM on Mac OS X, using ParallelGC, and was given 30MB of heap space to work with. Knowing the platform, we can expect the JVM to launch with the following heap configuration: the JVM will start with 10MB in Young and 20MB in Tenured space, as without explicit configuration the JVM uses a 1:2 ratio to distribute heap between the Young and Tenured spaces. On my Mac OS X, the 10MB of young space is further distributed between Eden and two Survivor spaces, given 8MB and 2x1MB correspondingly.
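Those expectations follow from ParallelGC’s sizing flags. Assuming the usual defaults of -XX:NewRatio=2 (Tenured is twice Young) and -XX:SurvivorRatio=8 (Eden is eight times one Survivor space), the back-of-the-napkin split works out as:

```java
public class HeapSplit {
    public static void main(String[] args) {
        int heapMb = 30;          // -Xmx30m
        int newRatio = 2;         // tenured : young = 2 : 1
        int survivorRatio = 8;    // eden : one survivor = 8 : 1

        int youngMb = heapMb / (newRatio + 1);           // 30 / 3  = 10
        int tenuredMb = heapMb - youngMb;                //           20
        int survivorMb = youngMb / (survivorRatio + 2);  // 10 / 10 = 1 each
        int edenMb = youngMb - 2 * survivorMb;           //           8

        System.out.printf("young=%d tenured=%d eden=%d survivor=%d%n",
                youngMb, tenuredMb, edenMb, survivorMb);
    }
}
```

Running this prints young=10 tenured=20 eden=8 survivor=1, matching the 10MB/20MB and 8MB/2x1MB figures just mentioned.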
Again, these are the platform-specific defaults used. Indeed, when launching the test and peeking under the hood with jstat, we see the following, confirming our back-of-the-napkin estimates:

My Precious:gc-pressure me$ jstat -gc 2533 1s
S0C    S1C    S0U  S1U  EC     EU     OC      OU   PC      PU     YGC YGCT  FGC FGCT  GCT
1024.0 1024.0 0.0  0.0  8192.0 5154.4 20480.0 0.0  21504.0 2718.9 0   0.000 0   0.000 0.000
1024.0 1024.0 0.0  0.0  8192.0 5502.1 20480.0 0.0  21504.0 2720.1 0   0.000 0   0.000 0.000
1024.0 1024.0 0.0  0.0  8192.0 6197.5 20480.0 0.0  21504.0 2721.0 0   0.000 0   0.000 0.000
1024.0 1024.0 0.0  0.0  8192.0 6545.2 20480.0 0.0  21504.0 2721.2 0   0.000 0   0.000 0.000
1024.0 1024.0 0.0  0.0  8192.0 7066.8 20480.0 0.0  21504.0 2721.6 0   0.000 0   0.000 0.000
1024.0 1024.0 0.0  0.0  8192.0 7588.3 20480.0 0.0  21504.0 2722.1 0   0.000 0   0.000 0.000

From here, we can also make the next set of predictions about what is going to happen:
- The 8MB in Eden will fill in around 16 seconds – remember, we are generating 500KB of objects per second.
- At any moment we have approximately 2.5MB of live objects – generating 500KB each second and keeping references to the objects for five seconds gives us just about that number.
- Minor GC will trigger whenever Eden is full – meaning we should see a minor GC every 16 seconds or so.
- After the minor GC, we will end up with a premature promotion – the Survivor spaces are just 1MB in size, and the live set of 2.5MB will not fit into either of our 1MB Survivor spaces. So the only way to clean Eden is to promote the 1.5MB (2.5MB - 1MB) of live objects not fitting into Survivor to the Tenured space.

Checking the logs gives us confidence about these predictions as well:

My Precious:gc-pressure me$ jstat -gc -t 2575 1s
Time S0C    S1C    S0U    S1U    EC     EU     OC      OU     PC      PU     YGC YGCT  FGC FGCT  GCT
6.6  1024.0 1024.0 0.0    0.0    8192.0 4117.9 20480.0 0.0    21504.0 2718.4 0   0.000 0   0.000 0.000
7.6  1024.0 1024.0 0.0    0.0    8192.0 4639.4 20480.0 0.0    21504.0 2718.7 0   0.000 0   0.000 0.000
... cut for brevity ...
14.7 1024.0 1024.0 0.0    0.0    8192.0 8192.0 20480.0 0.0    21504.0 2723.6 0   0.000 0   0.000 0.000
15.6 1024.0 1024.0 0.0    1008.0 8192.0 963.4  20480.0 1858.7 21504.0 2726.5 1   0.003 0   0.000 0.003
16.7 1024.0 1024.0 0.0    1008.0 8192.0 1475.6 20480.0 1858.7 21504.0 2728.4 1   0.003 0   0.000 0.003
... cut for brevity ...
29.7 1024.0 1024.0 0.0    1008.0 8192.0 8163.4 20480.0 1858.7 21504.0 2732.3 1   0.003 0   0.000 0.003
30.7 1024.0 1024.0 1008.0 0.0    8192.0 343.3  20480.0 3541.3 21504.0 2733.0 2   0.005 0   0.000 0.005
31.8 1024.0 1024.0 1008.0 0.0    8192.0 952.1  20480.0 3541.3 21504.0 2733.0 2   0.005 0   0.000 0.005
... cut for brevity ...
45.8 1024.0 1024.0 1008.0 0.0    8192.0 8013.5 20480.0 3541.3 21504.0 2745.5 2   0.005 0   0.000 0.005
46.8 1024.0 1024.0 0.0    1024.0 8192.0 413.4  20480.0 5201.9 21504.0 2745.5 3   0.008 0   0.000 0.008
47.8 1024.0 1024.0 0.0    1024.0 8192.0 961.3  20480.0 5201.9 21504.0 2745.5 3   0.008 0   0.000 0.008

Not in 16 seconds, but more like every 15 seconds or so, the garbage collection kicks in, cleans Eden, promotes ~1MB of live objects to one of the Survivor spaces and overflows the rest to the Old space. So far, so good. The JVM is behaving exactly the way we expect. The interesting part kicks in after the JVM has monitored the GC behaviour for a while and starts to understand what is happening.
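The article does not include the test's source, but a minimal driver producing this allocation pattern (one 500KB batch per second, each batch referenced for roughly five seconds) could look like the sketch below; the class and method names are mine, not Plumbr's actual test code, and it would be launched with -Xmx30m -XX:+UseParallelGC:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class GcPressure {
    static final int BATCH_BYTES = 500 * 1024; // 500KB per batch
    static final int LIVE_BATCHES = 5;         // keep five batches referenced

    // Allocate one batch; once five batches are live, drop the oldest,
    // making it eligible for garbage collection.
    static void step(Deque<byte[]> live) {
        live.addLast(new byte[BATCH_BYTES]);
        if (live.size() > LIVE_BATCHES) {
            live.removeFirst();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Deque<byte[]> live = new ArrayDeque<>();
        while (true) {
            step(live);          // one batch per second...
            Thread.sleep(1000);  // ...so each batch stays referenced ~5 seconds
        }
    }
}
```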
During our test case, this happens at around 90 seconds:

My Precious:gc-pressure me$ jstat -gc -t 2575 1s
Time  S0C    S1C    S0U    S1U    EC     EU     OC      OU      PC      PU     YGC YGCT  FGC FGCT  GCT
94.0  1024.0 1024.0 0.0    1024.0 8192.0 8036.8 20480.0 8497.0  21504.0 2748.8 5   0.012 0   0.000 0.012
95.0  1024.0 3072.0 1024.0 0.0    4096.0 353.3  20480.0 10149.6 21504.0 2748.8 6   0.014 0   0.000 0.014
96.0  1024.0 3072.0 1024.0 0.0    4096.0 836.6  20480.0 10149.6 21504.0 2748.8 6   0.014 0   0.000 0.014
97.0  1024.0 3072.0 1024.0 0.0    4096.0 1350.0 20480.0 10149.6 21504.0 2748.8 6   0.014 0   0.000 0.014
98.0  1024.0 3072.0 1024.0 0.0    4096.0 1883.5 20480.0 10149.6 21504.0 2748.8 6   0.014 0   0.000 0.014
99.0  1024.0 3072.0 1024.0 0.0    4096.0 2366.8 20480.0 10149.6 21504.0 2748.8 6   0.014 0   0.000 0.014
100.0 1024.0 3072.0 1024.0 0.0    4096.0 2890.2 20480.0 10149.6 21504.0 2748.8 6   0.014 0   0.000 0.014
101.0 1024.0 3072.0 1024.0 0.0    4096.0 3383.7 20480.0 10149.6 21504.0 2748.8 6   0.014 0   0.000 0.014
102.0 1024.0 3072.0 1024.0 0.0    4096.0 3909.7 20480.0 10149.6 21504.0 2748.8 6   0.014 0   0.000 0.014
103.0 3072.0 3072.0 0.0    2720.0 4096.0 323.0  20480.0 10269.6 21504.0 2748.9 7   0.016 0   0.000 0.016

What we see here is the amazing adaptability of the JVM. After learning about the application behaviour, the JVM has resized the survivor spaces to be big enough to hold all live objects. The new configuration for the Young space is now:
- Eden: 4MB
- Survivor spaces: 3MB each

After this, the GC frequency increases – Eden is now 50% smaller, and instead of ~16 seconds it now fills in around 8 seconds or so. But the benefit is also visible, as the survivor spaces are now large enough to accommodate the live objects at any given time. Coupled with the fact that no objects live longer than a single minor GC cycle (remember, just 2.5MB of live objects at any given time), we stop promoting objects to the old space. Continuing to monitor the JVM, we see that the old space usage is constant after the adaptation.
No more objects are promoted to old, but as no major GC is triggered, the ~10MB of garbage that managed to get promoted before the adaptation took place will live in the old space forever. You can also turn off the “amazing adaptiveness” if you are sure about what you are doing. Specifying -XX:-UseAdaptiveSizePolicy in your JVM parameters will instruct the JVM to stick to the parameters given at launch time and not try to outsmart you. Use this option with care; modern JVMs are generally really good at predicting a suitable configuration for you.Reference: Adaptive heap sizing from our JCG partner Nikita Salnikov-Tarnovski at the Plumbr Blog blog....

Ceylon: Ceylon command-line plugins

With Ceylon we try our best to make every developer’s life easier. We do this with a great language, a powerful IDE, a wonderful online module repository, but also with an amazing command-line interface (CLI). Our command line is built around the idea of discoverability, where you get a single executable called ceylon and lots of subcommands that you can discover via --help or completion. We have a number of predefined subcommands, but every so often we want to be able to write new subcommands. For example, I want to be able to invoke both the Java and JavaScript compilers and generate the API documentation in a single command, ceylon all, or I want to be able to invoke the ceylon.formatter module with ceylon format instead of ceylon run ceylon.formatter. Well, with Ceylon 1.1 we now support custom subcommands, fashioned after the git plugin system. They’re easy to write: just place them in script/your/module/ceylon-foo and package them with ceylon plugin pack your.module, and you can publish them to Herd. Now everyone can install your CLI plugin with ceylon plugin install your.module/1.0 and call it with ceylon foo. What’s even better is that it will be listed in ceylon --help and even work with autocompletion. ceylon.formatter uses one, and I encourage you to install it with ceylon plugin install ceylon.formatter/1.1.0 and format your code at will with ceylon format. ceylon.build.engine also defines one, and it just feels great being able to build your Ceylon project with ceylon build compile, I have to say. Unfortunately, that particular module has not yet been published to Herd, but hopefully it will be pushed soon.Reference: Ceylon: Ceylon command-line plugins from our JCG partner Stéphane Épardaud at the Ceylon Team blog blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.