
Agile VS Real Life

The Agile Manifesto tells us: "We have come to value individuals and interactions over processes and tools." Reality tells us otherwise. Want to do unit testing? Pick up a test framework and you're good to go. Want your organization to be agile? Scrum is very simple, and SAFe is scaled simplicity. We know there are no magic bullets. Yet we're still attracted to pre-wrapped solutions. Why? Good question. We're not stupid, most of us anyway. Yet we find it very easy to make up a story about how easy it's going to be. Here are a few theories.

We're concentrating on short-term gains. Whether it's the start-up pushing for a lucrative exit by beating the market, or the enterprise looking at the next investor call, companies are pushing their people to maximize value in the short term. With that in mind, people look for a "proven" tool or process that minimizes long-term investments. In fact, systems punish people if they do otherwise.

We don't understand complexity. Think about how many systems we're part of, how they impact each other, and then consider the things we haven't thought about. That's overwhelming. Our wee brain just got out of fight-or-flight mode; you want it to do full planning and execution with all those question marks? People are hard. Better get back to dry land where tools and processes are actually working.

We're biased in so many ways. One of our biases is called anchoring. Simply put, the first thing we hear about something becomes the benchmark we compare everything else to. It becomes our anchor. Now, when you're researching a new area, do you start with the whole methodology? Nope. We look for examples similar to our experiences. What comes up first when we search? The simple stuff. Tools and processes. Once we start there, there's no way back.

We don't live well with uncertainty. Short term is fine, because we have the illusion of control over it. Because of complexity, the long term is so out of our reach that we give up, and try to concentrate on short-term wins.
We don't like to read the small print. Small print hurts the future-perfect view. We avoid the context issues; we tell ourselves that the annotation applies to a minority of cases, to which we obviously don't belong. Give us the short-short version, and we'll take it from there.

We like to be part of the group. Groups are comfy. Belonging to one removes anxiety. Many companies choose Scrum because it works for them, so why won't it work for me? The only people who publish big methodology papers are from academia, and that's one group we don't want to be part of, heaven forbid.

That's why we like processes and tools. Fighting that is not only hard, but may carry a penalty. So what's the solution? Looking for simplicity again? So soon? Well, the good news is that it is possible, with discipline. If we have enough breathing room, if we don't get pushback from the rest of our company, if we acknowledge that we need to invest in learning, and if we understand that processes and tools are just the beginning, then there's hope for us yet. Lots of ifs. But if you don't want to bother, just go with this magical framework.

Reference: Agile VS Real Life from our JCG partner Gil Zilberfeld at the Geek Out of Water blog....

Goodbye Sense – Welcome Alternatives?

I only recently noticed that Sense, the Chrome plugin for Elasticsearch, has been pulled from the app store by its creator. There are quite strong opinions in this thread, and I would like to have Sense as a Chrome plugin as well. But I am also totally fine with Elasticsearch as a company trying to monetize some of its products, so that is maybe something we just have to accept. What is interesting is that it isn't even possible to fork the project and keep developing it, as there is no explicit license in the repo. I guess there is a lesson buried somewhere in here. In this post I would like to look at some of the alternatives for interacting with Elasticsearch. Though the good thing about Sense was that it is independent of the Elasticsearch installation, we are looking at plugins here. It might be possible to use some of them without installing them in Elasticsearch, but I didn't really try. The plugins generally do more things, but I am looking at the REST capabilities only.

Marvel

Marvel is the commercial plugin by Elasticsearch (free for development purposes). Though it does lots of additional things, it contains the new version of Sense. Marvel will track lots of the state of and interaction with Elasticsearch in a separate index, so be aware that it might store quite some data. Also, of course, you need to respect the license; when using it on a production system you need to pay. The main Marvel dashboard, which is Kibana, is available at http://localhost:9200/_plugin/marvel. Sense can be accessed directly using http://localhost:9200/_plugin/marvel/sense/index.html. The Sense version of Marvel behaves exactly like the one you are used to from the Chrome plugin. It has highlighting, autocompletion (even for new features), the history and the formatting.

elasticsearch-head

elasticsearch-head seems to be one of the oldest plugins available for Elasticsearch and it is recommended a lot.
The main dashboard is available at http://localhost:9200/_plugin/head/, which contains the cluster overview. There is an interface for building queries at the Structured Query tab. It lets you execute queries by selecting values from dropdown boxes, and it can even detect fields that are available for the index and type. Results are displayed in a table. Unfortunately the values that can be selected are rather outdated. Instead of the match query it still contains the text query that has been deprecated since Elasticsearch 0.19.9 and is not available anymore in newer versions of Elasticsearch. Another interface on the Any Request tab lets you execute custom requests. The text box that accepts the body has no highlighting and it is not possible to use tabs, but errors will be displayed, the response is formatted, links are set, and you have the option to use a table or the JSON format for responses. The history lets you execute older queries. There are other options like the Result Transformer that sound interesting, but I have never tried those.

elasticsearch-kopf

elasticsearch-kopf is a clone of elasticsearch-head that also provides an interface to send arbitrary requests to Elasticsearch. You can enter queries and let them be executed for you. There is a request history, you have highlighting, and you can format the request document, but unfortunately the interface is missing autocompletion. If you'd like to learn more about elasticsearch-kopf, I have recently published a tour through its features.

Inquisitor

Inquisitor is a tool to help you understand Elasticsearch queries. Besides other options it allows you to execute search queries. Index and type can be chosen from the ones available in the cluster. There is no formatting in the query field, you can't even use tabs for indentation, but errors in your query are displayed in the panel on top of the results while typing. The response is displayed in a table, and matching fields are automatically highlighted.
Because of the limited possibilities when entering text, the plugin seems to be more useful for the analyzing part or for pasting existing queries.

Elastic-Hammer

Andrew Cholakian, the author of Exploring Elasticsearch, has published another query tool, Elastic-Hammer. It can either be installed locally or used as an online version directly. It is a quite useful query tool that will display syntactic errors in your query and format images and links in a pretty response. It even offers autocompletion, though not as elaborate as the one Sense and Marvel provide: it will display any allowed term, no matter the context. So you can't really see which terms are currently allowed, only that the term is allowed at all. Nevertheless this can be useful. Searches can also be saved in local storage and executed again.

Conclusion

Currently none of the free and open source plugins seems to provide an interface that is as good as the one contained in Sense and Marvel. As Marvel is free for development you can still use it, but you need to install it in the instances again. Sense was more convenient and easier to start with, but I guess one can get along with Marvel the same way. Finally, I wouldn't be surprised if someone from the very active Elasticsearch community comes up with another tool that can take the place of Sense again.

Reference: Goodbye Sense – Welcome Alternatives? from our JCG partner Florian Hopf at the Dev Time blog....
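Whichever console you pick, they all ultimately just send JSON bodies to the Elasticsearch REST API. As a point of reference, here is a hedged sketch of the kind of current-style match query body you would type into Sense or any of the alternatives (the index name `myindex` and field `title` are made up for illustration; the snippet only builds and prints the JSON, it does not contact a cluster):

```java
public class MatchQueryBody {
    public static void main(String[] args) {
        // A current-style match query. The deprecated "text" query that
        // elasticsearch-head still offers would look similar but is no
        // longer accepted by newer Elasticsearch versions.
        String body = "{\n"
                + "  \"query\": {\n"
                + "    \"match\": { \"title\": \"elasticsearch\" }\n"
                + "  }\n"
                + "}";
        // You would POST this to e.g. http://localhost:9200/myindex/_search
        System.out.println(body);
    }
}
```

All of the tools above are, in essence, editors for exactly this kind of request body.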

Java SE 8 new features tour: Functional programming with Lambda Expression

This article of the "Java SE 8 new features tour" series will deep dive into understanding lambda expressions. I will show you a few different uses of lambda expressions; they all have in common the implementation of functional interfaces. I will explain how the compiler infers information from code, such as the specific types of variables, and what is really happening in the background. In the previous article, "Java SE 8 new features tour: The Big change, in Java Development world", I talked about what we are going to explore during this series. I started with an introduction to the main Java SE 8 features, followed by the installation process of JDK 8 on both Microsoft Windows and Apple Mac OS X platforms, with important advice and notes to take care of. Finally, we went through the development of a console application powered by a lambda expression to make sure that we had installed Java SE 8 properly. Source code is hosted on my GitHub account: Clone from HERE.

What is a lambda expression? Perhaps the best-known new feature of Java SE 8 is called Project Lambda, an effort to bring Java into the world of functional programming. In computer science terminology, a lambda is an anonymous function; that is, a function without a name. In Java, all functions are members of classes, and are referred to as methods. To create a method, you need to define the class of which it's a member. A lambda expression in Java SE 8 lets you define a class and a single method with very concise syntax, implementing an interface that has a single abstract method. Let's figure out the idea. Lambda expressions let developers simplify and shorten their code, making it more readable and maintainable, and removing more verbose class declarations. Let's take a look at a few code snippets.

Implementing an interface: Prior to Java SE 8, if you wanted to create a thread, you'd first define a class that implements the Runnable interface.
This is an interface that has a single abstract method named run that accepts no arguments. You might define the class in its own code file, named MyRunnable.java. And you might name the class MyRunnable, as I've done here. And then you'd implement the single abstract method.

public class MyRunnable implements Runnable {

    @Override
    public void run() {
        System.out.println("I am running");
    }

    public static void main(String[] args) {
        MyRunnable r1 = new MyRunnable();
        new Thread(r1).start();
    }
}

In this example, my implementation outputs a literal string to the console. You would then take that object and pass it to an instance of the Thread class. I'm instantiating my runnable as an object named r1, passing it to the Thread's constructor and calling the thread's start method. My code will now run in its own thread and its own memory space.

Implementing an inner class: You could improve on this code a bit: instead of declaring your class in a separate file, you might declare it as a single-use class, known as an inner class, local to the method in which it's used.

public static void main(String[] args) {
    Runnable r1 = new Runnable() {
        @Override
        public void run() {
            System.out.println("I am running");
        }
    };
    new Thread(r1).start();
}

So now, I'm once again creating an object named r1, but I'm calling the interface's constructor directly, and once again implementing its single abstract method. Then I'm passing the object to the Thread's constructor.

Implementing an anonymous class: And you can make it even more concise by declaring the class as an anonymous class, so named because it's never given a name. I'm instantiating the Runnable interface and immediately passing it to the Thread constructor. I'm still implementing the run method and I'm still calling the thread's start method.
public static void main(String[] args) {
    new Thread(new Runnable() {
        @Override
        public void run() {
            System.out.println("I am running");
        }
    }).start();
}

Using a lambda expression: In Java SE 8 you can refactor this code to significantly reduce it and make it a lot more readable. The lambda version might look like this.

public static void main(String[] args) {
    Runnable r1 = () -> System.out.println("I am running");
    new Thread(r1).start();
}

I'm declaring an object with a type of Runnable, but now I'm using a single line of code to declare the single abstract method implementation, and then once again I'm passing the object to the Thread's constructor. You are still implementing the Runnable interface and calling its run method, but you're doing it with a lot less code. In addition, it could be improved as the following:

public static void main(String[] args) {
    new Thread(() -> System.out.println("I am running")).start();
}

Here is an important quote from an early specs document about Project Lambda: "Lambda expressions can only appear in places where they will be assigned to a variable whose type is a functional interface." (Brian Goetz)

Let's break this down to understand what's happening.

What are functional interfaces? A functional interface is an interface that has only a single custom abstract method; that is, one that is not inherited from the Object class. Java has many of these interfaces, such as Runnable, Comparable, Callable, TimerTask and many others. Prior to Java 8, they were known as Single Abstract Method (SAM) interfaces. In Java 8 we now call them functional interfaces.

Lambda expression syntax: This lambda expression returns an implementation of the Runnable interface; it has two parts separated by a new bit of syntax called the arrow token, or the lambda operator. The first part of the lambda expression, before the arrow token, is the signature of the method you're implementing.
In this example, it's a no-arguments method, so it's represented just by parentheses. But if I'm implementing a method that accepts arguments, I would simply give the arguments names; I don't have to declare their types. Because the interface has only a single abstract method, the data types are already known, and one of the goals of a lambda expression is to eliminate unnecessary syntax. The second part of the expression, after the arrow token, is the implementation of the single method's body. If it's just a single line of code, as with this example, you don't need anything else. To implement a method body with multiple statements, wrap them in braces.

Runnable r = () -> {
    System.out.println("Hello!");
    System.out.println("Lambda!");
};

Lambda goals: Lambda expressions can reduce the amount of code you need to write and the number of custom classes you have to create and maintain. If you're implementing an interface for one-time use, it doesn't always make sense to create yet another code file or yet another named class. A lambda expression can define an anonymous implementation for one-time use and significantly streamline your code.

Defining and instantiating a functional interface: To get started learning about lambda expressions, I'll create a brand new functional interface: an interface with a single abstract method. Then I'll implement that interface with a lambda expression. You can use my source code project "JavaSE8-Features" hosted on GitHub to navigate the project code.

Method without any arguments, lambda implementation: In my source code, I'll actually put the interface into its own sub-package ending with lambda.interfaces, and I'll name the interface HelloInterface. In order to implement an interface with a lambda expression, it must have a single abstract method. I will declare a public method that returns void, and I'll name it doGreeting.
It won't accept any arguments. That is all you need to do to make an interface that's usable with lambda expressions. If you want, you can use a new annotation added in Java SE 8, named @FunctionalInterface.

/**
 * @author mohamed_taman
 */
@FunctionalInterface
public interface HelloInterface {
    void doGreeting();
}

Now I am ready to create a new class UseHelloInterface under the lambda.impl package, which will instantiate my functional interface (HelloInterface) as the following:

/**
 * @author mohamed_taman
 */
public class UseHelloInterface {

    public static void main(String[] args) {
        HelloInterface hello = () -> out.println("Hello from Lambda expression");
        hello.doGreeting();
    }
}

Run the file and check the result; it should run and output the following.

------------------------------------------------------------------------------------
--- exec-maven-plugin:1.2.1:exec (default-cli) @ Java8Features ---
Hello from Lambda expression
------------------------------------------------------------------------------------

So that's what the code can look like when you're working with a single abstract method that doesn't accept any arguments. Let's take a look at what it looks like with arguments.

Method with arguments, lambda implementation: Under lambda.interfaces I'll create a new interface and name it CalculatorInterface. Then I will declare a public method that returns void, and I will name it doCalculate; it will receive two integer arguments, value1 and value2.
/**
 * @author mohamed_taman
 */
@FunctionalInterface
public interface CalculatorInterface {
    public void doCalculate(int value1, int value2);
}

Now I am ready to create a new class UseCalculatorInterface under the lambda.impl package, which will instantiate my functional interface (CalculatorInterface) as the following:

public static void main(String[] args) {
    CalculatorInterface calc = (v1, v2) -> {
        int result = v1 * v2;
        out.println("The calculation result is: " + result);
    };
    calc.doCalculate(10, 5);
}

Note the doCalculate() arguments: they were named value1 and value2 in the interface, but you can name them anything here. I'll name them v1 and v2. I don't need to put int before the argument names; that information is already known, because the compiler can infer it from the functional interface's method signature.

Run the file and check the result; it should run and output the following.

------------------------------------------------------------------------------------
--- exec-maven-plugin:1.2.1:exec (default-cli) @ Java8Features ---
The calculation result is: 50
------------------------------------------------------------------------------------
BUILD SUCCESS

Always bear in mind the following rule: the interface can only have one abstract method. Then that interface and its single abstract method can be implemented with a lambda expression.

Using built-in functional interfaces with lambdas: I've previously described how to use a lambda expression to implement an interface that you've created yourself. Now, I'll show lambda expressions with built-in interfaces: interfaces that are a part of the Java runtime. I'll use two examples. I'm working in a package called lambda.builtin that's a part of the exercise files, and I'll start with this class, UseThreading. In this class, I'm implementing the Runnable interface.
This interface is a part of the multithreaded architecture of Java. My focus here is on how you code, not on how it operates. I'm going to show how to use lambda expressions to replace these inner classes. I'll comment out the code that's declaring the two objects. Then I'll re-declare them and do the implementation with lambdas. So let's start.

public static void main(String[] args) {
    // Old version
    // Runnable thrd1 = new Runnable() {
    //     @Override
    //     public void run() {
    //         out.println("Hello Thread 1.");
    //     }
    // };

    /* *****************************************
     * Using lambda expression inner classes   *
     ***************************************** */
    Runnable thrd1 = () -> out.println("Hello Thread 1.");
    new Thread(thrd1).start();

    // Old version
    /*
    new Thread(new Runnable() {
        @Override
        public void run() {
            out.println("Hello Thread 2.");
        }
    }).start();
    */

    /* ******************************************
     * Using lambda expression anonymous class  *
     ****************************************** */
    new Thread(() -> out.println("Hello Thread 2.")).start();
}

Let's look at another example. I will use a Comparator. The Comparator is another functional interface in Java which has a single abstract method: the compare method. Open the UseComparator class, and check the commented bit of code, which is the actual code before refactoring it to a lambda expression.
public static void main(String[] args) {
    List<String> values = new ArrayList<>();
    values.add("AAA");
    values.add("bbb");
    values.add("CCC");
    values.add("ddd");
    values.add("EEE");

    // Case sensitive sort operation
    sort(values);
    out.println("Simple sort:");
    print(values);

    // Case insensitive sort operation with anonymous class
    /*
    Collections.sort(values, new Comparator<String>() {
        @Override
        public int compare(String o1, String o2) {
            return o1.compareToIgnoreCase(o2);
        }
    });
    */

    // Case insensitive sort operation with lambda
    sort(values, (o1, o2) -> o1.compareToIgnoreCase(o2));
    out.println("Sort with Comparator");
    print(values);
}

As before, it doesn't provide you any performance benefit; the underlying functionality is exactly the same. Whether you declare your own classes, use inner or anonymous inner classes, or lambda expressions is completely up to you. In the next article of this series, we will explore and code how to traverse collections using lambda expressions, filter collections with Predicate interfaces, traverse collections with method references, implement default methods in interfaces, and finally implement static methods in interfaces.

Resources:
The Java Tutorials, Lambda Expressions
JSR 310: Date and Time API
JSR 337: Java SE 8 Release Contents
OpenJDK website
Java Platform, Standard Edition 8, API Specification

Reference: Java SE 8 new features tour: Functional programming with Lambda Expression from our JCG partner Mohamed Taman at the Improve your life Through Science and Art blog....
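To see the Comparator refactoring above end to end, here is a minimal, self-contained sketch (the class name LambdaSortDemo and the sample values are mine, not from the article's project; it uses Collections.sort directly rather than the article's static sort/print helpers):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class LambdaSortDemo {
    public static void main(String[] args) {
        List<String> values = new ArrayList<>();
        Collections.addAll(values, "ddd", "AAA", "CCC", "bbb");

        // The lambda implements Comparator<String>'s single abstract
        // method compare(String, String); argument types are inferred.
        Collections.sort(values, (o1, o2) -> o1.compareToIgnoreCase(o2));

        System.out.println(values); // prints "[AAA, bbb, CCC, ddd]"
    }
}
```

The lambda here is interchangeable with the anonymous Comparator class; since Java 8 you could also write the even shorter String::compareToIgnoreCase method reference.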

Getting an Infinite List of Primes in Java

A common problem is to determine the prime factorization of a number. The brute force approach is trial division (Wikipedia, Khan Academy), but that requires a lot of wasted effort if multiple numbers must be factored. One widely used solution is the Sieve of Eratosthenes (Wikipedia, Math World). It is easy to modify the Sieve of Eratosthenes to contain the largest prime factor of each composite number. This makes it extremely cheap to subsequently compute the prime factorization of numbers. If we only care about primality we can either use a bitmap with the Sieve of Eratosthenes, or use the Sieve of Atkin. (Sidenote: for clarity I'm leaving out the common optimizations that follow from the facts that a prime number is always "1 mod 2, n > 2" and "1 or 5 mod 6, n > 5". This can substantially reduce the amount of memory required for a sieve.)

public enum SieveOfEratosthenes {
    SIEVE;

    private int[] sieve;

    private SieveOfEratosthenes() {
        // initialize with first million primes - 15485865
        // initialize with first 10k primes - 104729
        sieve = initialize(104729);
    }

    /**
     * Initialize the sieve.
     */
    private int[] initialize(int sieveSize) {
        long sqrt = Math.round(Math.ceil(Math.sqrt(sieveSize)));
        int actualSieveSize = (int) (sqrt * sqrt);

        // data is initialized to zero
        int[] sieve = new int[actualSieveSize];

        for (int x = 2; x < sqrt; x++) {
            if (sieve[x] == 0) {
                for (int y = 2 * x; y < actualSieveSize; y += x) {
                    sieve[y] = x;
                }
            }
        }

        return sieve;
    }

    /**
     * Is this a prime number?
     *
     * @FIXME handle n >= sieve.length!
     *
     * @param n
     * @return true if prime
     * @throws IllegalArgumentException if negative number
     */
    public boolean isPrime(int n) {
        if (n < 0) {
            throw new IllegalArgumentException("value must be non-negative");
        }

        boolean isPrime = sieve[n] == 0;

        return isPrime;
    }

    /**
     * Factorize a number
     *
     * @FIXME handle n >= sieve.length!
     *
     * @param n
     * @return map of prime divisors (key) and exponent (value)
     * @throws IllegalArgumentException if negative number
     */
    private Map<Integer, Integer> factorize(int n) {
        if (n < 0) {
            throw new IllegalArgumentException("value must be non-negative");
        }

        final Map<Integer, Integer> factors = new TreeMap<Integer, Integer>();

        for (int factor = sieve[n]; factor > 0; factor = sieve[n]) {
            if (factors.containsKey(factor)) {
                factors.put(factor, 1 + factors.get(factor));
            } else {
                factors.put(factor, 1);
            }

            n /= factor;
        }

        // must add final term
        if (factors.containsKey(n)) {
            factors.put(n, 1 + factors.get(n));
        } else {
            factors.put(n, 1);
        }

        return factors;
    }

    /**
     * Convert a factorization to a human-friendly string. The format is a
     * comma-delimited list where each element is either a prime number p (as
     * "p"), or the nth power of a prime number as "p^n".
     *
     * @param factors factorization
     * @return string representation of factorization.
     * @throws IllegalArgumentException if negative number
     */
    public String toString(Map<Integer, Integer> factors) {
        StringBuilder sb = new StringBuilder(20);

        for (Map.Entry<Integer, Integer> entry : factors.entrySet()) {
            sb.append(", ");

            if (entry.getValue() == 1) {
                sb.append(String.valueOf(entry.getKey()));
            } else {
                sb.append(String.valueOf(entry.getKey()));
                sb.append("^");
                sb.append(String.valueOf(entry.getValue()));
            }
        }

        return sb.substring(2);
    }
}

This code has a major weakness: it will fail if the requested number is out of range. There is an easy fix: we can dynamically resize the sieve as required. We use a Lock to ensure multithreaded calls don't see the sieve in an intermediate state. We need to be careful to avoid getting into a deadlock between the read and write locks.

    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    /**
     * Initialize the sieve. This method is called when it is necessary to grow
     * the sieve.
     */
    private void reinitialize(int n) {
        try {
            lock.writeLock().lock();
            // allocate 50% more than required to minimize thrashing.
            sieve = initialize((3 * n) / 2);
        } finally {
            lock.writeLock().unlock();
        }
    }

    /**
     * Is this a prime number?
     *
     * @param n
     * @return true if prime
     * @throws IllegalArgumentException if negative number
     */
    public boolean isPrime(int n) {
        if (n < 0) {
            throw new IllegalArgumentException("value must be non-negative");
        }

        if (n >= sieve.length) {
            reinitialize(n);
        }

        boolean isPrime = false;
        try {
            lock.readLock().lock();
            isPrime = sieve[n] == 0;
        } finally {
            lock.readLock().unlock();
        }

        return isPrime;
    }

    /**
     * Factorize a number
     *
     * @param n
     * @return map of prime divisors (key) and exponent (value)
     * @throws IllegalArgumentException if negative number
     */
    private Map<Integer, Integer> factorize(int n) {
        if (n < 0) {
            throw new IllegalArgumentException("value must be non-negative");
        }

        final Map<Integer, Integer> factors = new TreeMap<Integer, Integer>();

        try {
            if (n >= sieve.length) {
                reinitialize(n);
            }

            lock.readLock().lock();
            for (int factor = sieve[n]; factor > 0; factor = sieve[n]) {
                if (factors.containsKey(factor)) {
                    factors.put(factor, 1 + factors.get(factor));
                } else {
                    factors.put(factor, 1);
                }

                n /= factor;
            }
        } finally {
            lock.readLock().unlock();
        }

        // must add final term
        if (factors.containsKey(n)) {
            factors.put(n, 1 + factors.get(n));
        } else {
            factors.put(n, 1);
        }

        return factors;
    }

Iterable<Integer> and foreach loops

In the real world it's often easier to use a foreach loop (or an explicit Iterator) than to probe a table item by item. Fortunately it's easy to create an iterator that's built on top of our self-growing sieve.

    /**
     * @see java.util.List#get(int)
     *
     * We can use a cache of the first few (1000? 10,000?) primes
     * for improved performance.
     *
     * @param n
     * @return nth prime (starting with 2)
     * @throws IllegalArgumentException if negative number
     */
    public Integer get(int n) {
        if (n < 0) {
            throw new IllegalArgumentException("value must be non-negative");
        }

        Iterator<Integer> iter = iterator();
        for (int i = 0; i < n; i++) {
            iter.next();
        }

        return iter.next();
    }

    /**
     * @see java.util.List#indexOf(java.lang.Object)
     */
    public int indexOf(Integer n) {
        if (!isPrime(n)) {
            return -1;
        }

        int index = 0;
        for (int i : sieve) {
            if (i == n) {
                return index;
            }
            index++;
        }
        return -1;
    }

    /**
     * @see java.lang.Iterable#iterator()
     */
    public Iterator<Integer> iterator() {
        return new EratosthenesListIterator();
    }

    public ListIterator<Integer> listIterator() {
        return new EratosthenesListIterator();
    }

    /**
     * List iterator.
     *
     * @author Bear Giles <bgiles@coyotesong.com>
     */
    static class EratosthenesListIterator extends AbstractListIterator<Integer> {
        int offset = 2;

        /**
         * @see com.invariantproperties.projecteuler.AbstractListIterator#getNext()
         */
        @Override
        protected Integer getNext() {
            while (true) {
                offset++;
                if (SIEVE.isPrime(offset)) {
                    return offset;
                }
            }
            // we'll always find a value since we dynamically resize the sieve.
        }

        /**
         * @see com.invariantproperties.projecteuler.AbstractListIterator#getPrevious()
         */
        @Override
        protected Integer getPrevious() {
            while (offset > 0) {
                offset--;
                if (SIEVE.isPrime(offset)) {
                    return offset;
                }
            }

            // we only get here if something went horribly wrong
            throw new NoSuchElementException();
        }
    }
}

IMPORTANT: The code

for (int prime : SieveOfEratosthenes.SIEVE) { ... }

is essentially an infinite loop. It will only stop once the JVM exhausts the heap space when allocating a new sieve. In practice this means that the maximum prime we can maintain in our sieve is around 1 G. That requires 4 GB with 4-byte ints. If we only care about primality and use a common optimization, that same 4 GB can hold primality information for roughly 64 billion values. For simplicity we can call this 9-to-10 digit numbers (base 10).

What if we put our sieve on a disk?
There is no reason why the sieve has to remain in memory. Our iterator can quietly load values from disk instead of an in-memory cache. A 4 TB disk, probably accessed in raw mode, would seem to bump the size of our sieve to 14-to-15 digit numbers (base 10). In fact it will be a bit less, because we'll have to double the size of our primitive types from int to long, and then probably to an even larger format.

More! More! More!

We can dramatically increase the effective size of our sieve by noting that we only have to compute sqrt(n) to initialize a sieve of n values. We can flip that and say that a fully populated sieve of n values can be used to populate another sieve of n² values. In this case we'll want to populate only a band, not the full n² sieve. Our in-memory sieve can now cover values up to roughly 40-digit numbers (base 10), and the disk-based sieve jumps to as much as 60-digit numbers (base 10), minus the space required for the larger values. There is no reason why this approach can't be taken even further: use a small sieve to bootstrap a larger transient sieve and use it, in turn, to populate an even larger sieve.

But how long will this take?

Aye, there's the rub. The cost to initialize a sieve of n values is O(n log log n): you're visiting every cell once (O(n)), and then, for each prime p found, visiting roughly n/p cells beyond it; the sum of those terms grows only slightly faster than n. You can use various tweaks to reduce the constants, but at the end of the day the work still grows a bit faster than the size of the sieve. For what it's worth, this is a problem where keeping the CPU's cache architecture in mind could make a big difference. In practical terms, any recent system should be able to create a sieve containing the first million primes within a few seconds. Bump the sieve to the first billion primes and the time has probably leapt to a week, maybe a month if limited JVM heap space forces us to use the disk heavily. My gut instinct is that it will take a server farm months to years to populate a TB disk.

Why bother?
For most of us the main takeaway is a demonstration of how to start a collection with a small seed, say a sieve with n = 1000, and transparently grow it as required. This is easy with prime numbers, but it isn't a huge stretch to imagine the same approach being used with, oh, RSS feeds. We're used to thinking of Iterators as some boring aspect of Collections, but in fact they give us a lot of flexibility when used as part of an Iterable.

There is also a practical reason for a large prime sieve – factoring large numbers. There are several good algorithms for factoring large numbers, but they're expensive – even "small" numbers may take months or years on a server farm. That's why the first step is always doing trial division with "small" primes – something that may take a day by itself.

Source Code

The good news is that I have published the source code for this… and the bad news is that it's part of ongoing doodling when I'm doing Project Euler problems. (There are no solutions here – it's entirely explorations of ideas inspired by the problems.) So the code is a little rough and should not be used to decide whether or not to bring me in for an interview (unless you're impressed): http://github.com/beargiles/projecteuler.

Reference: Getting an Infinite List of Primes in Java from our JCG partner Bear Giles at the Invariant Properties blog....
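The bootstrap idea described above – a fully populated sieve of n values seeding a band of a much larger range – can be sketched as a standalone segmented sieve. This is my own minimal illustration of the technique, not code from the linked repository; class and method names are made up for the example:

```java
import java.util.ArrayList;
import java.util.List;

public class SegmentedSieve {

    /** Primes up to limit, via the classic in-memory sieve. */
    static List<Integer> smallPrimes(int limit) {
        boolean[] composite = new boolean[limit + 1];
        List<Integer> primes = new ArrayList<>();
        for (int i = 2; i <= limit; i++) {
            if (!composite[i]) {
                primes.add(i);
                for (long j = (long) i * i; j <= limit; j += i) {
                    composite[(int) j] = true;
                }
            }
        }
        return primes;
    }

    /**
     * Sieve only the band [low, high), using a base sieve that covers
     * sqrt(high). Memory is proportional to the band, not the full range.
     */
    static List<Long> primesInBand(long low, long high) {
        List<Integer> base = smallPrimes((int) Math.sqrt(high) + 1);
        boolean[] composite = new boolean[(int) (high - low)];
        for (int p : base) {
            // first multiple of p inside the band (never below p * p,
            // since smaller composites are caught by smaller primes)
            long start = Math.max((long) p * p, ((low + p - 1) / p) * p);
            for (long j = start; j < high; j += p) {
                composite[(int) (j - low)] = true;
            }
        }
        List<Long> primes = new ArrayList<>();
        for (long n = Math.max(low, 2); n < high; n++) {
            if (!composite[(int) (n - low)]) {
                primes.add(n);
            }
        }
        return primes;
    }
}
```

The same band logic is what would let a disk-backed iterator materialize one chunk of the sieve at a time instead of holding the whole range in memory.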

Parsing an Excel File into JavaBeans using jXLS

This post shows how you can use jXLS to parse an Excel file into a list of JavaBeans. Here is a generic utility method I wrote to do that:

    /**
     * Parses an excel file into a list of beans.
     *
     * @param <T> the type of the bean
     * @param xlsFile the excel data file to parse
     * @param jxlsConfigFile the jxls config file describing how to map rows to beans
     * @return the list of beans or an empty list if there are none
     * @throws Exception if there is a problem parsing the file
     */
    public static <T> List<T> parseExcelFileToBeans(final File xlsFile,
            final File jxlsConfigFile) throws Exception {
        final XLSReader xlsReader = ReaderBuilder.buildFromXML(jxlsConfigFile);
        final List<T> result = new ArrayList<>();
        final Map<String, Object> beans = new HashMap<>();
        beans.put("result", result);
        try (InputStream inputStream = new BufferedInputStream(new FileInputStream(xlsFile))) {
            xlsReader.read(inputStream, beans);
        }
        return result;
    }

Example: Consider the following Excel file containing person information:

    FirstName  LastName  Age
    Joe        Bloggs    25
    John       Doe       30

Create the following Person bean to bind each Excel row to:

    package model;

    public class Person {

        private String firstName;
        private String lastName;
        private int age;

        public Person() {
        }

        public String getFirstName() {
            return firstName;
        }

        public void setFirstName(String firstName) {
            this.firstName = firstName;
        }

        public String getLastName() {
            return lastName;
        }

        public void setLastName(String lastName) {
            this.lastName = lastName;
        }

        public int getAge() {
            return age;
        }

        public void setAge(int age) {
            this.age = age;
        }
    }

Create a jXLS configuration file which tells jXLS how to process your Excel file and map rows to Person objects:

    <workbook>
      <worksheet name="Sheet1">
        <section startRow="0" endRow="0" />
        <loop startRow="1" endRow="1" items="result" var="person" varType="model.Person">
          <section startRow="1" endRow="1">
            <mapping row="1" col="0">person.firstName</mapping>
            <mapping row="1" col="1">person.lastName</mapping>
            <mapping row="1" col="2">person.age</mapping>
          </section>
          <loopbreakcondition>
            <rowcheck offset="0">
              <cellcheck offset="0" />
            </rowcheck>
          </loopbreakcondition>
        </loop>
      </worksheet>
    </workbook>

Now you can parse the Excel file into a list of Person objects with this one-liner:

    List<Person> persons = Utils.parseExcelFileToBeans(new File("/path/to/personData.xls"),
            new File("/path/to/personConfig.xml"));

Related posts: Parsing a CSV file into JavaBeans using OpenCSV

Reference: Parsing an Excel File into JavaBeans using jXLS from our JCG partner Fahd Shariff at the fahd.blog blog....
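Stripped of the jXLS specifics, the config file boils down to setting bean properties by name for each row. The sketch below shows that underlying idea with plain reflection; it is a hypothetical illustration of the concept, not jXLS internals, and all names here are my own:

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class RowMapper {

    /** Example bean with the same shape as the article's Person. */
    public static class Person {
        private String firstName;
        private int age;

        public String getFirstName() { return firstName; }
        public void setFirstName(String firstName) { this.firstName = firstName; }
        public int getAge() { return age; }
        public void setAge(int age) { this.age = age; }
    }

    /**
     * Maps each row (column name -> cell text) onto a fresh bean by calling
     * the matching setter. Only String and int properties are handled,
     * which is enough to mirror the Person example.
     */
    public static <T> List<T> toBeans(List<Map<String, String>> rows, Class<T> type) {
        try {
            List<T> result = new ArrayList<>();
            for (Map<String, String> row : rows) {
                T bean = type.getDeclaredConstructor().newInstance();
                for (Map.Entry<String, String> cell : row.entrySet()) {
                    String setter = "set" + Character.toUpperCase(cell.getKey().charAt(0))
                            + cell.getKey().substring(1);
                    for (Method method : type.getMethods()) {
                        if (method.getName().equals(setter) && method.getParameterCount() == 1) {
                            Class<?> param = method.getParameterTypes()[0];
                            Object value = param == int.class
                                    ? (Object) Integer.valueOf(cell.getValue())
                                    : cell.getValue();
                            method.invoke(bean, value);
                        }
                    }
                }
                result.add(bean);
            }
            return result;
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

jXLS adds the parts that matter in practice on top of this: reading the cells from the workbook, expression-based mappings like person.firstName, and the loop break condition.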

Tracing SQL statements in JBoss AS 7 using a custom logging handler

Using an ORM to abstract from your specific database and to let it create and issue all the SQL statements you would have to write by hand yourself seems handy. This is what made ORM solutions popular. But it also comes with a downside: as the ORM does a lot of work for you, you lose some degree of control over the generated SQL and you have to rely on the ORM to create a high-performance statement for you. But it can happen that the SQL generated by the ORM is not what you might have written by hand and expected the ORM to do for you. In this case you have to get back control over the SQL and put your hands on the code again.

In huge applications this task is not trivial, as there might be hundreds of statements issued to the database, stemming from hundreds of lines of Java code that make heavy use of JPA features. Tracing the SQL statement that your database profiling tool has identified as problematic down to the actual code line becomes tedious. We know that we can enable SQL statement logging for Hibernate with the following two lines in our persistence.xml:

    <property name="hibernate.show_sql" value="true"/>
    <property name="hibernate.format_sql" value="true"/>

But this will only output the already generated SQL; the actual Java code line is still not visible. For smaller applications it might be feasible to attach a debugger to the application server and debug through the code until you have found the line that logs the problematic SQL statement, but for bigger applications this is time consuming. As Hibernate itself does not provide any means of intercepting the logging and enhancing it with more information, we will have to do this on our own. The JBoss documentation indicates that it is possible to write your own custom logging handler.
As this logging handler receives all the logging messages, and therewith also the messages produced by Hibernate with enabled SQL logging, we can try to find the line we are looking for and then output a stack trace to our own log file. Writing a custom logging handler turns out to be very simple. All you have to do is set up a small project with a class that extends the class Handler from the JDK package java.util.logging:

    package mypackage;

    import java.util.logging.Handler;
    import java.util.logging.LogRecord;

    public class MyJBossLogger extends Handler {

        @Override
        public void publish(LogRecord record) {
        }

        @Override
        public void flush() {
        }

        @Override
        public void close() throws SecurityException {
        }
    }

The publish() method receives all logging output in the form of an instance of LogRecord. Its method getMessage() lets us access the output directly. Hence we can match this message against some keywords we have loaded from some configuration file:

    @Override
    public void publish(LogRecord record) {
        String message = record.getMessage();
        buffer.add(message + "\n");
        if (keywords == null) {
            keywords = loadKeywords();
        }
        if (matches(message, keywords)) {
            String stacktrace = "\nStacktrace:\n";
            StackTraceElement[] stackTrace = Thread.currentThread().getStackTrace();
            for (StackTraceElement element : stackTrace) {
                stacktrace += element.toString() + "\n";
            }
            buffer.add(stacktrace);
            flush();
        }
    }

The buffer here is some simple data structure (e.g. Guava's EvictingQueue) that buffers the last few lines, as the method publish() is called for each line(!) of output. As a complete SQL statement spans more than one line, we have to remember a couple of them. Next to the buffered lines and the current line we also output a String representation of the current stack trace. This tells us later in the log file from where we are called and therewith which line of Java code in our project causes the current statement.
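The two pieces the snippet leaves open – the bounded line buffer and the keyword match – could look roughly like this. This is my own sketch using a plain ArrayDeque instead of Guava's EvictingQueue; the class and method names are assumptions, not code from the original module:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class LogLineBuffer {

    private final Deque<String> lines = new ArrayDeque<>();
    private final int capacity;

    public LogLineBuffer(int capacity) {
        this.capacity = capacity;
    }

    /** Keep only the last 'capacity' lines, evicting the oldest first. */
    public void add(String line) {
        if (lines.size() == capacity) {
            lines.removeFirst();
        }
        lines.addLast(line);
    }

    /** Case-insensitive check used to decide whether to dump a stack trace. */
    public static boolean matches(String message, Iterable<String> keywords) {
        String lower = message.toLowerCase();
        for (String keyword : keywords) {
            if (lower.contains(keyword.toLowerCase())) {
                return true;
            }
        }
        return false;
    }

    /** Everything currently buffered, ready to be written by flush(). */
    public String contents() {
        return String.join("", lines);
    }
}
```

With a buffer like this, flush() only has to write contents() to the log file and clear the queue.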
Once we have compiled the project we can copy the resulting jar file to the newly created folder structure under $JBOSS_HOME/modules/system/layers/base/com/mydomain/mymodule/main (for JBoss AS 7.2). In order to tell JBoss AS about our new module, we have to create an XML file called module.xml with the following content:

    <?xml version="1.0" encoding="UTF-8"?>
    <module xmlns="urn:jboss:module:1.1" name="com.mydomain.mymodule">
        <resources>
            <resource-root path="MyJBossLogger-0.0.1-SNAPSHOT.jar"/>
        </resources>
    </module>

The name of the module corresponds to the path within the JBoss modules folder. It will also be used in the configuration file to configure our custom logging handler:

    ...
    <subsystem xmlns="urn:jboss:domain:logging:1.2">
        <custom-handler name="CUSTOM" module="com.mydomain.mymodule"
                class="com.mydomain.mymodule.MyJBossLogger">
            <level name="DEBUG"/>
        </custom-handler>
    ...

When we implement the flush() method of our logging handler to write the output to some log file, we will see something like the following (of course in condensed form):

    Hibernate: select ... from customer ...
    Stacktrace:
    java.lang.Thread.getStackTrace(Thread.java:1568)
    com.mydomain.mymodule.MyJBossLogger.publish(MyJBossLogger.java:20)
    org.jboss.logmanager.LoggerNode.publish(LoggerNode.java:292)
    org.jboss.logmanager.LoggerNode.publish(LoggerNode.java:300)
    org.jboss.logmanager.Logger.logRaw(Logger.java:721)
    org.jboss.logmanager.Logger.log(Logger.java:506)
    ...
    com.mydomain.myapp.ArticleEntity.getCustomers(ArticleRepository.java:234)
    ...

Here we can see clearly which OneToMany relation causes the problematic select statement we were looking for.

Conclusion

Using a custom logging handler to inject the current stack trace into the logging of the SQL statements may help you when you want to find the exact location in the source code where a concrete query is issued.
It also turned out that writing your own custom logging handler for JBoss AS is a straightforward task.

Reference: Tracing SQL statements in JBoss AS 7 using a custom logging handler from our JCG partner Martin Mois at the Martin's Developer World blog....

Spring Integration Java DSL sample – further simplification with Jms namespace factories

In an earlier blog entry I had touched on a fictitious Rube Goldberg flow for capitalizing a string through a complicated series of steps; the premise of the article was to introduce Spring Integration Java DSL as an alternative to defining integration flows through xml configuration files. I learned a few new things after writing that blog entry, thanks to Artem Bilan, and wanted to document those learnings here.

So, first my original sample. Here I have the following flow (the ones in bold):

- Take in a message of this type – "hello from spring integ"
- Split it up into individual words (hello, from, spring, integ)
- Send each word to an ActiveMQ queue
- Pick up the word fragments from the queue and capitalize each word
- Place the response back into a response queue
- Pick up the message, re-sequence based on the original sequence of the words
- Aggregate back into a sentence ("HELLO FROM SPRING INTEG") and
- Return the sentence back to the calling application.

EchoFlowOutbound.java:

    @Bean
    public DirectChannel sequenceChannel() {
        return new DirectChannel();
    }

    @Bean
    public DirectChannel requestChannel() {
        return new DirectChannel();
    }

    @Bean
    public IntegrationFlow toOutboundQueueFlow() {
        return IntegrationFlows.from(requestChannel())
                .split(s -> s.applySequence(true).get().getT2().setDelimiters("\\s"))
                .handle(jmsOutboundGateway())
                .get();
    }

    @Bean
    public IntegrationFlow flowOnReturnOfMessage() {
        return IntegrationFlows.from(sequenceChannel())
                .resequence()
                .aggregate(aggregate ->
                        aggregate.outputProcessor(g ->
                                Joiner.on(" ").join(g.getMessages()
                                        .stream()
                                        .map(m -> (String) m.getPayload())
                                        .collect(toList()))), null)
                .get();
    }

    @Bean
    public JmsOutboundGateway jmsOutboundGateway() {
        JmsOutboundGateway jmsOutboundGateway = new JmsOutboundGateway();
        jmsOutboundGateway.setConnectionFactory(this.connectionFactory);
        jmsOutboundGateway.setRequestDestinationName("amq.outbound");
        jmsOutboundGateway.setReplyChannel(sequenceChannel());
        return jmsOutboundGateway;
    }

It turns out, based on Artem Bilan's feedback, that a few things can be optimized here. First, notice how I have explicitly defined two direct channels – "requestChannel" for starting the flow that takes in the string message, and "sequenceChannel" to handle the message once it returns back from the jms message queue. These can actually be totally removed and the flow made a little more concise this way:

    @Bean
    public IntegrationFlow toOutboundQueueFlow() {
        return IntegrationFlows.from("requestChannel")
                .split(s -> s.applySequence(true).get().getT2().setDelimiters("\\s"))
                .handle(jmsOutboundGateway())
                .resequence()
                .aggregate(aggregate ->
                        aggregate.outputProcessor(g ->
                                Joiner.on(" ").join(g.getMessages()
                                        .stream()
                                        .map(m -> (String) m.getPayload())
                                        .collect(toList()))), null)
                .get();
    }

    @Bean
    public JmsOutboundGateway jmsOutboundGateway() {
        JmsOutboundGateway jmsOutboundGateway = new JmsOutboundGateway();
        jmsOutboundGateway.setConnectionFactory(this.connectionFactory);
        jmsOutboundGateway.setRequestDestinationName("amq.outbound");
        return jmsOutboundGateway;
    }

"requestChannel" is now being implicitly created just by declaring a name for it. The sequence channel is more interesting – quoting Artem Bilan:

    do not specify outputChannel for AbstractReplyProducingMessageHandler and rely on DSL

What it means is that here jmsOutboundGateway is an AbstractReplyProducingMessageHandler and its reply channel is implicitly derived by the DSL. Further, the two methods which were earlier handling the flows for sending out the message to the queue and then continuing once the message is back are collapsed into one. And IMHO it does read a little better because of this change.
The second good change, and the topic of this article, is the introduction of the Jms namespace factories. When I wrote the previous blog article, the DSL had support for defining AMQ inbound/outbound adapters/gateways; now there is support for Jms-based inbound/outbound adapters/gateways also. This simplifies the flow even further – the flow now looks like this:

    @Bean
    public IntegrationFlow toOutboundQueueFlow() {
        return IntegrationFlows.from("requestChannel")
                .split(s -> s.applySequence(true).get().getT2().setDelimiters("\\s"))
                .handle(Jms.outboundGateway(connectionFactory)
                        .requestDestination("amq.outbound"))
                .resequence()
                .aggregate(aggregate ->
                        aggregate.outputProcessor(g ->
                                Joiner.on(" ").join(g.getMessages()
                                        .stream()
                                        .map(m -> (String) m.getPayload())
                                        .collect(toList()))), null)
                .get();
    }

The inbound Jms part of the flow also simplifies to the following:

    @Bean
    public IntegrationFlow inboundFlow() {
        return IntegrationFlows.from(Jms.inboundGateway(connectionFactory)
                .destination("amq.outbound"))
                .transform((String s) -> s.toUpperCase())
                .get();
    }

Thus, to conclude, Spring Integration Java DSL is an exciting new way to concisely configure Spring Integration flows. It is already very impressive in how it simplifies the readability of flows, and the introduction of the Jms namespace factories takes it even further for JMS-based flows.

I have updated my sample application with the changes that I have listed in this article – https://github.com/bijukunjummen/rg-si.

Reference: Spring Integration Java DSL sample – further simplification with Jms namespace factories from our JCG partner Biju Kunjummen at the all and sundry blog....

You Want to Become a Software Architect? Here is Your Reading List!

How do you become a Software Architect? Well, I guess the best way would be to do about two dozen very different projects in different roles, with as many different technologies as possible. This would guarantee that you get a lot of experience with different approaches and challenges, which certainly would provide you with a lot of the stuff you need to know to fill the role of an architect. Unfortunately in the real world this is hard to accomplish. Often the next project uses similar technologies and strategies as the last one; also, project owners for some reason don't like it when you use their projects as a training ground. So we need an alternative way of learning, where we can learn from the mistakes made by others, instead of learning from our own. Here is a list of books I'd recommend for anybody wanting to become a Software Architect (in no special order):

- Akka Concurrency: This one is an odd one in the list. Akka is an actor framework for the JVM, written in Scala, but also usable in Java. I recommend it because it is a very different approach to structuring your code than the "normal" Java way.

- Domain-Driven Design: Tackling Complexity in the Heart of Software: I'm sure you have heard the term Domain Driven Design, right? If not: the basic idea is to structure your application based on the problem domain. Sounds simple and obvious? As usual the devil is in the detail.

- Softwarearchitekturen dokumentieren und kommunizieren: Entwürfe, Entscheidungen und Lösungen nachvollziehbar und wirkungsvoll festhalten: Sorry for recommending a German book in an English blog – I just don't know an English alternative. Although we learned that documentation is not as important as working software, documentation still is important, and this book will teach you a lot about how to document your architecture in a pragmatic way.

- Effektive Softwarearchitekturen: Ein praktischer Leitfaden: Another German one (sorry). A good overview of what belongs in an architecture and what influences you need to take into account.

- Specification by Example: How Successful Teams Deliver the Right Software: At least in my book, architecture work also means part of your task is to bring your team together on one page. The ideas from this book will help you to bring analysts, testers and developers together. Again the idea is simple, but executing it can become tough, since you are not dealing with code so much, but with people.

- Bridging the Communication Gap: Specification by Example and Agile Acceptance Testing: Same author, same basic topic. The title is better, because in the end it is not so much about the specification of your system, but about communication.

- ATDD by Example: A Practical Guide to Acceptance Test-Driven Development (Addison-Wesley Signature Series (Beck)): One more about testing. This one talks more about technical issues when using ATDD, which by the way has a huge overlap with Specification by Example.

- Vorgehensmuster für Softwarearchitektur: Kombinierbare Praktiken in Zeiten von Agile und Lean: This is the last German one in the list, I promise. When you're moving from Development to Architecture you'll have to work more with people, which at least for me makes things way more difficult, because a solution that worked yesterday might not work today. This book gives you many alternative strategies to try.

- Structure and Interpretation of Computer Programs (MIT Electrical Engineering and Computer Science Series): This one was a real eye opener to me. On one hand it will teach you some Scheme, which might not be so interesting, because for most of us this won't be the language we use to implement the next system. BUT it will also teach you why in many cases the functional approach is way more simple than the imperative way. If you are confused by the difference between simple and easy, try watching this talk by the way.

- Clean Code: A Handbook of Agile Software Craftsmanship: Of course an Architect is still a Developer, so he or she better knows how to code, and how to code well.

- REST und HTTP: Einsatz der Architektur des Web für Integrationsszenarien: Oh darn, another German one slipped through. I'm still shocked how many Developers and Architects don't know REST and why it is important and powerful. This book will fix that, at least in your case.

- HTTP: The Definitive Guide (Definitive Guides): While we are at the basics of most modern software systems: while HTTP isn't rocket science, it certainly helps to know how it really works. This one is the definitive guide on the topic.

- Release It!: Design and Deploy Production-Ready Software (Pragmatic Programmers): This one is just awesome. Full of stuff that can go bad in production and how to design your system so it can handle things gracefully. And fun to read too.

- 97 Things Every Programmer Should Know: Collective Wisdom from the Experts: Again, a Software Architect is just another Developer, so if you haven't read it yet you'll find lots of good ideas in here.

- 97 Things Every Software Architect Should Know: Collective Wisdom from the Experts: And even more ideas in here.

So that should keep you busy for the next month or two. Let me know what else we should read to become better Architects and Developers!

Reference: You Want to Become a Software Architect? Here is Your Reading List! from our JCG partner Jens Schauder at the Schauderhaft blog....

gonsole weeks: oops – it’s a framework

While Eclipse ships with a comprehensive Git tool, it seems that for certain tasks many developers switch to the command line. This gave Rüdiger and me the idea to start an open source project to provide a git console integration for the IDE. What happened so far during the gonsole weeks can be read in eclipse egit integration and content assist for git commands. Since our last update we have been busy replacing most of the explorative code passages with clean implementations based on unit tests. Additionally, the architecture got a thorough overhaul, which resulted in a split of reusable core functionality and the gonsole specific implementations. Though this wasn't the main intent, the rework led to an API that allows to implement other consoles with the same characteristics easily. In this post we will take an excursion to explain the few steps needed to write your own console.

Assume that you want to create a command line based calculator that provides arithmetic operations like sum, sub etc. The basic set of features is provided by the com.codeaffine.console.core plug-in that supplies the API to contribute the calculator console. First of all we need a plug-in project that declares the necessary imports.

In the gonsole project we got used to nested TDD. That's why we want to start by writing an acceptance test. Luckily com.codeaffine.console.core.pdetest exports a class called ConsoleBot that enables us to create such tests comfortably. Since these tests run in a plug-in environment we will place them in a fragment.

With these two projects in place we are ready to write our first happy path test. As you can see, the ConsoleBot is a JUnit Rule that hides the details of how to create a ready-to-test console. Furthermore it allows to simulate user interaction like typing commands and opening content assist. To check the expected outcome conveniently, there is a custom AssertJ assertion.
ConsoleBot#open() expects an implementation of ConsoleConfigurer as its first parameter. Simply to get our test compiling, the next step is to provide a skeleton implementation thereof. Such configurations contribute the pluggable parts that make up a particular console implementation. But before we can launch our console for the first time there is one last thing to do.

To hook the console into the Eclipse console view our plug-in has to provide an extension to the org.eclipse.ui.console.consoleFactories extension-point. The latter expects an implementation of IConsoleFactory, which is provided by our console API type ConsoleFactory. The sole coding needed is to override ConsoleFactory#getConsoleConfigurer() as shown above. Of course, having only a scaffold, the runtime console does nothing useful yet, and running our PDETest leads to a red bar.

Now we are ready to develop the interpreter, content-assist and input prompt components in the usual TDD manner. We proceed doing this until our end-to-end test turns green. We have even found that this can be a good starting point for an explorative break-out in case we are not sure which way to go. Once the end-to-end tests are green and we have gained enough confidence in our solution, we start to replace the hacked passages step by step with test driven solutions. For the example we took the hacking approach:

    class CalculatorConsoleCommandInterpreter implements ConsoleCommandInterpreter {

        static final String SUM = "sum";

        private static final String SUM_RESULT
            = "The sum of %s and %s is %s" + LINE_DELIMITER;

        private final ConsoleOutput consoleOutput;

        CalculatorConsoleCommandInterpreter( ConsoleOutput consoleOutput ) {
            this.consoleOutput = consoleOutput;
        }

        @Override
        public boolean isRecognized( String... commandLine ) {
            return commandLine.length > 0 && SUM.equals( commandLine[ 0 ] );
        }

        @Override
        public String execute( String... commandLine ) {
            int sum = parseInt( commandLine[ 1 ] ) + parseInt( commandLine[ 2 ] );
            consoleOutput.write( format( SUM_RESULT, commandLine[ 1 ], commandLine[ 2 ], valueOf( sum ) ) );
            return null;
        }
    }

The snippet makes our test happy by providing a simplistic ConsoleCommandInterpreter that supports the sum command. Supplemented with a similarly elaborated ContentProposalProvider, our calculator is now in action.

Although much more could be said about custom console implementations, it is time to bring this excursion to an end. Please note that the console core API is still under development and might undergo substantial changes. For those interested, we have added the calculator projects to the gonsole git repository and its build:

    Repository: https://github.com/rherrmann/gonsole
    Build: https://travis-ci.org/rherrmann/gonsole

This ensures that the example will be kept up to date. And of course for a more grown-up solution you might always have a look at the gonsole implementations. That's it for today folks, let's get back to work. And maybe the next time we are indeed able to show you more content assist features…

Reference: gonsole weeks: oops – it's a framework from our JCG partner Frank Appel at the Code Affine blog....
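To see the interpreter contract in isolation, a tiny harness along these lines exercises isRecognized()/execute() outside the console. The ConsoleOutput stand-in and the wiring are my own sketch – the real types live in com.codeaffine.console.core:

```java
import static java.lang.Integer.parseInt;
import static java.lang.String.format;

public class CalculatorHarness {

    // minimal stand-in for the console API type used by the interpreter
    interface ConsoleOutput {
        void write(String text);
    }

    static class CalculatorInterpreter {
        static final String SUM = "sum";
        private final ConsoleOutput consoleOutput;

        CalculatorInterpreter(ConsoleOutput consoleOutput) {
            this.consoleOutput = consoleOutput;
        }

        boolean isRecognized(String... commandLine) {
            return commandLine.length > 0 && SUM.equals(commandLine[0]);
        }

        String execute(String... commandLine) {
            int sum = parseInt(commandLine[1]) + parseInt(commandLine[2]);
            consoleOutput.write(format("The sum of %s and %s is %s%n",
                    commandLine[1], commandLine[2], sum));
            return null;
        }
    }

    /** Runs one command line and returns what the interpreter printed. */
    static String run(String... commandLine) {
        StringBuilder out = new StringBuilder();
        CalculatorInterpreter interpreter = new CalculatorInterpreter(out::append);
        if (interpreter.isRecognized(commandLine)) {
            interpreter.execute(commandLine);
        } else {
            out.append("unknown command");
        }
        return out.toString();
    }
}
```

The same dispatch shape – probe each registered interpreter with isRecognized() and hand the command line to the first match – is what the console shell does for every line of input.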

Adding @atomic operations to Java

Overview

How might atomic operations work in Java, and is there a current alternative in OpenJDK/Hotspot it could translate to?

Feedback

In my previous article on making operations on volatile fields atomic, it was pointed out a few times that "fixing" previous behaviour is unlikely to go ahead regardless of good intentions. An alternative to this is to add an @atomic annotation. This has the advantage of only applying to new code and not risking breaking old code. Note: The use of a lower case name is intentional as it *doesn't* follow current coding conventions.

Atomic operations

Any field annotated with @atomic would make the whole expression atomic. Variables which are non-volatile and non-atomic could be read at the start, or set after the completion of the expression. The expression itself may require locking on some platforms, CAS operations or TSX depending on the CPU technology. If fields are only read, or only one is written to, this would be the same as volatile.

Atomic Boolean

Currently AtomicBoolean uses 4 bytes plus an object header, with possible padding (as well as a reference). If the field was inlined it could look like this:

    @atomic
    boolean flag;

    // toggle the flag.
    this.flag = !this.flag;

But how would it work? Not all platforms support 1-byte atomic operations, e.g. Unsafe doesn't have a 1-byte CAS operation. This can be done with masking.

    // possible replacement.
    while(true) {
        int num = Unsafe.getUnsafe().getVolatileInt(this, FLAG_OFFSET & ~3); // word align the access.
        int value ^= 1 ...
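Lacking a 1-byte CAS, the standard masking trick is to CAS the whole 32-bit word that contains the flag and flip only the bit you care about. The same idea can be demonstrated with the public API instead of Unsafe – this is my own illustration of the technique, not the proposed implementation:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class MaskedAtomicBoolean {

    // the flag lives in bit 0 of a full word we can CAS atomically
    private final AtomicInteger word = new AtomicInteger();

    /** Atomically toggle the flag and return its new value. */
    public boolean toggle() {
        while (true) {
            int current = word.get();
            int updated = current ^ 1; // flip only bit 0, leave the rest of the word alone
            if (word.compareAndSet(current, updated)) {
                return (updated & 1) != 0;
            }
        }
    }

    public boolean get() {
        return (word.get() & 1) != 0;
    }
}
```

If another thread changes any bit of the word between the read and the CAS, the CAS fails and the loop retries, so the toggle remains lock-free and never clobbers neighbouring bits.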
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.