What's New Here?


Add Java 8 support to Eclipse Kepler

Want to add Java 8 support to Kepler? Java 8 has not yet landed in our standard download packages, but you can add it to your existing Eclipse Kepler package. I’ve got three different Eclipse installations running Java 8:

- A brand new Kepler SR2 installation of the Eclipse IDE for Java Developers;
- A slightly used Kepler SR1 installation of the Eclipse for RCP/RAP Developers (with lots of other features already added); and
- A nightly build (dated March 24, 2014) of the Eclipse 4.4 SDK.

The JDT team recommends that you start from Kepler SR2, the second and final service release for Kepler (but using the exact same steps, I’ve installed it into Kepler SR1 and SR2 packages). There are detailed instructions for adding Java 8 support by installing a feature patch in the Eclipsepedia wiki. The short version is this:

1. From Kepler SR2, use the “Help > Install New Software…” menu option to open the “Available Software” dialog;
2. Enter http://download.eclipse.org/eclipse/updates/4.3-P-builds/ into the “Work with” field;
3. Put a checkmark next to “Eclipse Java 8 Support (for Kepler SR2)”;
4. Click “Next”, click “Next”, read and accept the license, and click “Finish”;
5. Watch the pretty progress bar move relatively quickly across the bottom of the window; and
6. Restart Eclipse when prompted.

Voila! Support for Java 8 is installed. If you’ve already got the Java 8 JDK installed and the corresponding JRE is the default on your system, you’re done. 
If you’re not quite ready to make the leap to a Java 8 JRE, there’s still hope (my system is still configured with Java 7 as the default):

1. Install the Java 8 JDK;
2. Open the Eclipse preferences, and navigate to “Java > Installed JREs”;
3. Click “Add…”;
4. Select “Standard VM”, click “Next”;
5. Enter the path to the Java 8 JRE (note that this varies depending on platform, and on how you obtain and install the bits);
6. Click “Finish”.

Before closing the preferences window, you can set your workspace preference to use the newly-installed Java 8 JRE. Or, if you’re just planning to experiment with Java 8 for a while, you can configure this on a project-by-project basis. It’s probably better to do this on the project, as this will become a project setting that will follow the project into your version control system. Next step… learn how wrong my initial impressions of Java 8 were (hint: it’s far better). Reference: Add Java 8 support to Eclipse Kepler from our JCG partner Wayne Beaton at the Eclipse Hints, Tips, and Random Musings blog....
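If you take the project-by-project route, the compiler settings end up in the project’s .settings/org.eclipse.jdt.core.prefs file, which is exactly why they follow the project into version control. A Java 8 project will contain entries along these lines (a sketch; the exact set of keys varies with the JDT version):

```properties
org.eclipse.jdt.core.compiler.compliance=1.8
org.eclipse.jdt.core.compiler.source=1.8
org.eclipse.jdt.core.compiler.codegen.targetPlatform=1.8
```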

10 Top Technology Trends that will shape Business Application Architecture

Business applications tend to be transformed by technologies that disrupt traditional notions of process flexibility, insight, delivery speed, ownership, and support costs. Forrester is a global research and advisory firm that helps the world’s top companies turn the complexity of change into business advantage. According to Forrester research, there are ten key trends, summarized below, which will drive business application transformation and can be used by application delivery leaders, application architects, and enterprise architects to inform their application strategy.

1. Cloud Deployment Models
Software upgrades in traditional applications have become costly and difficult, and customizations and extensions make them even more complicated. Cloud computing, and particularly software-as-a-service (SaaS), can solve such problems. An all-in rental model covering hosting, maintenance, managed services, automatic upgrades and software usage leads to much more predictable costs. As SaaS expands, it will become more flexible and scalable, so more and more industries will enrich their application mix, blending off-premises business applications with on-premises apps according to their objectives.

2. Mobile Technology
Since mobile technology is rapidly evolving, business application vendors use it to increase the usage of their business apps. Business processes now become portable through mobile apps, though vendors need to decouple mobile capabilities from their architecture to keep up with the mobile world’s rapid pace. In the future, more powerful devices such as tablets will enrich the capabilities of mobile business apps, while packaged mobile apps will expand in areas like architecture, banking, healthcare and real estate.

3. Embedded Modelling Tools
Business stakeholders have always depended on IT staff skilled in business application configuration. This has led to long delivery times, limited flexibility, and even limitations in software. Business app vendors now provide configuration tooling that is more flexible, graphical and model-based, thus eliminating coding. In the future, configuration tools will empower business experts to modify app behaviour without the help of IT. Graphical modelling tools include business rules, notifications and embedded BI. Business will own all configuration, and app flexibility will be a key criterion in vendor selection.

4. Application User Experiences
Application usability is enriched with drop-down lists, colours and icons, so that app vendors can meet the high expectations of users. Graphical features, analytics and customer interactions move the focus from data capture to business outcomes. Customer experience data is collected to improve business apps, with help from social networks. But real business value will come from customer interactions, extended business process support and performance improvements.

5. Extensibility Is Improving Via PaaS
Business apps are usually customized with tools offered by vendors, but these tools are usually complex and costly. Enter PaaS: a set of application development tools for building apps in the cloud. With PaaS, business apps will thrive in the cloud. PaaS will add model-based configuration and BPM, and will become more extensible.

6. Componentization
Manufacturers in automotive, aerospace and other mature industries have evolved from making simple service components to building assemblies of components that meet differentiating requirements. Componentization will result in blending elements and creating custom-built and know-your-customer apps. Componentized business functionality will lead firms to use component frameworks with internal and external business app components. Scalable, flexible and easy-to-use business apps will drive competitiveness and change business strategy.

7. Elastic Computing Platforms
An Elastic Application Platform (EAP, as defined by Forrester) is an app platform that automates elasticity of transactions, services and data, with high performance, using elastic resources. EAP will encourage scenarios like big data analytics, which have scalable demands for higher performance, at low cost, and rapid analysis of business information from vast data resources. EAP will enable innovative apps that offer predictive simulation combined with scalability and elasticity. SaaS can adapt to EAPs, allowing thousands of customers to scale on the same application while providing app upgrades and better levels of isolation.

8. Social Collaboration Via Social Tools
Social collaboration can be used by vendors to solve business problems and build better business apps. For example, crowdsourcing has helped banks develop new financial services products. But the use of social media in business processes may lead to an overload of information that is difficult to handle. Clearly, the effective use of social collaboration in business apps will take several years to mature.

9. Big Data and Real-Time Analytics
Today customers know everything about companies, since they can make use of price-comparison websites, aggregator websites and social computing. Since firms in industries like banking and retail need to learn more about their customers too, they can use big data and real-time analytics, for example checking credit scores and tracking customer behaviour across channels. Real-time information will help optimize business processes. Intelligence systems will collect internet information about customers. A 360-degree view of customer and business-wide information will reach all kinds of business partners, delivering a more current and comprehensive view of customer and partner relationships in real time.

10. Standardized Service Semantics
In SOA syntax specifications and standards there has been a gap between the promise of full business agility and the reality of only syntactically standardized business service interfaces. Bridging this gap is crucial for leveraging the business value of architectural elements. Semantic business service specs will result in a quantum leap in extensibility. The convergence of semantic specs with large sets of vendors’ proprietary pre-built business services will speed this trend.

According to the researchers, these ten key technology trends will reshape the way delivery teams design, develop and select business applications. You can read more by downloading the full article from Forrester here.

Understand These Trends To Shape Your Application Strategy
This report enumerates the top 10 technology trends that are reshaping the nature and value proposition of business applications today and in the coming years. Application delivery leaders, application architects, and enterprise architects should use this report to inform their application strategy. The report will help them define their road map for implementing that strategy and avoid basing their long-term strategy on the technologies or architectures of the past, as these are inherently ill-suited to meeting the challenges of the coming decade....

Java 8 Friday: Optional Will Remain an Option in Java

At Data Geekery, we love Java. And as we’re really into jOOQ’s fluent API and query DSL, we’re absolutely thrilled about what Java 8 will bring to our ecosystem.

Java 8 Friday
Every Friday, we’re showing you a couple of nice new tutorial-style Java 8 features, which take advantage of lambda expressions, extension methods, and other great stuff. You’ll find the source code on GitHub.

Optional: A new Option in Java
So far, we’ve been pretty thrilled with all the additions to Java 8. All in all, this is a revolution more than anything before. But there are also one or two sore spots. One of them is how Java will never really get rid of null:

The billion dollar mistake
In a previous blog post, we have explained the merits of NULL handling in the Ceylon language, which has found one of the best solutions to tackle this issue – at least on the JVM, which is doomed to support the null pointer forever. In Ceylon, nullability is a flag that can be added to every type by appending a question mark to the type name. An example:

void hello() {
    String? name = process.arguments.first;
    String greeting;
    if (exists name) {
        greeting = "Hello, ``name``!";
    } else {
        greeting = "Hello, World!";
    }
    print(greeting);
}

That’s pretty slick. Combined with flow-sensitive typing, you will never run into the dreaded NullPointerException again. Other languages have introduced the Option type, most prominently Scala. Java 8 now also introduces the Optional type (as well as the OptionalInt, OptionalLong and OptionalDouble types – more about those later on).

How does Optional work?
The main point behind Optional is to wrap an Object and to provide a convenience API to handle nullability in a fluent manner. This goes well with Java 8 lambda expressions, which allow for lazy execution of operations. 
An example:

Optional<String> stringOrNot = Optional.of("123");

// This String reference will never be null
String alwaysAString = stringOrNot.orElse("");

// This Integer reference will be wrapped again
Optional<Integer> integerOrNot = stringOrNot.map(Integer::parseInt);

// This int reference will never be null
int alwaysAnInt = stringOrNot
    .map(s -> Integer.parseInt(s))
    .orElse(0);

There are certain merits to the above in fluent APIs, specifically in the new Java 8 Streams API, which makes extensive use of Optional. For example:

Arrays.asList(1, 2, 3)
      .stream()
      .findAny()
      .ifPresent(System.out::println);

The above piece of code will print any number from the Stream onto the console, but only if such a number exists.

Old API is not retrofitted
For obvious backwards-compatibility reasons, the “old API” is not retrofitted. In other words, unlike Scala, Java 8 doesn’t use Optional all over the JDK. In fact, the only place where Optional is used is in the Streams API. As you can see in the Javadoc, usage is very scarce: http://docs.oracle.com/javase/8/docs/api/java/util/class-use/Optional.html

This makes Optional a bit difficult to use. We’ve already blogged about this topic before: the absence of an Optional type in an API is no guarantee of non-nullability. This is particularly nasty if you convert Streams into collections and collections into Streams. The Java 8 Optional type is treacherous.

Parametric polymorphism
The worst implication of Optional on its “infected” API is parametric polymorphism, or simply: generics. When you reason about types, you will quickly understand that:

// This is a reference to a simple type:
Number s;

// This is a reference to a collection of
// the above simple type:
Collection<Number> c;

Generics are often used for what is generally accepted as composition. We have a Collection of Number. With Optional, this compositional semantics is slightly abused (both in Scala and Java) to “wrap” a potentially nullable value. 
We now have:

// This is a reference to a nullable simple type:
Optional<Number> s;

// This is a reference to a collection of
// possibly nullable simple types:
Collection<Optional<Number>> c;

So far so good. We can substitute types to get the following:

// This is a reference to a simple type:
T s;

// This is a reference to a collection of
// the above simple type:
Collection<T> c;

But now enter wildcards and use-site variance. We can write:

// No variance can be applied to simple types:
T s;

// Variance can be applied to collections of
// simple types:
Collection<? extends T> source;
Collection<? super T> target;

What do the above types mean in the context of Optional? Intuitively, we would like this to be about things like Optional<? extends Number> or Optional<? super Number>. In the above example we can write:

// Read a T-value from the source
T s = source.iterator().next();

// ... and put it into the target
target.add(s);

But this doesn’t work any longer with Optional:

Collection<Optional<? extends T>> source;
Collection<Optional<? super T>> target;

// Read a value from the source
Optional<? extends T> s = source.iterator().next();

// ... cannot put it into the target
target.add(s); // Nope

… and there is no other way to reason about use-site variance when we have Optional and subtly more complex APIs. If you add generic type erasure to the discussion, things get even worse. We no longer only erase the component type of the above Collection, we also erase the type of virtually any reference. From a runtime / reflection perspective, this is almost like using Object all over the place! Generic type systems are incredibly complex even for simple use-cases. Optional only makes things worse. It is quite hard to blend Optional with traditional collections APIs or other APIs. Compared to the ease of use of Ceylon’s flow-sensitive typing, or even Groovy’s elvis operator, Optional is like a sledgehammer in your face. Be careful when you apply it to your API! 
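One concrete example of that friction: getting from a Collection<Optional<T>> back to a plain Collection<T> takes an explicit filter-and-unwrap step. A minimal sketch (the helper name present is ours, not part of the JDK):

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;

public class OptionalFlatten {

    // Keep only the values that are actually there, unwrapping the Optionals
    static <T> List<T> present(Collection<Optional<T>> source) {
        return source.stream()
                     .filter(Optional::isPresent)
                     .map(Optional::get)
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Optional<Integer>> mixed = Arrays.asList(
                Optional.of(1), Optional.empty(), Optional.of(3));
        System.out.println(present(mixed)); // prints [1, 3]
    }
}
```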
Primitive types
One of the main reasons why Optional is still a very useful addition is the fact that the “object streams” and the “primitive streams” have a “unified API” thanks to the OptionalInt, OptionalLong and OptionalDouble types. In other words, if you’re operating on primitive types, you can just switch the stream construction and reuse the rest of your stream API usage source code in almost the same way. Compare these two chains:

// Stream and Optional
Optional<Integer> anyInteger =
    Arrays.asList(1, 2, 3)
          .stream()
          .filter(i -> i % 2 == 0)
          .findAny();

anyInteger.ifPresent(System.out::println);

// IntStream and OptionalInt
OptionalInt anyInt =
    Arrays.stream(new int[] {1, 2, 3})
          .filter(i -> i % 2 == 0)
          .findAny();

anyInt.ifPresent(System.out::println);

In other words, given the scarce usage of these new types in the JDK API, the dubious usefulness of such a type in general (if retrofitted into a very backwards-compatible environment) and the implications generics erasure has on Optional, we dare say that:

The only reason why this type was really added is to provide a more unified Streams API for both reference and primitive types.

That’s tough. And it makes us wonder if we should finally get rid of primitive types altogether.

Oh, and…
… Optional isn’t Serializable. Nope. Not Serializable. Unlike ArrayList, for instance. For the usual reason:

“Making something in the JDK serializable makes a dramatic increase in our maintenance costs, because it means that the representation is frozen for all time. This constrains our ability to evolve implementations in the future, and the number of cases where we are unable to easily fix a bug or provide an enhancement, which would otherwise be simple, is enormous. So, while it may look like a simple matter of ‘implements Serializable’ to you, it is more than that. The amount of effort consumed by working around an earlier choice to make something serializable is staggering.” 
Citing Brian Goetz, from: http://mail.openjdk.java.net/pipermail/jdk8-dev/2013-September/003276.html

Want to discuss Optional? Read these threads on reddit: /r/java and /r/programming. Stay tuned for more exciting Java 8 stuff published in this blog series. Reference: Java 8 Friday: Optional Will Remain an Option in Java from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....

Every Great Product Owner Needs a Great ScrumMaster

Summary
The product owner and the ScrumMaster are two separate agile roles that complement each other. To do a great job, product owners need a strong ScrumMaster at their side. Unfortunately, I find that there is often a lack of ScrumMasters who can support the product owner. Sometimes there is confusion between the roles, or there is no ScrumMaster at all. This post explains the differences between the two roles, what product owners should expect from their ScrumMaster, and what ScrumMasters are likely to expect from them.

Product Owner vs. ScrumMaster
The product owner and ScrumMaster are two different roles that complement each other. If one is not played properly, the other suffers. As the product owner, you are responsible for the product’s success – for creating a product that does a great job for the users and customers and that meets its business goals. You therefore interact with users and customers as well as the internal stakeholders, the development team and the ScrumMaster, as the following diagram shows. The grey circle in the diagram describes the Scrum team, consisting of the product owner, the ScrumMaster and the cross-functional development team. The ScrumMaster is responsible for the process’s success – for helping the product owner and the team use the right process to create a successful product, and for facilitating organisational change and establishing an agile way of working. Consequently, the ScrumMaster collaborates with the product owner and the development team as well as senior management, human resources (HR), and the business groups affected by Scrum, as the following picture illustrates. Succeeding as a product owner requires the right skill set, time, effort, and focus. So does playing the ScrumMaster role. Combining both roles – even partially – is not only very challenging but means that some duties are neglected. If you are the product owner, then stay clear of the ScrumMaster duties! 
What the Product Owner should Expect from the ScrumMaster
As a product owner, you should benefit from the ScrumMaster’s work in several ways. The ScrumMaster should coach the team so that the team members can build a great product, facilitate organisational change so that the organisation leverages Scrum, and help you do a great job. The following table details the support you should expect from the ScrumMaster:

Service | Details
Team coaching | Help the team collaborate effectively and manage their work successfully so that they can make realistic commitments and create product increments reliably. Encourage the team to work with the product owner on the product backlog. Ensure that the team has a productive work environment.
Organisational change | Work with senior management, HR and other business groups to implement the necessary organisational changes required by Scrum. Educate the stakeholders about what’s new and different in Scrum, explain their role in the agile process, and generate support and buy-in. Resolve role conflicts such as product owner vs. product manager and product owner vs. project manager.
Product owner coaching | Help the product owner choose the right agile product management techniques and tools. Support the product owner in making product decisions and tackle product owner empowerment issues. Help establish agile product management practices in the enterprise.

The ScrumMaster supports you as the product owner so that you can focus on your job – making sure that the right product with the right user experience (UX) and the right features is created. If your ScrumMaster does not or cannot provide this support, then talk to the individual and find out what’s wrong. Don’t jump in and take over the ScrumMaster’s job. If you don’t have a ScrumMaster, show the list above to your senior management sponsor or to your boss to explain why you need a qualified ScrumMaster at your side. 
What the ScrumMaster should Expect from the Product Owner
It takes two to tango, and it’s only fair that your ScrumMaster has expectations about your work as the product owner. The table below describes the ScrumMaster’s expectations in more detail:

Service | Details
Vision and strategy | Provide a vision to the team that describes where the product is heading. Communicate the market, the value proposition and the business goals of the product. Formulate a product or release goal for the near to mid term.
Product details | Proactively work on the product backlog. Update it with new insights and ensure that there are enough ready items. Provide direction and make prioritisation calls. Invite the right people and choose the right techniques to collect feedback and data; for instance, invite selected users to the review meeting and carry out a usability test.
Collaboration | Be available for questions and spend time with the team. Buy into the process and attend the sprint meetings. Manage the stakeholders and make tough decisions; say no to some ideas and requests.

You can find a more comprehensive description of the product owner duties in my post “The Product Owner Responsibilities”. Reference: Every Great Product Owner Needs a Great ScrumMaster from our JCG partner Roman Pichler at the Pichler’s blog....

Fixing The Android Camera API

The other day I participated in a company hackathon and I decided to make use of the Android camera. I’ve always said that the Android APIs are very bad (to put it mildly), but I’ve never actually tried to explicitly say what is wrong and how it could be better. Until now. So, the Camera API is crappy. If you haven’t seen it before, take a look for a minute. You can use it in a lot of wrong ways, you can forget many important things, you don’t easily find out what the problem is, even with StackOverflow, and it just doesn’t feel good. To put it differently – there is something wrong with an API that requires you to read a list of 10 steps, some of which are highlighted as “Important”. How would you write that API? Let’s improve it. And so I did – my EasyCamera wrapper is on GitHub. Below are the changes that I made and the reasons behind them:

- setPreviewDisplay(..) is required before startPreview(), which in turn is required before taking pictures. Why not enforce that? We could simply throw a “Preview display not set” exception, but that’s not good enough. Let’s get rid of the method that sets the preview display and instead make startPreview(..) take the surface as a parameter. Overload it to be able to take a SurfaceTexture as well.
- We’ve enforced setting the preview, so now let’s enforce starting it. We could again throw a “Preview not started” exception from takePicture(..), but that happens at runtime. Let’s instead move the takePicture(..) method out of the Camera class and put it in a CameraActions class, together with the other methods that are only valid after preview is started (I don’t know exactly which ones they are – it is not clear from the current API). How do we obtain the CameraActions instance? It is returned by startPreview(..).

So far we’ve made the main use-case straightforward and less error-prone by enforcing the right sequence of steps. But the takePicture(..) method still feels odd. 
You can supply nulls for all its parameters, but somehow there are two overloaded methods. Arguably, passing null is fine, but there are other options. One is to introduce a PictureCallback interface that has all four methods that can be invoked, and provide a blank implementation of all of them in a BasePictureCallback class. That, however, might not be applicable in this case, because it makes a difference whether you pass null or a callback that does nothing (at least on my phone, if I pass a shutter callback, the shutter sound is played, and if I pass null, it is not). So, let’s introduce Callbacks, a builder-like class to contain all the callbacks that you like. So far so good, but you need to restart the preview after a picture is taken, and restarting it automatically may not be a good default. But in the situation we are in, you only have CameraActions, and calling startPreview(..) now requires a surface to be passed. Should we introduce a restartPreview() method? No. We could make our interface methods return boolean, and if the developer wants to restart the preview, they would just return true. That would be fine if there weren’t 4 different callbacks; calculating the outcome based on all 4 is tricky. That’s why a more sensible option is to add a restartPreview property to the Callbacks class, and restart only after the last callback is invoked and only if the property is set to true. The main flow is improved now, but there are other things to improve. For example, there’s an asymmetry between the methods for opening and closing the camera: “open” and “release”. It should be either “open” and “close” or “acquire” and “release”. I prefer to have a .close(), and then (if you can use Java 7) make use of the AutoCloseable interface, and therefore the try-with-resources construct. When you call getParameters() you get a copy of the parameters. That’s OK, but then you have to set them back, and that’s counter-intuitive at first. 
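Returning to the Callbacks class mentioned above: it can be sketched roughly like this (illustrative names, not the exact EasyCamera API). The camera code invokes only the callbacks the caller registered, and the restartPreview flag is consulted once, after the callbacks have run:

```java
import java.util.function.Consumer;

public class CallbacksDemo {

    /** Builder-like container for just the callbacks the caller wants. */
    public static class Callbacks {
        Runnable shutter;            // null = no shutter sound
        Consumer<byte[]> jpeg;       // null = caller doesn't want the JPEG
        boolean restartPreview;      // restart the preview after callbacks run?

        public Callbacks withShutter(Runnable cb) { shutter = cb; return this; }
        public Callbacks withJpeg(Consumer<byte[]> cb) { jpeg = cb; return this; }
        public Callbacks restartPreview(boolean restart) { restartPreview = restart; return this; }
    }

    /** Simulated takePicture: run registered callbacks, then report whether to restart. */
    public static boolean takePicture(Callbacks cb, byte[] fakeJpeg) {
        if (cb.shutter != null) cb.shutter.run();
        if (cb.jpeg != null) cb.jpeg.accept(fakeJpeg);
        return cb.restartPreview;
    }

    public static void main(String[] args) {
        Callbacks cb = new Callbacks()
                .withJpeg(data -> System.out.println("got " + data.length + " bytes"))
                .restartPreview(true);
        System.out.println("restart preview: " + takePicture(cb, new byte[] {1, 2, 3}));
    }
}
```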
Providing a simple camera.setParameter(..) method would be easier to work with, but the Parameters class has a lot of methods, so it’s not an option to bring them all into the Camera class. Could we make the parameters mutable? (That isn’t implemented yet.) One of the reasons the API is so cumbersome to use is the error reporting: you almost exclusively get “Camera.takePicture failed”, regardless of the reason. With the above steps we eliminated the need for exceptions in some cases, but it would still be good to get better reports on the exact cause. We also need to make the Camera mock-friendly (currently it isn’t). So EasyCamera is an interface, which you can easily mock; CameraActions is a simple mockable class as well. My EasyCamera project is only an initial version now and hasn’t been used in production code, but I’d like to get feedback on whether it’s better than the original and how it can be improved. At some point it could wrap other cumbersome Camera functionality, like getting the front and back cameras, etc. Broken APIs are the reason for hundreds of wasted hours, money and neurons. That’s why, when you design an API, you need some very specific skills and you need to ask yourself many questions. Josh Bloch’s talk on API design is very relevant and I recommend it. I actually violated one piece of his advice – “if in doubt, leave it out” – you have access to the whole Camera in your CameraActions, and that might allow you to do things that don’t work. However, I am not fully aware of all the features of the Camera, and that’s why I didn’t want to limit users and make them invent clumsy workarounds. When I started this post, I didn’t plan to actually write EasyCamera. But it turned out to take 2 hours, so I did it. And I would suggest that developers, whenever confronted with a bad API, do the above exercise – think of how they would’ve written it. 
Then it may turn out to be a lot less effort to actually fix it or wrap it than to continue using it as it is. Reference: Fixing The Android Camera API from our JCG partner Bozhidar Bozhanov at Bozho’s tech blog....

Abstract Class Versus Interface in the JDK 8 Era

In The new Java 8 Date and Time API: An interview with Stephen Colebourne, Stephen Colebourne tells Hartmut Schlosser, “I think the most important language change isn’t lambdas, but static and default methods on interfaces.” Colebourne adds, “The addition of default methods removes many of the reasons to use abstract classes.” As I read this, I realized that Colebourne is correct and that many situations in which I currently use abstract classes could be replaced with interfaces with JDK 8 default methods. This is pretty significant in the Java world, as the difference between abstract classes and interfaces has long been one of the issues that vex new Java developers trying to understand the distinction. In many ways, differentiating between the two is even more difficult in JDK 8. There are numerous online forums and blogs discussing the differences between interfaces and abstract classes in Java. These include, but are not limited to, JavaWorld‘s Abstract classes vs. interfaces, StackOverflow‘s When do I have to use interfaces instead of abstract classes?, Difference Between Interface and Abstract Class, and 10 Abstract Class and Interface Interview Questions Answers in Java. As useful and informative as these once were, many of them are now outdated and may add to the confusion of those new to Java who start their Java experience with JDK 8. As I was thinking about the remaining differences between Java interfaces and abstract classes in a JDK 8 world, I decided to see what the Java Tutorial had to say. The tutorial has been updated to reflect JDK 8, and the Abstract Methods and Classes lesson has a section called “Abstract Classes Compared to Interfaces” that has been updated to incorporate JDK 8. This section points out the similarities and differences of JDK 8 interfaces and abstract classes. 
The differences it highlights concern the accessibility of data members and methods: abstract classes allow non-static and non-final fields and allow methods to be public, private, or protected, while interfaces’ fields are inherently public, static, and final, and all interface methods are inherently public. The Java Tutorial goes on to list bullets for when an abstract class should be considered and for when an interface should be considered. Unsurprisingly, these are derived from the previously mentioned differences and have primarily to do with whether you need fields and methods to be private, protected, non-static, or non-final (favor an abstract class) or whether you need the ability to focus on typing without regard to implementation (favor an interface). Because Java allows a class to implement multiple interfaces but extend only one class, the interface might be considered advantageous when a particular implementation needs to be associated with multiple types. Thanks to JDK 8’s default methods, these interfaces can even provide default behavior for implementations. A natural question might be, “How does Java handle a class that implements two interfaces, both of which describe a default method with the same signature?” The answer is that this is a compilation error. This is shown in the next screen snapshot, which shows NetBeans 8 reporting the error when my class implemented two interfaces that each defined a default method with the same signature [String speak()]. As the screen snapshot indicates, a compiler error is shown that states, “class … inherits unrelated defaults for … from types … and …” (where the class name, default method name, and two interface names are whatever are specified in the message). Peter Verhas has written a detailed post (“Java 8 default methods: what can and can not do?“) looking at some corner cases (gotchas) related to multiply implemented interfaces with default methods with the same signature. 
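The conflict and its standard resolution can be sketched in a few lines. The speak() signature matches the example above; the interface and class names here are ours:

```java
public class Diamond {
    interface Dog   { default String speak() { return "Woof"; } }
    interface Robot { default String speak() { return "Beep"; } }

    // Implementing both interfaces WITHOUT overriding speak() fails to compile:
    // "class RoboDog inherits unrelated defaults for speak() from types Dog and Robot"
    static class RoboDog implements Dog, Robot {
        @Override
        public String speak() {
            // The override resolves the conflict; Interface.super picks a default
            return Dog.super.speak();
        }
    }

    public static void main(String[] args) {
        System.out.println(new RoboDog().speak()); // prints Woof
    }
}
```

Overriding the method in the implementing class (optionally delegating to one of the inherited defaults via Dog.super.speak()) is what turns the compile error into working code.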
Conclusion JDK 8 brings arguably the abstract class’s greatest advantage over the interface to the interface. The implication is that a large number of abstract classes used today can likely be replaced by interfaces with default methods, and a large number of future constructs that would have been abstract classes will instead be interfaces with default methods. Reference: Abstract Class Versus Interface in the JDK 8 Era from our JCG partner Dustin Marx at the Inspired by Actual Events blog....

Secure DevOps – Seems Simple

The DevOps security story is deceptively simple. It’s based on a few fundamental, straightforward ideas and practices: Smaller Releases are Safer One of these ideas is that smaller, incremental, and more frequent releases are safer and cause fewer problems than big bang changes. Makes sense. Smaller releases contain fewer code changes. Less code means less complexity and fewer bugs. And less risk, because smaller releases are easier to understand, easier to plan for, easier to test, easier to review, and easier to roll back if something goes wrong. It is also easier to catch security risks by watching for changes to high-risk areas of code: code that handles sensitive data, security features or other important plumbing, new APIs, error handling. At Etsy, for example, they identify this code in reviews or pen testing or whatever, hash it, and automatically alert the security team when it gets changed, so that they can make sure that the changes are safe. Changing the code more frequently may also make it harder for the bad guys to understand what you are doing and find vulnerabilities in your system – taking advantage of a temporary “Honeymoon Effect” between the time you change the system and the time the bad guys figure out how to exploit its weaknesses. And changing more often forces you to simplify and automate application deployment, to make it repeatable, reliable, simpler, faster, and easier to audit. This is good for change control: you can put more trust in your ability to deploy safely and consistently, and you can trace what changes were made, who made them, and when. 
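The Etsy-style watch on high-risk code described above can be sketched roughly as follows (the file contents, names, and alert message here are invented; a real setup would hash actual files in the repository and notify the security team through its own channels):

```java
import java.security.MessageDigest;

public class HighRiskCodeWatch {

    // Hex-encoded SHA-256 of some bytes (here, a stand-in for the
    // contents of a security-sensitive source file).
    static String sha256(byte[] data) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(data);
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // Hash recorded when the security team last reviewed the file...
        String reviewed = sha256("original auth code".getBytes("UTF-8"));
        // ...compared against the file's current contents on each change.
        String current = sha256("modified auth code".getBytes("UTF-8"));
        System.out.println(reviewed.equals(current)
                ? "unchanged"
                : "ALERT: high-risk code changed, review needed");
    }
}
```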
And you can deploy application patches quickly if you find a problem. “…being able to deploy quick is our #1 security feature” – Effective Approaches to Web Application Security, Zane Lackey Standardized Ops Environment through Infrastructure as Code DevOps treats “Infrastructure as Code”: infrastructure configurations are defined in code that is written and managed in the same way as application code, and deployed using automated tools like Puppet or Chef instead of by hand. This means that you always know how your infrastructure is set up and that it is set up consistently (no more Configuration Drift). You can prove what changes were made, who made them, and when. You can deploy infrastructure changes and patches quickly if you find a problem. You can test your configuration changes in advance, using the same kinds of automated unit test and integration test suites that Agile developers rely on – including tests for security. And you can easily set up test environments that match (or come closer to matching) production, which means you can do a more thorough and accurate job of all of your testing. Automated Continuous Security Testing DevOps builds on Agile development practices like automated unit/integration testing in Continuous Integration, adding higher-level automated system testing in Continuous Delivery/Continuous Deployment. You can do automated security testing using something like Gauntlt to “be mean to your code” by running canned attacks on the system in a controlled way. Other ways of injecting security into DevOps include: Providing developers with immediate feedback on security issues through self-service static analysis: running static analysis scans on every check-in, or directly in their IDEs as they are writing code. Helping developers to write automated security unit tests and integration tests and adding them to the continuous testing pipelines. 
Automating checks on Open Source and other third-party software dependencies as part of the build or Continuous Integration, using something like OWASP’s Dependency Check to highlight dependencies that have known vulnerabilities. Fast feedback loops using automated testing mean you can catch more security problems – and fix them – earlier. Operations Checks and Feedback DevOps extends the idea of feedback loops to developers from testing all the way into production, allowing (and encouraging) developers visibility into production metrics, and getting developers, ops, and security to all monitor the system for anomalies in order to catch performance, reliability, and security problems. It means adding automated asserts and health checks to deployment (and before start/restart) in production to make sure key operational dependencies are met, including security checks: that the configuration is correct, ports that should be closed are closed, ports that should be opened are opened, permissions are correct, SSL is set up properly… Or even killing system processes that don’t conform (or sometimes just to make sure that they fail over properly, like they do at Netflix). People Talking to Each Other and Working Together to Solve Problems And finally, DevOps is about people talking together and solving problems together. Not just developers talking to the business/customers. Developers talking to ops, ops talking to developers, and everybody talking to security. Sharing ideas, sharing tools and practices. Bringing ops and security into the loop early. Dev, ops, and security working together on planning and incident response, and learning together in Root Cause Analysis sessions and other reviews. Building teams across silos. Building trust. Making SecDevOps Work There are good reasons to be excited by what these people are doing and the path they are going down. It promises a new, more effective way for developers, security, and ops to work together. 
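The deployment-time asserts described under “Operations Checks and Feedback” can be sketched minimally, for example by checking that ports are in the expected state (host and port numbers here are placeholders, not a real deployment check):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class DeployHealthCheck {

    // True if something is listening on host:port within the timeout.
    static boolean portOpen(String host, int port) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 500);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Placeholder assert: an internal-only port must NOT be reachable.
        // A real post-deploy check would also assert the app port IS open,
        // permissions are correct, SSL is configured, and so on.
        boolean adminExposed = portOpen("localhost", 1);
        System.out.println(adminExposed
                ? "FAIL: admin port exposed"
                : "OK: admin port closed");
    }
}
```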
But there are some caveats. Secure DevOps requires strong engineering discipline and skills. DevOps engineering skills are still in short supply. And so are information security (and especially appsec) skills. People who are good at both DevOps and appsec are a small subset of these small subsets of the talent available. Outside of configuration management and monitoring, the tooling is limited – you’ll probably have to write a lot of what you need yourself (which leads quickly back to the skills problem). A lot more work needs to be done to make this apply to regulated environments, with enforced separation of duties and where regulators think of Agile as “the A Word” (so you can imagine what they think of developers pushing out changes to production in Continuous Deployment, even if they are using automated tools to do it). A small number of people are exploring these problems in a Google discussion group on DevOps for managers and auditors in regulated industries, but so far there are more people asking questions than offering answers. And getting dev, ops, and security working together and collaborating across their silos might take an extreme makeover of your organization’s structure and culture. Secure DevOps practices and ideas aren’t enough by themselves to make a system secure. You still need all of the fundamentals in place. Even if they are releasing software incrementally and running lots of automated tests, developers still need to understand software security, design security in, and follow good software engineering practices. Whether they are using “Infrastructure as Code” or not, ops still has to design and engineer the data center, the network, and the rest of the infrastructure to be safe and reliable, and run things in a secure and responsible way. And security still needs to train everyone and follow up on what they are doing, running their scans and pen tests and audits to make sure that all of this is being done right. 
Secure DevOps is not as simple as it looks. It needs disciplined secure development and secure ops fundamentals, and good tools and rare skills and a high level of organizational agility and a culture of trust and collaboration. Which is why only a small number of organizations are doing this today. It’s not a short term answer for most organizations. But it does show a way for ops and security to keep up with the high speed of Agile development, and to become more agile, and hopefully more effective, themselves.Reference: Secure DevOps – Seems Simple from our JCG partner Jim Bird at the Building Real Software blog....

Base64 in Java 8 – It’s Not Too Late To Join In The Fun

Finally, Java 8 is out. Finally, there’s a standard way to do Base64 encoding. For too long we have been relying on Apache Commons Codec (which is great anyway). Memory-conscious coders will desperately use sun.misc.BASE64Encoder and sun.misc.BASE64Decoder just to avoid adding extra JAR files to their programs, provided they are super sure of using only the Sun/Oracle JDK. These classes are still lurking around in Java 8. To try things out, I’ve furnished a JUnit test to show how to use the following APIs to encode: Commons Codec’s org.apache.commons.codec.binary.Base64; Java 8’s new java.util.Base64; and the sort-of evergreen internal code of Sun/Oracle’s JDK, sun.misc.BASE64Encoder.

package org.gizmo.util;

import java.util.Random;

import org.apache.commons.codec.binary.Base64;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;
import static org.junit.Assert.assertArrayEquals;

import sun.misc.BASE64Encoder;

public class Base64Tests {

    private static byte[] randomBinaryData = new byte[5000000];
    private static long durationCommons = 0;
    private static long durationJava8 = 0;
    private static long durationSun = 0;

    private static byte[] encodedCommons;
    private static byte[] encodedJava8;
    private static String encodedSun;

    @BeforeClass
    public static void setUp() throws Exception {
        // We want to test the APIs against the same data
        new Random().nextBytes(randomBinaryData);
    }

    @Test
    public void testSunBase64Encode() throws Exception {
        BASE64Encoder encoder = new BASE64Encoder();
        long before = System.currentTimeMillis();
        encodedSun = encoder.encode(randomBinaryData);
        long after = System.currentTimeMillis();
        durationSun = after - before;
        System.out.println("Sun: " + durationSun);
    }

    @Test
    public void testJava8Base64Encode() throws Exception {
        long before = System.currentTimeMillis();
        java.util.Base64.Encoder encoder = java.util.Base64.getEncoder();
        encodedJava8 = encoder.encode(randomBinaryData);
        long after = System.currentTimeMillis();
        durationJava8 = after - before;
        System.out.println("Java8: " + durationJava8);
    }

    @Test
    public void testCommonsBase64Encode() throws Exception {
        long before = System.currentTimeMillis();
        encodedCommons = Base64.encodeBase64(randomBinaryData);
        long after = System.currentTimeMillis();
        durationCommons = after - before;
        System.out.println("Commons: " + durationCommons);
    }

    @AfterClass
    public static void report() throws Exception {
        // Sanity check
        assertArrayEquals(encodedCommons, encodedJava8);
        System.out.println(durationCommons * 1.0 / durationJava8);
    }
}

What about the performance of these 3 ways? Base64 seems to be a small enough method that there are fewer ways to screw it up, but you never know what lies beneath the surface. From general timing (in the JUnit tests), it seems that the 3 methods can be arranged like this, from fastest to slowest: Java 8, Commons, Sun. A sample of the timing (encoding a byte array of size 5,000,000):

Sun: 521
Commons: 160
Java8: 37

Java 8’s method ran 4x faster than Commons, and 14x faster than Sun. But this sample is just simplistic. Do benchmark for yourselves to come to your own conclusions. So, which API should you use? As any expert will tell you… it depends. If you have enough power to dictate that your code should only run on Java 8 and above, then by all means use the new java.util.Base64. If you need to support multiple JDK versions and vendors, you can stick with Commons Codec or some other 3rd-party API. Or wait until the older Javas are out of circulation and rewrite your precious codebase. Or move on to another programming language. Note: I did not even mention using sun.misc.BASE64Encoder. Avoid it when possible. Perhaps one day this class will be removed in another (alos) version of the JDK… it isn’t present in other (heteros) JDKs by other vendors. 
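For completeness (the benchmark above only encodes), here is the basic java.util.Base64 round trip: encodeToString for a printable result, and the matching decoder to get the original bytes back.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64RoundTrip {
    public static void main(String[] args) {
        byte[] input = "Hello, Base64".getBytes(StandardCharsets.UTF_8);

        // Encode straight to a String (encode(byte[]) returns byte[] instead)
        String encoded = Base64.getEncoder().encodeToString(input);

        // Decode back to the original bytes
        byte[] decoded = Base64.getDecoder().decode(encoded);

        System.out.println(encoded);
        System.out.println(new String(decoded, StandardCharsets.UTF_8)); // prints "Hello, Base64"
    }
}
```

java.util.Base64 also offers getUrlEncoder() and getMimeEncoder() variants for URL-safe and MIME-wrapped output, which Commons Codec users will recognize.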
Resources:
http://www.oracle.com/technetwork/java/javase/8-whats-new-2157071.html
http://stackoverflow.com/questions/13109588/base64-encoding-in-java/22704819#22704819
http://commons.apache.org/proper/commons-codec/apidocs/org/apache/commons/codec/binary/Base64.html
Reference: Base64 in Java 8 – It’s Not Too Late To Join In The Fun from our JCG partner Allen Chee at the YK’s Workshop blog....

Introduction to Nashorn

Java 8 introduced a new JavaScript engine named “Nashorn”. Nashorn is based on the Da Vinci Machine, a project with the aim of adding dynamic language support to the JVM. Nashorn is a nice milestone that makes building hybrid software easier than before. The features of this engine enable full-duplex communication between your Java code (or other compiled JVM languages) and JavaScript. The simplest way to use Nashorn is the command-line tool jjs, which is bundled with JDK 8 and OpenJDK 8; you can find it in “/bin”. Executing jjs brings up a prompt where you can work with Nashorn interactively; you can also pass js files as arguments to jjs. You can find a basic example of using jjs below. Consider the following simple.js file:

var name = "Nashorn";
print(name);

Now by calling jjs simple.js, the text “Nashorn” will be printed on your screen. I think jjs is enough for an introduction; if you need more information you can type jjs -help. You can also use the Nashorn script engine in your Java code. Consider the following Program.java file:

public class Program {
    public static void main(String... args) throws ScriptException {
        ScriptEngineManager factory = new ScriptEngineManager();
        ScriptEngine nashornEngine = factory.getEngineByName("nashorn");
        nashornEngine.eval("print('hello world');");
    }
}

With this simple code, a very nice hello world will be shown on your screen. You can also evaluate js files in your script engine: the ScriptEngine interface has an eval overload that takes a Reader, so you can simply pass any object that is an instance of the Reader class. Consider the following code. script1.js content:

var version = 1;
function hello(name) {
    return "hello " + name;
}

Program.java content:

public class Program {
    public static void main(String... args) throws ScriptException, NoSuchMethodException {
        ScriptEngineManager factory = new ScriptEngineManager();
        ScriptEngine nashornEngine = factory.getEngineByName("nashorn");
        nashornEngine.eval(new InputStreamReader(Program.class.getResourceAsStream("script1.js")));
        System.out.println(nashornEngine.get("version"));
        Invocable invocable = (Invocable) nashornEngine;
        Object result = invocable.invokeFunction("hello", "soroosh");
        System.out.println(result);
    }
}

The ScriptEngine interface has a get method. As the sample shows, you can call it to retrieve any variables or state defined in your ScriptEngine; in the example above, “version” is a variable declared in the script1.js file. Every script engine has its own implementation of the ScriptEngine interface, and there are some optional interfaces which script engines can implement to extend their functionality. If you check the source code of NashornScriptEngine, the class signature is:

public final class NashornScriptEngine extends javax.script.AbstractScriptEngine implements javax.script.Compilable, javax.script.Invocable

So the Nashorn script engine lets you use these two interfaces too. In the example above, we used the Invocable interface to call functions declared in our script engine. Note: a ScriptEngine is stateful, so if you call functions or eval code on your script engine, the state of objects and variables can affect later results. Conclusion: In this post I tried to introduce Nashorn in a very basic and practical way. In future posts I will demonstrate Java + Nashorn interoperability in more depth and its real-world usages. Reference: Introduction to Nashorn from our JCG partner Soroosh Sarabadani at the Just another Java blog blog....
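To round out the full-duplex communication described in the article above: get reads script state from Java, and the engine’s put method works in the other direction, exposing a Java value to the script under a global name (the variable name below is made up for illustration).

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class NashornBindings {
    public static void main(String[] args) throws ScriptException {
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");
        if (engine == null) {
            // Nashorn was removed in JDK 15; nothing to demo on newer JDKs.
            System.out.println("hello Duke");
            return;
        }
        // put() is the mirror of get(): it exposes a Java value to the script.
        engine.put("user", "Duke");
        Object result = engine.eval("'hello ' + user");
        System.out.println(result); // prints "hello Duke"
    }
}
```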

db.person.find( { “role” : “DBA” } )

Wow! It has been a while since I posted something on my blog. I have been very busy, moving to MongoDB, learning, learning, learning… finally I can breathe a little and answer some questions. Last week I helped my colleague Norberto deliver a MongoDB Essentials Training in Paris. This was a very nice experience, and I am impatient to deliver it on my own. I was happy to see that the audience was well balanced between developers and operations, mostly DBAs.       What! I still need a DBA? This is a good opportunity to raise a point, or correct a wrong idea: the fact that you are using MongoDB, or any other NoSQL datastore, does not mean that you do not need a DBA… As on any project, an administrator is not mandatory, but it is better to have one. So even when MongoDB is pushed by the development team, it is very important to understand how the database works, and how to administer and monitor it. If you are lucky enough to have real operations teams, with good system and database administrators, use them! They are very important for your application. Most DBAs/system administrators have been maintaining systems in production for many years. They know how to keep your application up and running. They have also, most of the time, experienced many “disasters” and then recovered (I hope). Who knows, you may encounter big issues with your application, and you will be happy to have them on your side at that moment. “Great, but the DBA is slowing down my development!” I hear this sometimes, and I had this feeling in the past too, as a developer in a large organization. Is it true? Developers and DBAs today are not living in the same world: Developers want to integrate new technologies as soon as possible, not only because it is fun and they can brag about it during meetups/conferences, but because these technologies, most of the time, make them more productive and offer a better service/experience to the consumer. DBAs are here to keep the applications up and running! 
So every time they do not feel confident about a technology, they will push back. I think this is natural, and I would probably be the same in their position. Like all geeks, they would love to adopt new technologies, but they need to understand and trust them first. System administrators and DBAs look at technology from a different angle than developers. Based on this assumption, it is important to bring the operations team in as early as possible when the development team wants to integrate MongoDB or any new data store. Having the operations team in the loop early will ease the global adoption of MongoDB in the company. Personally, and this will show my age, I have seen a big change in the way developers and DBAs work together. Back in the 90’s, when the main architecture was client/server, developers and DBAs were working pretty well together, probably because they were speaking the same language: SQL was everywhere, and I had regular meetings with them. Then, since the mid-2000s, most applications have moved to a web-based architecture, with for example Java middleware, and developers stopped working with DBAs. Probably because the data abstraction layer provided by the ORM exposed the database as a “commodity” service that is supposed to just work: “Hey Mr DBA, my application has been written with the best middleware technology on the market, so now deal with the performance and scalability! I am done!” Yes, it is a cliché, but I am sure that some of you will recognize it. Nevertheless, each time I can, I have been pushing developers to talk more to administrators and look more closely at their database! A new era for operations and development teams The fast adoption of MongoDB by developers is a great opportunity to fix what we broke 10 years ago in large information systems: let’s talk again! MongoDB has been built first for developers. The document-oriented approach gives a lot of flexibility to quickly adapt to change. 
So any time your business users need a new feature you can implement it, even if this change impacts the data structure. Your data model is now driven and controlled by the application, not the database engine. However, the applications still need to be available 24×7 and perform well. These topics are managed – and shared – by administrators and developers! This has always been the case but, as I described earlier, it looks like some of us have forgotten that. Schema design and change velocity are driven by the application, so by the business and development teams, but all this impacts the database, for example: How will storage grow? Which indexes must be created to speed up my application? How do I organize my cluster to leverage the infrastructure properly: replica-set organization (and related write concerns, managed by the developer)? Sharding options? And the most important of them all: backup/recovery strategies. So many things could be managed by the project team, but if you have an operations team with you, it is better to do that as a single team. You, the developer, are convinced that MongoDB is the best database for your projects! Now it is time to work with the ops team and convince them too. So you should for sure explain why MongoDB is good for you as a developer, but you should also highlight all the benefits for operations, starting with built-in high availability with replica sets, and easy scalability with sharding. MongoDB is also here to make the life of the administrator easier! I have shared in the next paragraph a list of resources that are interesting for operations people. Let’s repeat it one more time: try to involve the operations team as soon as possible, and use that as an opportunity to build/rebuild the relationship between developers and system administrators! 
Resources You can find many good resources on the MongoDB site to help operations teams or to learn about this: Documentation: Operations; Online Training: M102: MongoDB for DBAs and M202: MongoDB Advanced Deployment and Operations; and many others such as White Papers and Webinars. Reference: db.person.find( { “role” : “DBA” } ) from our JCG partner Tugdual Grall at the Tug’s Blog blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.
