This is the Final Discussion!

Pun intended… Let's discuss Java final. Recently, our popular blog post "10 Subtle Best Practices when Coding Java" had a significant revival and a new set of comments as it was summarised and linked from JavaWorld. In particular, the JavaWorld editors challenged our opinion about the Java keyword "final":

More controversially, Eder takes on the question of whether it's ever safe to make methods final by default: "If you're in full control of all source code, there's absolutely nothing wrong with making methods final by default, because:" "If you do need to override a method (do you really?), you can still remove the final keyword" "You will never accidentally override any method anymore"

Yes, indeed. All classes, methods, fields and local variables should be final by default and mutable via keyword. Here are fields and local variables:

int finalInt = 1;
val int finalInt = 2;
var int mutableInt = 3;

Whether the Scala/C#-style val keyword is really necessary is debatable. But clearly, in order to modify a field / variable ever again, we should have a keyword explicitly allowing for it. The same for methods – and I'm using Java 8's default keyword for improved consistency and regularity:

class FinalClass {
    void finalMethod() {}
}

default class ExtendableClass {
    void finalMethod() {}
    default void overridableMethod() {}
}

That would be the perfect world in our opinion, but Java goes the other way round, making default (overridable, mutable) the default and final (non-overridable, immutable) the explicit option. Fair enough, we'll live with that… and as API designers (from the jOOQ API, of course), we'll just happily put final all over the place to at least pretend that Java had the more sensible defaults mentioned above. But many people disagree with this assessment, mostly for the same reason:

As someone who works mostly in osgi environments, I could not agree more, but can you guarantee that another api designer felt the same way?
I think it's better to preempt the mistakes of api designers rather than preempt the mistakes of users by putting limits on what they can extend by default. – eliasv on reddit

Or…

Strongly disagree. I would much rather ban final and private from public libraries. Such a pain when I really need to extend something and it cannot be done. Intentionally locking the code can mean two things, it either sucks, or it is perfect. But if it is perfect, then nobody needs to extend it, so why do you care about that. Of course there exists valid reasons to use final, but fear of breaking someone with a new version of a library is not one of them. – meotau on reddit

Or also…

I know we've had a very useful conversation about this already, but just to remind other folks on this thread: much of the debate around 'final' depends on the context: is this a public API, or is this internal code? In the former context, I agree there are some good arguments for final. In the latter case, final is almost always a BAD idea. – Charles Roth on our blog

All of these arguments tend to go into one direction: "We're working on crappy code so we need at least some workaround to ease the pain." But why not think about it this way:

The API designers that all of the above people have in mind will create precisely that horrible API that you'd like to patch through extension. Coincidentally, the same API designer will not reflect on the usefulness and communicativeness of the keyword final, and thus will never use it, unless required by the Java language. Win-win (albeit crappy API, shaky workarounds and patches).

The API designers that want to use final for their API will reflect a lot on how to properly design APIs (and well-defined extension points / SPIs), such that you will never worry about something being final. Again, win-win (and an awesome API).
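To illustrate that second kind of API design, here is a minimal sketch (the class names are hypothetical, not taken from the jOOQ API) of a final class that exposes one deliberate extension point instead of overridable methods:

```java
// The Formatter SPI is the deliberate extension point; the Exporter
// class itself is final, so its behaviour cannot be broken by subclassing.
interface Formatter {
    String format(String value);
}

final class Exporter {

    private final Formatter formatter;

    Exporter(Formatter formatter) {
        this.formatter = formatter;
    }

    // Cannot be overridden, because the class is final. Clients customise
    // behaviour only through the Formatter they pass in.
    String export(String value) {
        return "<cell>" + formatter.format(value) + "</cell>";
    }
}

public class FinalApiDemo {
    public static void main(String[] args) {
        // Extending behaviour without subclassing: plug in a Formatter.
        Exporter exporter = new Exporter(String::toUpperCase);
        System.out.println(exporter.export("final"));
    }
}
```

Anyone can change what export() produces by supplying a different Formatter, yet nobody can accidentally override and break the exporting logic itself.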
Plus, in the latter case, the odd hacker will be kept from hacking and breaking your API in a way that will only lead to pain and suffering, but that's not really a loss.

Final interface methods

For the aforementioned reasons, I still deeply regret that final is not possible in Java 8 interfaces. Brian Goetz has given an excellent explanation why this has been decided upon like that. In fact, the usual explanation. The one about this not being the main design goal for the change! But think about the consistency, the regularity of the language if we had:

default interface ImplementableInterface {
    void abstractMethod();
    void finalMethod() {}
    default void overridableMethod() {}
}

(Ducks and runs…) Or, more realistically with our status quo of defaulting to default:

interface ImplementableInterface {
    void abstractMethod();
    final void finalMethod() {}
    void overridableMethod() {}
}

Finally

So again, what are your (final) thoughts on this discussion? If you haven't heard enough, consider also reading this excellent post by Dr. David Pearce, author of the Whiley programming language.

Reference: This is the Final Discussion! from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....

Use Cases for Elasticsearch: Index and Search Log Files

In the last posts we have seen some of the properties of using Elasticsearch as a document store, for searching text content and for geospatial search. In this post we will look at how it can be used to index and store log files, a very useful application that can help developers and operations in maintaining applications.

Logging

When maintaining larger applications that are either distributed across several nodes or consist of several smaller applications, searching for events in log files can become tedious. You might already have been in the situation that you have to find an error and need to log in to several machines and look at several log files. Using Linux tools like grep can be fun sometimes, but there are more convenient ways. Elasticsearch and the projects Logstash and Kibana, commonly known as the ELK stack, can help you with this.

With the ELK stack you can centralize your logs by indexing them in Elasticsearch. This way you can use Kibana to look at all the data without having to log in on the machine. This can also make Operations happy, as they don't have to grant access to every developer who needs to have access to the logs. As there is one central place for all the logs, you can even see different applications in context. For example, you can see the logs of your Apache webserver combined with the log files of your application server, e.g. Tomcat. As search is core to what Elasticsearch is doing, you should be able to find what you are looking for even more quickly.

Finally, Kibana can also help you with becoming more proactive. As all the information is available in real time, you also have a visual representation of what is happening in your system in real time. This can help you in finding problems more quickly, e.g. you can see that some resource starts throwing Exceptions without having your customers report it to you.

The ELK Stack

For logfile analytics you can use all three applications of the ELK stack: Elasticsearch, Logstash and Kibana.
Logstash is used to read and enrich the information from log files. Elasticsearch is used to store all the data and Kibana is the frontend that provides dashboards to look at the data. The logs are fed into Elasticsearch using Logstash, which combines the different sources. Kibana is used to look at the data in Elasticsearch.

This setup has the advantage that different parts of the log file processing system can be scaled differently. If you need more storage for the data you can add more nodes to the Elasticsearch cluster. If you need more processing power for the log files you can add more nodes for Logstash.

Logstash

Logstash is a JRuby application that can read input from several sources, modify it and push it to a multitude of outputs. For running Logstash you need to pass it a configuration file that determines where the data is and what should be done with it. The configuration normally consists of an input and an output section and an optional filter section. This example takes the Apache access logs, does some predefined processing and stores them in Elasticsearch:

input {
  file {
    path => "/var/log/apache2/access.log"
  }
}

filter {
  grok {
    match => { message => "%{COMBINEDAPACHELOG}" }
  }
}

output {
  elasticsearch_http {
    host => "localhost"
  }
}

The file input reads the log files from the path that is supplied. In the filter section we have defined the grok filter that parses unstructured data and structures it. It comes with lots of predefined patterns for different systems. In this case we are using the complete Apache log pattern, but there are also more basic building blocks like patterns for parsing email and IP addresses and dates (which can be lots of fun with all the different formats). In the output section we are telling Logstash to push the data to Elasticsearch using HTTP. We are using a server on localhost; for most real world setups this would be a cluster on separate machines.

Kibana

Now that we have the data in Elasticsearch we want to look at it.
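Before turning to Kibana, a short aside on the grok filter above: grok patterns are, in essence, named regular expressions. The following self-contained sketch (plain Java, not Logstash code, and the pattern is a much-simplified stand-in for the real COMBINEDAPACHELOG pattern) shows the kind of structure grok extracts from an unstructured Apache log line:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ApacheLogParser {

    // A much-simplified stand-in for grok's COMBINEDAPACHELOG pattern:
    // client IP, timestamp, HTTP verb, request path, status code, bytes.
    static final Pattern COMBINED = Pattern.compile(
        "^(\\S+) \\S+ \\S+ \\[([^\\]]+)\\] \"(\\S+) (\\S+) [^\"]*\" (\\d{3}) (\\S+)");

    // Returns a structured summary of the line, or null if it does not match.
    static String parse(String line) {
        Matcher m = COMBINED.matcher(line);
        if (!m.find()) {
            return null;
        }
        return "client=" + m.group(1) + " verb=" + m.group(3)
            + " request=" + m.group(4) + " response=" + m.group(5);
    }

    public static void main(String[] args) {
        String line = "127.0.0.1 - - [19/Sep/2014:08:00:00 +0200] "
            + "\"GET /index.html HTTP/1.1\" 200 2326";
        System.out.println(parse(line));
    }
}
```

The real COMBINEDAPACHELOG pattern extracts many more fields (referrer, user agent, auth user, and so on), and it is exactly this field extraction that makes the indexed events searchable and aggregatable in Elasticsearch.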
Kibana is a JavaScript application that can be used to build dashboards. It accesses Elasticsearch from the browser, so whoever uses Kibana needs to have access to Elasticsearch. When using it with Logstash you can open a predefined dashboard that will pull some information from your index. You can then display charts, maps and tables for the data you have indexed. This screenshot displays a histogram and a table of log events, but there are more widgets available like maps and pie and bar charts. As you can see, you can extract a lot of data visually that would otherwise be buried in several log files.

Conclusion

The ELK stack can be a great tool to read, modify and store log events. Dashboards help with visualizing what is happening. There are lots of inputs in Logstash and the grok filter supplies lots of different formats. Using those tools you can consolidate and centralize all your log files. Lots of people are using the stack for analyzing their log file data. One of the articles that is available is by Mailgun, who are using it to store billions of events. And if that's not enough, read this post on how CERN uses the ELK stack to help running the Large Hadron Collider. In the next post we will look at the final use case for Elasticsearch: Analytics.

Reference: Use Cases for Elasticsearch: Index and Search Log Files from our JCG partner Florian Hopf at the Dev Time blog....

Gradle Goodness: Adding Dependencies Only for Packaging to War

My colleague, Tom Wetjens, wrote a blog post Package-only dependencies in Maven. He showed a Maven solution for when we want to include dependencies in the WAR file which are not used in any other scopes. In this blog post we will see how we solve this in Gradle.

Suppose we use the SLF4J Logging API in our project. We use the API as a compile dependency, because our code uses this API. But in our test runtime we want to use the SLF4J Simple implementation of this API. And in our WAR file we want to include the Logback implementation of the API. The Logback dependency only needs to be included in the WAR file and shouldn't exist in any other dependency configuration.

We first add the War plugin to our project. The war task uses the runtime dependency configuration to determine which files are added to the WEB-INF/lib directory in our WAR file. We add a new dependency configuration warLib that extends the runtime configuration in our project.

apply plugin: 'war'

repositories.jcenter()

configurations {
    // Create new dependency configuration
    // for dependencies to be added in
    // WAR file.
    warLib.extendsFrom runtime
}

dependencies {
    // API dependency for Slf4j.
    compile 'org.slf4j:slf4j-api:1.7.7'

    testCompile 'junit:junit:4.11'

    // Slf4j implementation used for tests.
    testRuntime 'org.slf4j:slf4j-simple:1.7.7'

    // Slf4j implementation to be packaged
    // in WAR file.
    warLib 'ch.qos.logback:logback-classic:1.1.2'
}

war {
    // Add warLib dependency configuration
    classpath configurations.warLib

    // We remove all duplicate files
    // with this assignment.
    // getFiles() method returns a unique
    // set of File objects, removing
    // any duplicates from configurations
    // added by classpath() method.
    classpath = classpath.files
}

We can now run the build task and we get a WAR file with the following contents:

$ gradle build
:compileJava UP-TO-DATE
:processResources UP-TO-DATE
:classes UP-TO-DATE
:war
:assemble
:compileTestJava
:processTestResources UP-TO-DATE
:testClasses
:test
:check
:build

BUILD SUCCESSFUL

Total time: 6.18 secs

$ jar tvf build/libs/package-only-dep-example.war
     0 Fri Sep 19 05:59:54 CEST 2014 META-INF/
    25 Fri Sep 19 05:59:54 CEST 2014 META-INF/MANIFEST.MF
     0 Fri Sep 19 05:59:54 CEST 2014 WEB-INF/
     0 Fri Sep 19 05:59:54 CEST 2014 WEB-INF/lib/
 29257 Thu Sep 18 14:36:24 CEST 2014 WEB-INF/lib/slf4j-api-1.7.7.jar
270750 Thu Sep 18 14:36:24 CEST 2014 WEB-INF/lib/logback-classic-1.1.2.jar
427729 Thu Sep 18 14:36:26 CEST 2014 WEB-INF/lib/logback-core-1.1.2.jar
   115 Wed Sep 03 09:24:40 CEST 2014 WEB-INF/web.xml

Also when we run the dependencies task we can see how the implementations of the SLF4J API relate to the dependency configurations:

$ gradle dependencies
:dependencies

------------------------------------------------------------
Root project
------------------------------------------------------------

archives - Configuration for archive artifacts.
No dependencies

compile - Compile classpath for source set 'main'.
\--- org.slf4j:slf4j-api:1.7.7

default - Configuration for default artifacts.
\--- org.slf4j:slf4j-api:1.7.7

providedCompile - Additional compile classpath for libraries that should not be part of the WAR archive.
No dependencies

providedRuntime - Additional runtime classpath for libraries that should not be part of the WAR archive.
No dependencies

runtime - Runtime classpath for source set 'main'.
\--- org.slf4j:slf4j-api:1.7.7

testCompile - Compile classpath for source set 'test'.
+--- org.slf4j:slf4j-api:1.7.7
\--- junit:junit:4.11
     \--- org.hamcrest:hamcrest-core:1.3

testRuntime - Runtime classpath for source set 'test'.
+--- org.slf4j:slf4j-api:1.7.7
+--- junit:junit:4.11
|    \--- org.hamcrest:hamcrest-core:1.3
\--- org.slf4j:slf4j-simple:1.7.7
     \--- org.slf4j:slf4j-api:1.7.7

warLib
+--- org.slf4j:slf4j-api:1.7.7
\--- ch.qos.logback:logback-classic:1.1.2
     +--- ch.qos.logback:logback-core:1.1.2
     \--- org.slf4j:slf4j-api:1.7.6 -> 1.7.7

(*) - dependencies omitted (listed previously)

BUILD SUCCESSFUL

Total time: 6.274 secs

Code written with Gradle 2.1.

Reference: Gradle Goodness: Adding Dependencies Only for Packaging to War from our JCG partner Hubert Ikkink at the JDriven blog....

Five Reasons Why High Performance Computing (HPC) startups will explode in 2015

1. The size of the social networks grew beyond any rational expectations

Facebook (FB) official stats state that FB has 1.32 billion monthly active users and 1.07 billion mobile monthly active users. Approximately 81.7% are outside the US and Canada. FB manages a combined 2.4 billion users, including mobile, with 7,185 employees. The world population is estimated by the United Nations, as of 1 July 2014, at 7.243 billion. Therefore 33% of the world population is on FB. This includes every infant and person alive, literate or not.

Google reports 540 million users per month plus 1.5 billion photos uploaded per week. Add Twitter, Quora, Yahoo and a few more and we reach 3 billion plus people who write emails, chat, tweet, write answers to questions and ask questions, read books, see movies and TV, and so on. Now we have the de-facto measurable collective unconscious of this world, ready to be analyzed. It contains information about something inside us that we are not aware we have. This rather extravagant idea came from Carl Jung about 70 years ago. We should take him seriously, as his teachings led to the development of the Myers-Briggs and a myriad of other personality and vocational tests that proved amazingly accurate.

Social media's life support, profits, depends on meaningful information. FB reports revenues of $2.91 billion for Q2 2014, and only $0.23 billion comes from user payments or fees. 77% of all revenues are processed information monetized through advertising and other related services. The tools of traditional Big Data (the only data there is, is big data) are no longer sufficient. A few years ago we were talking in the 100 million users range. Now the data sets are in exabyte and zettabyte dimensions:

1 EB = 1000^6 bytes = 10^18 bytes = 1 000 000 000 000 000 000 B = 1 000 petabytes = 1 million terabytes = 1 billion gigabytes
1 ZB = 1,000 EB

I compiled this chart from information published.
It shows the growth of the world's storage capacity, assuming optimal compression, over the years. The 2015 data is extrapolated from Cisco and crosses one zettabyte capacity.

2. The breakthrough in high throughput and high performance computing

The successful search for the Higgs particle exceeds anything in terms of data size analyzed. The amount of data collected at the ATLAS detector of the Large Hadron Collider (LHC) in CERN, Geneva, is described like this: if all the data from ATLAS were recorded, this would fill 100,000 CDs per second. This would create a stack of CDs 450 feet high every second, which would reach to the moon and back twice each year. The data rate is also equivalent to 50 billion telephone calls at the same time. ATLAS actually only records a fraction of the data (those events that may show signs of new physics) and that rate is equivalent to 27 CDs per minute. It took 20 years and 6,000 scientists. They created a grid which has a capacity of 200 PB of disk and 300,000 cores, with most of the 150 computing centers connected via 10 Gbps links.

A new idea, the Dynamic Data Center Concept, has not yet made it mainstream, but it would be great if it did. This concept is described in a different blog entry. Imagine every computer and laptop of this world plugged into a worldwide cloud when not in use, and withdrawn just as easily as a USB storage card. Mind boggling, but this will one day be reality.

3. The explosion of HPC startups in San Francisco, California

There is a new generation of performance computing physicists who sense the affinities of social networks with super computing. All are around 30 years old and you can meet some of them attending this meetup. Many come from Stanford and Berkeley, and have previously worked in the Open Science Grid (OSG) or Fermilab but went to settle on the West Coast. Others are talented Russians – with the same talent as Sergey Brin from Google. They are now happily American.
Some extraordinary faces are from China and India. San Francisco is a place where being crazy is being normal. Actually for me all are "normal" in San Francisco. HPC needs a city like this, to rejuvenate HPC thinkers and break away from the mentality where big bucks are spent on gargantuan infrastructures, similar to the palace of Ceausescu in Romania. The dictator had some 19 churches, six synagogues and 30,000 homes demolished. No one knew what to do with the palace. The dilemma was whether to make it a shopping center or the Romanian Parliament. Traditional HPC has similar stories, like Waxahachie.

Watch what I say about user experience in this video. 95% of the scientists do not have access to super computing marvels. I say we must make high performance computing accessible to every scientist. In its ultimate incarnation, any scientist can do Higgs-like-event searches on lesser size data and be successful most of the time. See for example PiCloud. See clearly how it works. All written in Python. See clearly how much it costs. They still have serious solutions for Academia and HPC. For comparison look at the HTCondor documentation, see the installation or try to learn something called DAGMan. While simply adding features, no one paid attention to making it easy to learn and use. I did work with HTCondor engineers and let me say it, they are among the finest I ever met. All they need is an exposure to San Francisco in a consistent way.

4. Can social network giants acquire HPC / HTC competency using HR?

No. They can't. Individual HPC employees recruited through HR will not create a new culture. They will mimic the dominant thinking inside groups and lose their original identity and creativity. As Dropbox wisely discovered, the secret is to acquihire, and create an internal core competency with a startup who delivers something they don't have yet.

5. The strategy to make HPC / HTC startups successful

Yes, it is hard to have 1 million users as PiCloud does. Actually, it is impossible.
But PiCloud technology can literally deliver hundreds of millions of dollars via golden discoveries using HPC / HTC in a social company that already has 100 million users or more. The lesson we learn is this: HPC / HTC cannot parrot the social media business model of accumulating millions – never mind billions – of users. Success is not made up of features. Success is about making someone happy. You have to know that someone. Social networks are experts in making it easy for people to use everything they offer. And HPC / HTC should make the social media companies happy. It is only through this symbiosis – HPC / HTC on one side, and social media plus predictive analytics everywhere on the other – that high performance computing will be financially successful as a minimum viable product (MVP).

Reference: Five Reasons Why High Performance Computing (HPC) startups will explode in 2015 from our JCG partner Miha Ahronovitz at The Memories of a Product Manager blog....

Hosting a Maven repository on github (with sources and javadoc)

How do you make a small open sourced library available to other developers via Maven? One way is to deploy it on the Maven Central Repository. What I'd like to do is to deploy it to GitHub, so I can modify it freely. This post will tell you how to do that. The typical way I deploy artifacts to GitHub is to use mvn deploy. Here are the steps:

Use site-maven-plugin to push the artifacts to github
Use maven-javadoc-plugin to push the javadoc
Use maven-source-plugin to push the source
Configure maven to use the remote mvn-repo as a maven repository

Configure maven-deploy-plugin

First, I add the following snippet to tell maven to deploy artifacts to a temporary location inside my target directory:

<distributionManagement>
  <repository>
    <id>internal.repo</id>
    <name>Temporary Staging Repository</name>
    <url>file://${project.build.directory}/mvn-repo</url>
  </repository>
</distributionManagement>

<plugins>
  <plugin>
    <artifactId>maven-deploy-plugin</artifactId>
    <version>2.8.1</version>
    <configuration>
      <altDeploymentRepository>
        internal.repo::default::file://${project.build.directory}/mvn-repo
      </altDeploymentRepository>
    </configuration>
  </plugin>
</plugins>

Configure maven

Then I add my github.com authentication information to ~/.m2/settings.xml so that the github site-maven-plugin can push it to GitHub:

<settings>
  <servers>
    <server>
      <id>github</id>
      <password>OAUTH2TOKEN</password>
    </server>
  </servers>
</settings>

or

<settings>
  <servers>
    <server>
      <id>github</id>
      <username>GitHubLogin</username>
      <password>GitHubPassw0rd</password>
    </server>
  </servers>
</settings>

Personally, I prefer the first way, because it is safer (without explicitly showing the password).
To get the OAUTH2TOKEN for the GitHub project, please go to Settings --> Applications --> Generate new token.

Configure the site-maven-plugin

Configure the site-maven-plugin to upload from my temporary location to the mvn-repo branch on GitHub:

<plugin>
  <groupId>com.github.github</groupId>
  <artifactId>site-maven-plugin</artifactId>
  <version>0.9</version>
  <configuration>
    <message>Maven artifacts for ${project.version}</message>
    <noJekyll>true</noJekyll>
    <outputDirectory>${project.build.directory}/mvn-repo</outputDirectory>
    <branch>refs/heads/mvn-repo</branch>
    <includes>
      <include>**/*</include>
    </includes>
    <repositoryName>pengyifan-commons</repositoryName>
    <repositoryOwner>yfpeng</repositoryOwner>
    <server>github</server>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>site</goal>
      </goals>
      <phase>deploy</phase>
    </execution>
  </executions>
</plugin>

When this post was written, there was a bug in version 0.9 of site-maven-plugin. To work around it, please git clone the 0.10-SNAPSHOT version and mvn install it manually.

Configure maven-source-plugin

To add the source code package to the mvn-repo, we need to configure the maven-source-plugin. Add the following code to pom.xml:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-source-plugin</artifactId>
  <version>2.3</version>
  <executions>
    <execution>
      <id>attach-sources</id>
      <goals>
        <goal>jar</goal>
      </goals>
    </execution>
  </executions>
</plugin>

Configure maven-javadoc-plugin

To add the javadoc package to the mvn-repo, we need to configure the maven-javadoc-plugin. Add the following code to pom.xml:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <executions>
    <execution>
      <id>attach-javadocs</id>
      <goals>
        <goal>jar</goal>
      </goals>
    </execution>
  </executions>
</plugin>

Now run mvn clean deploy.
I saw maven-deploy-plugin "upload" the files to my local staging repository in the target directory, then site-maven-plugin commit those files and push them to the server. To verify all binaries are there, visit GitHub in the browser, and select the mvn-repo branch.

Configure maven to use the remote mvn-repo as a maven repository

There's one more step we should take, which is to configure any poms to know where our repository is. We can add the following snippet to any project's pom.xml:

<repositories>
  <repository>
    <id>PROJECT-NAME-mvn-repo</id>
    <url>https://raw.github.com/USERNAME/PROJECT-NAME/mvn-repo/</url>
    <snapshots>
      <enabled>true</enabled>
      <updatePolicy>always</updatePolicy>
    </snapshots>
  </repository>
</repositories>

Reference: Hosting a Maven repository on github (with sources and javadoc) from our JCG partner Yifan Peng at the PGuru blog....

Testing mail code in Spring Boot application

Whilst building a Spring Boot application you may encounter the need to add a mail configuration. Actually, configuring mail in Spring Boot does not differ much from configuring it in a Spring Boot-less application. But how do you test that the mail configuration and submission are working fine? Let's have a look.

I assume that we have a simple Spring Boot application bootstrapped. If not, the easiest way to do it is by using the Spring Initializr.

Adding javax.mail dependency

We start by adding the javax.mail dependency to build.gradle: compile 'javax.mail:mail:1.4.1'. We will also need Spring Context Support (if not present) that contains the JavaMailSender support class. The dependency is: compile("org.springframework:spring-context-support")

Java-based Configuration

Spring Boot favors Java-based configuration. In order to add the mail configuration, we add a MailConfiguration class annotated with the @Configuration annotation. The properties are stored in mail.properties (it is not required, though). Property values can be injected directly into beans using the @Value annotation:

@Configuration
@PropertySource("classpath:mail.properties")
public class MailConfiguration {

    @Value("${mail.protocol}")
    private String protocol;
    @Value("${mail.host}")
    private String host;
    @Value("${mail.port}")
    private int port;
    @Value("${mail.smtp.auth}")
    private boolean auth;
    @Value("${mail.smtp.starttls.enable}")
    private boolean starttls;
    @Value("${mail.from}")
    private String from;
    @Value("${mail.username}")
    private String username;
    @Value("${mail.password}")
    private String password;

    @Bean
    public JavaMailSender javaMailSender() {
        JavaMailSenderImpl mailSender = new JavaMailSenderImpl();
        Properties mailProperties = new Properties();
        mailProperties.put("mail.smtp.auth", auth);
        mailProperties.put("mail.smtp.starttls.enable", starttls);
        mailSender.setJavaMailProperties(mailProperties);
        mailSender.setHost(host);
        mailSender.setPort(port);
        mailSender.setProtocol(protocol);
        mailSender.setUsername(username);
        mailSender.setPassword(password);
        return mailSender;
    }
}

The @PropertySource annotation makes mail.properties available for injection with the @Value annotation. If this is not done, you may expect an exception: java.lang.IllegalArgumentException: Could not resolve placeholder '<name>' in string value "${<name>}".

And the mail.properties:

mail.protocol=smtp
mail.host=localhost
mail.port=25
mail.smtp.auth=false
mail.smtp.starttls.enable=false
mail.from=me@localhost
mail.username=
mail.password=

Mail endpoint

In order to be able to send an email in our application, we can create a REST endpoint. We can use Spring's SimpleMailMessage in order to quickly implement this endpoint. Let's have a look:

@RestController
class MailSubmissionController {

    private final JavaMailSender javaMailSender;

    @Autowired
    MailSubmissionController(JavaMailSender javaMailSender) {
        this.javaMailSender = javaMailSender;
    }

    @RequestMapping("/mail")
    @ResponseStatus(HttpStatus.CREATED)
    SimpleMailMessage send() {
        SimpleMailMessage mailMessage = new SimpleMailMessage();
        mailMessage.setTo("someone@localhost");
        mailMessage.setReplyTo("someone@localhost");
        mailMessage.setFrom("someone@localhost");
        mailMessage.setSubject("Lorem ipsum");
        mailMessage.setText("Lorem ipsum dolor sit amet [...]");
        javaMailSender.send(mailMessage);
        return mailMessage;
    }
}

Running the application

We are now ready to run the application. If you use the CLI, type gradle bootRun, open the browser and navigate to localhost:8080/mail. What you should see is actually an error, saying that the mail server connection failed. As expected.

Fake SMTP Server

FakeSMTP is a free fake SMTP server with GUI, written in Java, for testing emails in applications. We will use it to verify if the submission works. Please download the application and simply run it by invoking java -jar fakeSMTP-<version>.jar. After launching Fake SMTP Server, start the server. Now you can invoke the REST endpoint again and see the result in Fake SMTP!
But by testing I did not mean manual testing! The application is still useful, but we want to automatically test the mail code.

Unit testing mail code

To be able to automatically test the mail submission, we will use Wiser – a framework / utility for unit testing mail based on SubEtha SMTP. SubEthaSMTP's simple, low-level API is suitable for writing almost any kind of mail-receiving application. Using Wiser is very simple. Firstly, we need to add a test dependency to build.gradle: testCompile("org.subethamail:subethasmtp:3.1.7"). Secondly, we create an integration test with JUnit, Spring and Wiser:

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = Application.class)
@WebAppConfiguration
public class MailSubmissionControllerTest {

    private Wiser wiser;

    @Autowired
    private WebApplicationContext wac;
    private MockMvc mockMvc;

    @Before
    public void setUp() throws Exception {
        wiser = new Wiser();
        wiser.start();
        mockMvc = MockMvcBuilders.webAppContextSetup(wac).build();
    }

    @After
    public void tearDown() throws Exception {
        wiser.stop();
    }

    @Test
    public void send() throws Exception {
        // act
        mockMvc.perform(get("/mail"))
            .andExpect(status().isCreated());
        // assert
        assertReceivedMessage(wiser)
            .from("someone@localhost")
            .to("someone@localhost")
            .withSubject("Lorem ipsum")
            .withContent("Lorem ipsum dolor sit amet [...]");
    }
}

The SMTP server is initialized and started in the @Before method and stopped in the @After method. After sending a message, the assertion is made. The assertion needs to be created, as the framework does not provide any.
As you will notice, we need to operate on the Wiser object, which provides a list of received messages:

public class WiserAssertions {

    private final List<WiserMessage> messages;

    public static WiserAssertions assertReceivedMessage(Wiser wiser) {
        return new WiserAssertions(wiser.getMessages());
    }

    private WiserAssertions(List<WiserMessage> messages) {
        this.messages = messages;
    }

    public WiserAssertions from(String from) {
        findFirstOrElseThrow(m -> m.getEnvelopeSender().equals(from),
            assertionError("No message from [{0}] found!", from));
        return this;
    }

    public WiserAssertions to(String to) {
        findFirstOrElseThrow(m -> m.getEnvelopeReceiver().equals(to),
            assertionError("No message to [{0}] found!", to));
        return this;
    }

    public WiserAssertions withSubject(String subject) {
        Predicate<WiserMessage> predicate =
            m -> subject.equals(unchecked(getMimeMessage(m)::getSubject));
        findFirstOrElseThrow(predicate,
            assertionError("No message with subject [{0}] found!", subject));
        return this;
    }

    public WiserAssertions withContent(String content) {
        findFirstOrElseThrow(m -> {
            ThrowingSupplier<String> contentAsString =
                () -> ((String) getMimeMessage(m).getContent()).trim();
            return content.equals(unchecked(contentAsString));
        }, assertionError("No message with content [{0}] found!", content));
        return this;
    }

    private void findFirstOrElseThrow(Predicate<WiserMessage> predicate,
                                      Supplier<AssertionError> exceptionSupplier) {
        messages.stream().filter(predicate)
            .findFirst().orElseThrow(exceptionSupplier);
    }

    private MimeMessage getMimeMessage(WiserMessage wiserMessage) {
        return unchecked(wiserMessage::getMimeMessage);
    }

    private static Supplier<AssertionError> assertionError(String errorMessage, String... args) {
        return () -> new AssertionError(MessageFormat.format(errorMessage, args));
    }

    public static <T> T unchecked(ThrowingSupplier<T> supplier) {
        try {
            return supplier.get();
        } catch (Throwable e) {
            throw new RuntimeException(e);
        }
    }

    interface ThrowingSupplier<T> {
        T get() throws Throwable;
    }
}

Summary

With just a couple of lines of code we were able to automatically test mail code. The example presented in this article is not sophisticated, but it shows how easy it is to get started with SubEtha SMTP and Wiser. How do you test your mail code?

Reference: Testing mail code in Spring Boot application from our JCG partner Rafal Borowiec at the Codeleak.pl blog....
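The unchecked helper in WiserAssertions is worth calling out: it is a general pattern for invoking code that declares checked exceptions from inside lambdas and streams, where checked exceptions are not allowed. A framework-free sketch of the same idea (the class and method names here are illustrative, not from the article):

```java
import java.text.MessageFormat;
import java.util.function.Supplier;

public class Unchecked {

    // A Supplier whose get() is allowed to throw checked exceptions.
    interface ThrowingSupplier<T> {
        T get() throws Throwable;
    }

    // Invokes the supplier, rethrowing any checked exception as an
    // unchecked one, so throwing calls can be used inside lambdas.
    static <T> T unchecked(ThrowingSupplier<T> supplier) {
        try {
            return supplier.get();
        } catch (Throwable e) {
            throw new RuntimeException(e);
        }
    }

    // Lazily builds an AssertionError with a MessageFormat-style message,
    // suitable for Stream.findFirst().orElseThrow(...).
    static Supplier<AssertionError> assertionError(String message, Object... args) {
        return () -> new AssertionError(MessageFormat.format(message, args));
    }

    // Stand-in for a call like MimeMessage::getSubject that declares
    // a checked exception.
    static String readSubject() throws Exception {
        return "Lorem ipsum";
    }

    public static void main(String[] args) {
        // The throwing call becomes lambda-friendly:
        System.out.println(unchecked(() -> readSubject()));
        System.out.println(assertionError("No message from [{0}] found!", "someone@localhost")
                .get().getMessage());
    }
}
```

The Supplier<AssertionError> indirection matters: the error message is only formatted when a matching message is genuinely absent, not on every assertion call.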

Getters/Setters. Evil. Period.

There is an old debate, started in 2003 by Allen Holub in his famous article Why getter and setter methods are evil, about whether getters/setters are an anti-pattern to be avoided or something we inevitably need in object-oriented programming. I’ll try to add my two cents to this discussion. The gist of the following text is this: getters and setters are a terrible practice, and those who use them can’t be excused. Again, to avoid any misunderstanding, I’m not saying that get/set should be avoided when possible. No. I’m saying that you should never have them near your code. Arrogant enough to catch your attention? You’ve been using that get/set pattern for 15 years and you’re a respected Java architect? And you don’t want to hear that nonsense from a stranger? Well, I understand your feelings. I felt almost the same when I stumbled upon Object Thinking by David West, the best book about object-oriented programming I’ve read so far. So please. Calm down and try to understand while I try to explain.

Existing Arguments

There are a few arguments against “accessors” (another name for getters and setters) in an object-oriented world. All of them, I think, are not strong enough. Let’s briefly go through them.

Ask, Don’t Tell: Allen Holub says, “Don’t ask for the information you need to do the work; ask the object that has the information to do the work for you”.

Violated Encapsulation Principle: An object can be torn apart by other objects, since they are able to inject any new data into it through setters. The object simply can’t encapsulate its own state safely enough, since anyone can alter it.

Exposed Implementation Details: If we can get an object out of another object, we are relying too much on the first object’s implementation details. If tomorrow it changes, say, the type of that result, we have to change our code as well.

All these justifications are reasonable, but they are missing the main point.
Fundamental Misbelief

Most programmers believe that an object is a data structure with methods. I’m quoting Getters and Setters Are Not Evil, an article by Bozhidar Bozhanov: But the majority of objects for which people generate getters and setters are simple data holders. This misconception is the consequence of a huge misunderstanding! Objects are not “simple data holders”. Objects are not data structures with attached methods. This “data holder” concept came to object-oriented programming from procedural languages, especially C and COBOL. I’ll say it again: an object is not a set of data elements and functions that manipulate them. An object is not a data entity. What is it then?

A Ball and A Dog

In true object-oriented programming, objects are living creatures, like you and me. They are living organisms, with their own behaviour, properties and a life cycle. Can a living organism have a setter? Can you “set” a ball to a dog? Not really. But that is exactly what the following piece of software is doing:

Dog dog = new Dog();
dog.setBall(new Ball());

How does that sound? Can you get a ball from a dog? Well, you probably can, if she ate it and you’re doing surgery. In that case, yes, we can “get” a ball from a dog. This is what I’m talking about:

Dog dog = new Dog();
Ball ball = dog.getBall();

Or an even more ridiculous example:

Dog dog = new Dog();
dog.setWeight("23kg");

Can you imagine this transaction in the real world? Does it look similar to what you’re writing every day? If yes, then you’re a procedural programmer. Admit it. And this is what David West has to say about it, on page 30 of his book: Step one in the transformation of a successful procedural developer into a successful object developer is a lobotomy. Do you need a lobotomy? Well, I definitely needed one and received it, while reading West’s Object Thinking.

Object Thinking

Start thinking like an object and you will immediately rename those methods.
This is what you will probably get:

Dog dog = new Dog();
dog.take(new Ball());
Ball ball = dog.give();

Now, we’re treating the dog as a real animal, who can take a ball from us and can give it back when we ask. Worth mentioning is that the dog can’t give NULL back. Dogs simply don’t know what NULL is! Object thinking immediately eliminates NULL references from your code.

Besides that, object thinking will lead to object immutability, like in the “weight of the dog” example. You would re-write that like this instead:

Dog dog = new Dog("23kg");
String weight = dog.weight();

The dog is an immutable living organism, which doesn’t allow anyone from the outside to change her weight, or size, or name, etc. She can tell, on request, her weight or name. There is nothing wrong with public methods that demonstrate requests for certain “insides” of an object. But these methods are not “getters” and they should never have the “get” prefix. We’re not “getting” anything from the dog. We’re not getting her name. We’re asking her to tell us her name. See the difference? We’re not talking semantics here, either. We are differentiating the procedural programming mindset from an object-oriented one. In procedural programming, we’re working with data, manipulating them, getting, setting, and deleting when necessary. We’re in charge, and the data is just a passive component. The dog is nothing to us — it’s just a “data holder”. It doesn’t have its own life. We are free to get whatever is necessary from it and set any data into it. This is how C, COBOL, Pascal and many other procedural languages work(ed). On the contrary, in a true object-oriented world, we treat objects like living organisms, with their own date of birth and a moment of death — with their own identity and habits, if you wish. We can ask a dog to give us some piece of data (for example, her weight), and she may return us that information. But we always remember that the dog is an active component.
She decides what will happen after our request. That’s why it is conceptually incorrect to have any methods starting with set or get in an object. And it’s not about breaking encapsulation, as many people argue. It is about whether you’re thinking like an object or you’re still writing COBOL in Java syntax. PS. Yes, you may ask: what about JavaBeans, JPA, JAXB, and the many other Java APIs that rely on the get/set notation? What about Ruby’s built-in feature that simplifies the creation of accessors? Well, all of that is our misfortune. It is much easier to stay in the primitive world of procedural COBOL than to truly understand and appreciate the beautiful world of true objects. PPS. Forgot to say: yes, dependency injection via setters is also a terrible anti-pattern. More about it in one of the next posts! Related Posts You may also find these posts interesting: Anti-Patterns in OOP, Avoid String Concatenation, Objects Should Be Immutable, Why NULL is Bad?, OOP Alternative to Utility Classes. Reference: Getters/Setters. Evil. Period. from our JCG partner Yegor Bugayenko at the About Programming blog....
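For completeness, the Dog snippets scattered through the article compile into something like the following minimal sketch. Only the method names come from the text; the internals (the ball field, the refusal to hand back null) are assumptions made here for illustration:

```java
// Hypothetical Ball type; the article never defines it.
class Ball {
}

public class Dog {

    private final String weight; // fixed at birth; no setter exists
    private Ball ball;           // whatever the dog is currently holding

    public Dog(String weight) {
        this.weight = weight;
    }

    // We ask the dog to take the ball; we don't "set" her state.
    public void take(Ball ball) {
        this.ball = ball;
    }

    // We ask the dog to give the ball back. She refuses (throws)
    // rather than return NULL, because dogs don't know what NULL is.
    public Ball give() {
        if (ball == null) {
            throw new IllegalStateException("I have no ball to give!");
        }
        Ball given = ball;
        ball = null;
        return given;
    }

    // A request, not a "getter": we ask her to tell us her weight.
    public String weight() {
        return weight;
    }

    public static void main(String[] args) {
        Dog dog = new Dog("23kg");
        dog.take(new Ball());
        Ball ball = dog.give();
        System.out.println(dog.weight()); // prints 23kg
        System.out.println(ball != null); // prints true
    }
}
```

Note that there is no way, short of constructing a new Dog, to change the weight: the immutability the article argues for falls out of the final field rather than any convention.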

Nightmare on Agile Street 2: Managed Agile

Blow me down, it’s happening again… I’m awake. I’m wet; it’s a cold sweat. It’s the small hours of the morning and the dream is horrid…. I’ve been sent to Coventry. I’m in a client’s office waiting for a meeting to start. The development manager is telling me she has selected me to help them become Agile; she checked me out online and recognises that I am pragmatic. That’s why they chose a new tool called Kjsb, it’s pragmatic too. Pragmatic. God, does she know how much I hate that word? Pragmatic to me? I recognise that Agile and Waterfall are points on a spectrum and that most organizations, for better or worse, fall somewhere in-between. I recognise that every organisation exists within a context and you need to consider that. And even change the context. But pragmatic? Pragmatic is Satan’s way of saying “Heaven doesn’t work in the Real World(TM)”. The CTO enters and is putting down redlines. He knows all about Agile, but his people… it’s his people you see… you can’t trust them, they are like children, you can’t let them have too much say. They need a strong man to lead them. They had a developer here once who practiced Agile. He did that test driven stuff. He didn’t give out dates. He gave Agile a bad name in the company. The PMO will never accept that. Fortunately they have just bought Kjsb. This wonderful tool will fix everything. Kjsb has a feature that translates burn-downs into Gantt charts at the click-of-a-mouse. And back again. The problem is: teams still aren’t shipping on schedule. They need predictability. Predictability is the one thing they really need. And flexibility. Flexibility is important. Flexibility and predictability, the two things they really need. And no variation in features. They can’t trade features for time. Fixed scope, flexibility and predictability are the three things they need. But… they have unforeseen technical problems – not bugs you understand, but unforeseen technical problems. They really need to be able to deal with those things.
Technical fixes, fixed scope, flexibility and predictability are the four things they need. Nobody expects… I want to explain queuing theory… a grasp of basic queuing theory is the one thing they need – stick their feet on the ground and cement them to it. One of the teams runs Agile. It is run by the CTO himself and it’s good. The other teams… well, they don’t really have that much experience. Though the managers are going to get their Scrum certificates real soon now. How, he asks, can we get everyone else to buy in? How can we get the PMO to buy in? How can they make the Product Owners buy in? Mention of the PMO stirs the old guy in the corner, the one who’s hiding behind his laptop, the widescreen laptop with the numeric keypad. And mention of the Product Owners causes the Analyst in the other corner – the one hiding behind the ultra thin laptop – to raise an eyebrow. Now I see they all have laptops out in front of them… and some of them phones too. In between moving their mouths each of them is staring at their screens. I’d better say something. “Well,” I start…, “how about we get people from the team who are doing this well to talk about their experience?” Blank looks all round. Are they listening? Or doing e-mail? “Could you use them as your own case study?” No – that won’t work because that team is so very different from everyone else in the company that nobody will believe it. They are all individuals. Besides, the developers won’t be at the buy-in meeting. It’s for managers to buy in. Once the managers buy in, the developers will be told what to do. …. I try a different approach: “Instead of talking to the PMO one day, and the Product Managers the next day, and the Development Managers the day after… why don’t we go vertical and take each development team in turn, with the appropriate project, product and development managers?” No. Managers manage; they are the only ones who need to know. And they are the ones who will be allocating the work with Kjsb.
“Need to know” – “Allocating work”. Did I really just hear those words? Whose version of Agile have they been reading? O my god, these guys are going on a Scrum Master course next week. There is going to be a bun fight; I don’t know who I worry about most, these guys or the poor sod who is teaching the class…. “Can I just check,” I ask, “each team has a project manager assigned, a product manager, a team lead, and they will soon have a Scrum Master too?” Heads nod. “And… there are several development managers spanning several teams each?” Yes. “So if I’m counting right… each team contains about 4 developers and 1 tester? (Plus a UAT cycle lagging several weeks later.)” Yes. “I see…” Am I keeping a straight face? Does my face hide my horror? 3+ managers for every 5 workers? Either this business prints cash or they will soon be bust. …. “Really,” says the development manager, “we are talking about change. I have 12 years change management experience from call centres to financial services, the CTO hand picked me to lead this change, software development is just the same as any other change.” When did Fred Brooks come into the room? In fact, what is he doing in Coventry? He lives in Carolina. Why is he wearing a dog collar? And why is it 1974? He’s now standing at the lectern reading from a tattered copy of The Mythical Man-Month. “In many ways,” says Brooks, “managing a large computer programming project is like managing any other large undertaking – in more ways than most programmers believe. But in many other ways it is different – in more ways than most professional managers expect.” Well, this is a dream, what do I expect? It’s 2014 again… “The key is to set the framework,” she continues, “establish boundaries so people know what their responsibilities are, then we can empower them.”
Fred has gone; standing at the lectern in a dog collar is Henry Mintzberg – my management hero – reading from another tattered book entitled Managing: “the later term empowerment did not change [manager control], because the term itself indicated that the power remained with the manager. Truly empowered workers, such as doctors in a hospital, even bees in the hive, do not await gifts from their managerial gods; they know what they are there to do and just do it.” Empowerment is dis-empowered: the word says one thing, but the message given by using it is the opposite. “What we want is consistency across teams,” says the CTO, who now resembles Basil Fawlty. (What happened to “all my teams are different”?) “And a stage gate process,” says the PMO man, or is it Terry Jones? “And clear roles and responsibilities,” says Cardinal Fang. “Nobody expects the Spanish Inquisition!” says Michael Palin – where did he come from? …. “It seems to me,” starts the Product Owner, “that we are making a lot more paperwork for ourselves here.” O the voice of sanity! “Yes,” I begin…, “if you attempt to run both an Agile and a Waterfall process, that is what you will have!” Silence. I continue, “Over time, as you see Agile work, as people understand it, I would expect you will move to a more Agile approach in general and be able to reduce the documentation.” “No.” The PMO seems quite certain of this. “I don’t think that will happen here. We need the control and certainty that the waterfall and our stage gates provide. We won’t be doing that.” Poor Product Owner: if he is lucky he’ll be shown the door; if he’s unlucky he’ll be retained. … “If you want people to buy in,” I suggest, “we must let people have their say.” The PMO is ready for this: “Yes, we know that, we’ve already arranged for a survey,” and she reads the questions: Q1: “Do you agree our development process needs to change?” Yes or No.
Q2: “Our organization wishes to remain in control but we want the benefits of Agile. Do you think we should: Embrace Marxism in its entirety, Mandate Waterfall throughout the organization, or Create a Managed Agile process?” Q3: “Have you seen the features provided by Kjsb?” Yes or No. O my god, it’s a North Korean election. I suggest the questions are a little bit leading. “Well, we don’t want people being awkward,” chips in the CTO. … We get up to leave. “You know,” I say, “when you’ve had a chance to run this process for a while you will want to inspect it and modify it” – but while I’m saying that I’m thinking: “No plan survives contact with the enemy; start small, see what happens.” “O we’ve already done that. This process is the result of doing that. We won’t be changing it.” … Back in my kitchen, a warm milk in my hand. A bad dream. It was a bad dream. That stuff never happened. How could it? The contradictions are so obvious even a small child could see them. Couldn’t they? As I climb the stairs back to bed, a terrible thought: what if it wasn’t a nightmare? What if it was real? And what if they call me back for help? Could anyone help such people? Reference: Nightmare on Agile Street 2: Managed Agile from our JCG partner Allan Kelly at the Agile, Lean, Patterns blog....

Agile is a simple topic

The Agile manifesto is probably one of the best manifestos ever written in software development, if not the best. Simple and elegant. Good vs bad, 1 2 3 4, done. It is so simple that I am constantly disappointed by the amount of stuff floating around on the Internet about what is agile and what is not, how to do agile, Scrum, Kanban, and who knows what will pop up next year claiming to be another king of agile. If I ever tell you we are the purest agile team and we don’t have sprints, we don’t have stand-up meetings, we don’t have story boards, we don’t have burn-down charts, we don’t have planning poker cards, we don’t have any of the buzzwords, most of the so-called IT consultants will hang me on the spot. Let’s face it: being pure isn’t about what you have, it is about what you don’t! Pure gold contains nothing but gold; that’s why it is super valuable. We should build our teams on developers, code and business needs. These are the three pure ingredients of a team; take any one away and the team is no more.

Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away. – Antoine de Saint-Exupery

Exactly: the manifesto says we value processes and tools less, and yet we see all kinds of weird super-imposed processes and tools everywhere. “Look, we have standups, we have sprints, we have story boards, therefore we are agile.” NO, absolutely NOT. You can walk like a duck and quack like a duck, but you are still not a duck. But why the hype anyway? Partly the consulting companies are to blame: they try to sell the buzzwords to management so that they can make $$$ by simply asking the developers to do what they already know, writing code, but in a different way. But the biggest enemies are the developers themselves, especially team leaders and managers. Because they are too lazy to know the developers (the people), too lazy to learn the code (the working software), and too lazy to analyse the business needs.
Because “at the end of the day I need to show my developers that I am doing a manager’s work”, “what is the shortcut?”, “look, I just got this Scrum thing from a random blog post: standups, 5 mins, no problem. Poker cards, easy. Story boards, no big deal…”. “Done, now we are Scrum, now we are agile; if things fail, it is the developers’ problem”. Goodbye, there goes a team. So now you question me: “you said agile is simple, why does it look so hard now?”

Any fool can make something complicated. It takes a genius to make it simple. – Woody Guthrie

People are born equal; a genius doesn’t magically pop up, it takes real hard work to reach that level. Let’s go back to the origin, the mighty manifesto. Get rid of all unnecessary processes and tools, and go talk to people. “What is Jimmy’s strength? What can we do to make up for Sam’s weakness? Are David and Carl a good pair?” Stop typing inside Word or Excel, go read the real code: “What can we do to enhance the clarity of the code? How do we improve performance without too much sacrifice? What are the alternative ways to extend our software?” Stop coming up with imaginary use cases, go meet the customer: “What are your pain points? What are the 3 most important features that need to be enhanced and delivered? Based on our statistics, we believe that if we build feature X in such a way, the business can grow by Y%; do you think we should do this?” Stop wasting our lives on keeping a useless backlog; go see the 3 biggest opportunities and threats and work on them, rinse and repeat. In fact, that is exactly how evolution brought humans to this stage: “eliminate the immediate threat to ensure short-term survival, and seek opportunities for long-term growth”. As we are all descendants of mother nature, and incapable of outsmarting her, we should learn from her.
A real process/methodology grows from the team; it is not superimposed onto the team. A real process/methodology does not have a name, because it is unique to each team. Grow your own dream team! Thanks for wasting your time reading my rant. Reference: Agile is a simple topic from our JCG partner Dapeng Liu at the Developers Corner blog....

Some quality metrics that helped my teams

I’ve been asked the question “what are the best metrics to improve software quality?” (or similar) a million times; this blog post is a selfish time saver, and you are probably reading this because you asked me a similar question and I sent you here. Firstly, I am not a fan of metrics and I consider a good 99% of the recommended software quality metrics pure rubbish. Having said that, there are a few metrics that have helped teams I worked with, and these are the ones I will share. Secondly, metrics should be used to drive change. I believe it is fundamental that the metric tracked is clearly associated with the reason why it is tracked, so that people don’t focus on the number but on the benefit that observing the number will drive.

Good metric #1: In order to be able to refactor without worrying about breaking what we have already built, we decided to raise the unit test coverage to >95% and measure it. Builds would fail if the metric was not respected.

Good metric #2: In order to reduce code complexity, improve readability and make changes easier, we set a limit on and measured the maximum size of each method (15 lines) and the cyclomatic complexity (I don’t remember the number, but I think it was <10). Builds would fail if the metric was not respected.

Good metric #3: In order to continuously deliver low-complexity, easily testable units of work and help with predictability, we started measuring the full cycle time of user stories from inception to production, with the goal of keeping it between 3 and 5 days. When we had user stories that took more than 5 days, we retrospected and examined the reasons.

In the 3 cases above, the focus is on the goal; the number is what we think will drive the change, and it can always be changed. If people don’t understand why they write unit tests, they will achieve unit test coverage without guaranteeing the ability to refactor, for example by writing fake tests that don’t have assertions.
We should never decouple the metric from the reason we are measuring something. These are the good metrics, for me. If you want to see some of the bad ones, have a look at this article I wrote some time ago on confrontational metrics and delivery teams that don’t give a damn about their customers. http://mysoftwarequality.wordpress.com/2012/12/27/the-wrath-of-the-mighty-metric/Reference: Some quality metrics that helped my teams from our JCG partner Augusto Evangelisti at the mysoftwarequality blog....
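Metric #1 above can be wired into the build so that it fails automatically. One way to do this (an assumption on my part; the article does not name its tooling) is the Gradle JaCoCo plugin's coverage verification task:

```groovy
// build.gradle - fail the build when line coverage drops below 95%
apply plugin: 'java'
apply plugin: 'jacoco'

jacocoTestCoverageVerification {
    violationRules {
        rule {
            limit {
                counter = 'LINE'
                value = 'COVEREDRATIO'
                minimum = 0.95
            }
        }
    }
}

// Make "gradle check" enforce the threshold on every build.
check.dependsOn jacocoTestCoverageVerification
```

Method length and cyclomatic complexity limits (metric #2) are typically enforced the same way, with a static-analysis plugin such as Checkstyle or PMD breaking the build when a threshold is exceeded.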
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.