What's New Here?


Migrating from javaagent to JVMTI: our experience

When you need to gather data from within the JVM, you will find yourself working dangerously close to the Java Virtual Machine internals. Luckily, there are ways to avoid getting bogged down by JVM implementation details. The fathers of Java have given you not one but two beautiful tools to work with. In this post we will explain the differences between the two approaches and explain why we recently ported a significant part of our algorithms.

Javaagent

The first option is to use the java.lang.instrument interface. This approach loads your monitoring code into the JVM itself using the -javaagent startup parameter. Being an all-Java option, javaagents tend to be the first path taken if your background is in Java development. The best way to illustrate how you can benefit from the approach is via an example. Let us create a truly simple agent, responsible for monitoring all method invocations in your code. Whenever a method is invoked, the agent will log the invocation to the standard output stream:

```java
import org.objectweb.asm.*;

public class MethodVisitorNotifyOnMethodEntry extends MethodVisitor {

    public MethodVisitorNotifyOnMethodEntry(MethodVisitor mv) {
        super(Opcodes.ASM4, mv);
        mv.visitMethodInsn(Opcodes.INVOKESTATIC,
                Type.getInternalName(MethodVisitorNotifyOnMethodEntry.class),
                "callback", "()V");
    }

    public static void callback() {
        System.out.println("Method called!");
    }
}
```

You can take the example above, package it as a javaagent (essentially a small JAR file with a special MANIFEST.MF), and launch the target application with the agent attached, similar to the following:

java -javaagent:path-to/your-agent.jar com.yourcompany.YourClass

When launched, you would see a bunch of "Method called!" messages in your log files. And in our case nothing more. But the concept is powerful, especially when combined with bytecode instrumentation tools such as ASM or CGLIB, as in our example above.
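To show where that MethodVisitor plugs in, here is a minimal sketch of the surrounding agent class. The names MethodEntryAgent and LoggingTransformer are our own illustration, not code from the post, and a real transformer would run the incoming bytes through ASM's ClassReader/ClassWriter pair wired to the MethodVisitor above, instead of returning null:

```java
import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

// Hypothetical agent entry point; a real agent class must be public and be
// named in the JAR manifest (Premain-Class: MethodEntryAgent).
class MethodEntryAgent {

    // Called by the JVM before main() when started with -javaagent:agent.jar
    public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer(new LoggingTransformer());
    }

    static class LoggingTransformer implements ClassFileTransformer {
        @Override
        public byte[] transform(ClassLoader loader, String className,
                                Class<?> classBeingRedefined,
                                ProtectionDomain protectionDomain,
                                byte[] classfileBuffer) {
            // A real agent would rewrite classfileBuffer with ASM here.
            // Returning null tells the JVM to keep the class unchanged.
            System.out.println("Candidate for instrumentation: " + className);
            return null;
        }
    }
}
```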
In order to keep the example easy to understand, we have skipped some details. But it is relatively simple: when using the java.lang.instrument package you start by writing your own agent class, implementing public static void premain(String agentArgs, Instrumentation inst). Then you register your ClassTransformer with inst.addTransformer. As you most likely wish to avoid direct manipulation of class bytecode, you would use a bytecode manipulation library, such as ASM in our example. With it, you just have to implement a couple more interfaces – ClassVisitor (skipped for brevity) and MethodVisitor.

JVMTI

The second path will eventually lead you to JVMTI. The JVM Tool Interface (JVM TI) is a standard native API that allows native libraries to capture events and control the Java Virtual Machine. Access to JVMTI is usually packaged in a specific native library called an agent. The example below demonstrates the very same callback registration already seen in the javaagent section, but this time implemented as JVMTI calls:

```c
void JNICALL notifyOnMethodEntry(jvmtiEnv *jvmti_env, JNIEnv* jni_env,
                                 jthread thread, jmethodID method) {
    fputs("method was called!\n", stdout);
}

int prepareNotifyOnMethodEntry(jvmtiEnv *jvmti) {
    jvmtiError error;
    jvmtiCapabilities requestedCapabilities, potentialCapabilities;
    memset(&requestedCapabilities, 0, sizeof(requestedCapabilities));

    if ((error = (*jvmti)->GetPotentialCapabilities(jvmti, &potentialCapabilities)) != JVMTI_ERROR_NONE)
        return 0;

    if (potentialCapabilities.can_generate_method_entry_events) {
        requestedCapabilities.can_generate_method_entry_events = 1;
    } else {
        // not possible on this JVM
        return 0;
    }

    if ((error = (*jvmti)->AddCapabilities(jvmti, &requestedCapabilities)) != JVMTI_ERROR_NONE)
        return 0;

    jvmtiEventCallbacks callbacks;
    memset(&callbacks, 0, sizeof(callbacks));
    callbacks.MethodEntry = notifyOnMethodEntry;

    if ((error = (*jvmti)->SetEventCallbacks(jvmti, &callbacks, sizeof(callbacks))) != JVMTI_ERROR_NONE)
        return 0;

    if ((error = (*jvmti)->SetEventNotificationMode(jvmti, JVMTI_ENABLE, JVMTI_EVENT_METHOD_ENTRY, (jthread)NULL)) != JVMTI_ERROR_NONE)
        return 0;

    return 1;
}
```

There are several differences between the approaches. For example, you can get more information via JVMTI than via an instrumentation agent. But the most crucial difference between the two derives from the loading mechanics. While Instrumentation agents are loaded inside the heap and governed by the same JVM, JVMTI agents are not governed by the JVM rules and are thus not affected by JVM internals such as the GC or runtime error handling. What this means is best explained through our own experience.

Migrating from -javaagent to JVMTI

When we started building our memory leak detector three years ago, we did not pay much attention to the pros and cons of these approaches. Without much hesitation we implemented the solution as a -javaagent. Over the years we have come to understand the implications, some of which were not too pleasant, so in our latest release we ported a significant part of our memory leak detection mechanics to native code. What made us reach this conclusion?

First and foremost: when residing in the heap, you need to accommodate yourself next to the application's own memory structures, which, as learned through painful experience, can lead to problems in itself. When your application has already filled the heap close to its full extent, the last thing you need is a memory leak detector that only seems to speed up the arrival of the OutOfMemoryError.

But the added heap consumption was the lesser of the evils haunting us. The real problem was that our data structures were cleaned by the same garbage collector that the monitored application itself was using. This resulted in longer and more frequent GC pauses.
While most applications did not mind the few extra percentage points we added to heap consumption, we learned that the unpredictable impact on Full GC pauses was something we needed to get rid of.

To make things worse, Plumbr works by monitoring all object creations and collections. When you monitor something, you need to keep track. Keeping track tends to create objects. Created objects become eligible for GC. And when it is GC you are monitoring, you have just created a vicious circle: the more objects are garbage collected, the more monitors you create, triggering even more frequent GC runs, and so on.

When keeping track of objects, we are notified about the death of objects by JVMTI. However, JVMTI does not permit the use of JNI during those callbacks. So if we keep the statistics about tracked objects in Java, it is not possible to instantly update the statistics when we are notified of changes. Instead, the changes need to be cached and applied when we know the JVM is in the correct state. This created unnecessary complexity and delays in updating the actual statistics.

Reference: Migrating from javaagent to JVMTI: our experience from our JCG partner Ago Allikmaa at the Plumbr blog.
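The vicious circle is easy to see in miniature. The sketch below is our own toy illustration, not Plumbr's actual code: tracking object lifecycles from inside the heap allocates one extra tracking object per monitored object, so the monitor itself feeds the garbage collector it is observing:

```java
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;
import java.util.ArrayList;
import java.util.List;

// Toy in-heap tracker: every monitored allocation costs one WeakReference,
// which is itself an object the GC eventually has to deal with.
class AllocationTracker {
    private final ReferenceQueue<Object> deadObjects = new ReferenceQueue<>();
    private final List<WeakReference<Object>> tracked = new ArrayList<>();

    void track(Object o) {
        // One extra allocation per tracked object: the feedback loop begins.
        tracked.add(new WeakReference<>(o, deadObjects));
    }

    int trackedCount() {
        return tracked.size();
    }
}
```

A native JVMTI agent keeps this bookkeeping outside the Java heap, which is precisely why it avoids the loop.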

Implementing Static Analysis isn’t that easy

Static application security testing (SAST) for software bugs and vulnerabilities should be part of your application security – and software quality – program. All that you need to do is run a tool and it will find bugs in the code, early in development when they are cheaper and easier to fix. Sounds easy. But it takes more than just buying a tool and running a scan – or uploading code to a testing service and having them run the scans for you. You need direct involvement and buy-in from developers, and from their managers. Because static analysis doesn't find bugs. It finds things in the code that might be bugs, and you need developers to determine which are real problems and which aren't.

This year's SANS Institute survey on Appsec Programs and Practices, which Frank Kim and I worked on, found that use of static analysis ranks towards the bottom of the list of tools and practices that organizations find useful in their appsec programs. This is because you need a real commitment from developers to make static analysis testing successful, and securing this commitment isn't easy. You're asking developers to take on extra work and extra costs, and to change how they do their jobs. Developers have to take time from their delivery schedules to understand and use the tools, and they need to understand how much time this is going to require. They need to be convinced that the problems found by the tools are worth taking time to look at and fix. They may need help or training to understand what the findings mean and how to fix them properly. They will need time to fix the problems and more time to test and make sure that they didn't break anything by accident. And they will need help with integrating static analysis into how they build and test software going forward.

Who Owns and Runs the Tools?

The first thing to decide is who in the organization owns static analysis testing: setting up and running the tools, reviewing and qualifying findings, and getting problems fixed.
Gary McGraw at Cigital explains that there are two basic models for owning and running static analysis tools. In some organizations, Infosec owns and runs the tools, and then works with developers to get problems fixed (or throws the results over the wall to developers and tells them that they have a bunch of problems that need to be fixed right away). This is what McGraw calls a "Centralized Code Review Factory". The security team can enforce consistent policies and make sure that all code is scanned regularly, and follows up to make sure that problems get fixed. This saves developers the time and trouble of having to understand the tool and setting up and running the scans, and the Infosec team can make it even easier for developers by reviewing and qualifying the findings before passing them on (filtering out false positives and things that don't look important). But developers don't have control over when the scans are run, and don't always get results when they need them. The feedback cycle may be too slow, especially for fast-moving Agile and Devops teams who rely on immediate feedback from TDD and Continuous Integration and may push out code before the scan results can even get back to them.

A more scalable approach is to make the developers directly responsible for running and using the tools. Infosec can help with setup and training, but it's up to the developers to figure out how they are going to use the tools and what they are going to fix and when. In a "Self Service" model like this, the focus is on fitting static analysis into the flow of development, so that it doesn't get in the way of developers' thinking and problem solving. This might mean adding automated scanning into Continuous Integration and Continuous Delivery toolchains, or integrating static analysis directly into developers' IDEs to help them catch problems immediately as they are coding (if this is offered with the tool).
Disciplined development and Devops teams who are already relying on automated developer testing and other quality practices shouldn't find this difficult – as long as the tools are set up correctly from the start so that they see value in what the tools find.

Getting Developers to use Static Analysis

There are a few simple patterns for adopting static analysis testing that we've used, or that I have seen in other organizations – patterns that can be followed on their own or in combinations, depending on how much software you have already written, how much time you have to get the tools in, and how big your organization is.

Drop In, Tune Out, Triage

Start with a pilot, on an important app where bugs really matter, and that developers are working on today. The pilot could be done by the security team (if they have the skills) or consultants or even the vendor with some help from development; or you could make it a special project for a smart, senior developer who understands the code, convince them that this is important and that you need their help, give them some training if they need it and assistance from the vendor, and get them to run a spike – a week or two should be enough to get started. The point of this mini-project should be to make sure that the tool is installed and set up properly (integrate it into the build, make sure that it is covering the right code), understand how it provides feedback, make sure that you got the right tool, and then make it practical for developers to use. Don't accept how the tool runs by default. Run a scan, see how long it takes to run, review the findings and focus on cutting the false positives and other noise down to a minimum. Although vendors continue to improve the speed and accuracy of static analysis tools, most static analysis tools err on the side of caution by pointing out as many potential problems as possible in order to minimize the chance of false negatives (missing a real bug).
This means a lot of noise to wade through and a lot of wasted time. If you start using SAST early in a project, this might not be too bad. But it can be a serious drag on people's time if you are working with an existing code base: depending on the language, architecture, coding style (or lack of one), the size of the code base and its age, you could end up with hundreds or thousands of warnings when you run a static analysis scan. Gary McGraw calls this the "red screen of death" – a long list of problems that developers didn't know that they had in their code yesterday, and are now told that they have to take care of today.

Not every static analysis finding needs to be fixed, or even looked at in detail. It's important to figure out what's real, what's important, and what's not, and cut the list of findings down to a manageable list of problems that are worth developers looking into and maybe fixing. Each application will require this same kind of review, and the approach to setup and tuning may be different.

A good way to reduce false positives and unimportant noise is to look at the checkers that throw off the most findings – if you're getting hundreds or thousands of the same kind of warning, it's less likely to be a serious problem (let's hope) than an inaccurate checker that is throwing off too many false positives, or unimportant lint-like nitpicky complaints that can safely be ignored for now. It is expensive and a poor use of time and money to review all of these findings – sample them, see if any of them make sense, get the developer to use their judgement and decide whether to filter them out. Turn off any rules that aren't important or useful, knowing that you may need to come back and review this later. You are making important trade-off decisions here – trade-offs that the tool vendor couldn't or wouldn't make for you. By turning off rules or checkers you may be leaving some bugs or security holes in the system.
But if you don’t get the list down to real and important problems, you run the risk of losing the development team’s cooperation altogether. Put most of your attention on what the tool considers serious problems. Every tool (that I’ve seen anyway) has a weighting or rating system on what it finds, a way to identify problems that are high risk and a confidence rating on what findings are valid. Obviously high-risk, high-confidence findings are where you should spend most of your time reviewing and the problems that probably need to be fixed first. You may not understand them all right away, why the tool is telling you that something is wrong or how to fix it correctly. But you know where to start. Cherry Picking Another kind of spike that you can run is to pick low hanging fruit. Ask a smart developer or a small team of developers to review the results and start looking for (and fixing) real bugs. Bugs that make sense to the developer, bugs in code that they have worked on or can understand without too much trouble, bugs that they know how to fix and are worth fixing. This should be easy if you’ve done a good job of setting up the tool and tuning upfront. Look for different bugs, not just one kind of bug. See how clearly the tool explains what is wrong and how to correct it. Pick a handful and fix them, make sure that you can fix things safely, and test to make sure that the fixes are correct and you didn’t break anything by accident. Then look for some more bugs, and as the developers get used to working with the tool, do some more tuning and customization. Invest enough time to for the developers to build some confidence that the tool is worth using, and to get an idea of how expensive it will be to work with going forward. 
By letting them decide what bugs to fix, you not only deliver some real value upfront and get some bugs fixed, but you also help to secure development buy-in: "see, this thing actually works!" And you will get an idea of how much it will cost to use. If it took this long for some of your best developers to understand and fix some obvious bugs, expect it to take longer for the rest of the team to understand and fix the rest of the problems. You can use this data to build up estimates of end-to-end costs, and for later trade-off decisions on what problems are or aren't worth fixing.

Bug Extermination

Another way to get started with static analysis is to decide to exterminate one kind of bug in an application, or across a portfolio. Pick the "Bug of the Month", like SQL injection – a high-risk, high-return problem. Take some time to make sure everyone understands the problem, why it needs to be fixed, and how to test for it. Then isolate the findings that relate to this problem, figure out what work is required to fix, test and deploy the fixes, and "get er done". This helps to get people focused and establish momentum. The development and testing work is simpler and lower risk because everyone is working on the same kind of problem, and everyone can learn how to take care of it properly. It creates a chance to educate everyone on how to deal with important kinds of bugs or security vulnerabilities, patch them up and hopefully stop them from occurring in the future.

Fix Forward

Reviewing and fixing static analysis findings in code that is already working may not be worth it, unless you are having serious reliability or security problems in production or need to meet some compliance requirement. And as with any change, you run the risk of introducing new problems while trying to fix old ones, making things worse instead of better. This is especially the case for code quality findings.
Bill Pugh, the father of FindBugs, did some research at Google which found that "many static warnings in working systems do not actually manifest as program failures." It can be much less expensive and much easier to convince developers to focus only on reviewing and fixing static analysis findings in new code or code that they are changing, and to leave the rest of the findings behind, at least to start. Get the team to implement a Zero Bug Tolerance program or some other kind of agreement within the development team to review and clean up as many new findings from static scans as soon as they are found – make it part of their "Definition of Done". At Intuit, they call this "No New Defects". Whatever problems the tools find should be easy to understand and cheap to fix (because developers are working on the code now, they should know it well enough to fix it) and cheap to test – this is code that needs to be tested anyway. If you are running scans often enough, there should only be a small number of problems or warnings to deal with at a time. Which means it won't cost a lot to fix the bugs, and it won't take much time – if the feedback loop is short enough and the guidance from the tool is clear enough on what's wrong and why, developers should be able to review and fix every issue that is found, not just the most serious ones. And after developers run into the same problems a few times, they will learn to avoid them and stop making the same mistakes, improving how they write code. To do this you need to be able to differentiate between existing (stale) findings and new (fresh) issues introduced with the latest check-in. Most tools have a way to do this, and some, like GrammaTech's CodeSonar, are specifically optimized to do incremental analysis. This is where fast feedback and a Self-Service approach can be especially effective.
Instead of waiting for somebody else to run a scan and pass on the results, or running ad hoc scans, try to get the results back to the people working on the code as quickly as possible. If developers can't get direct feedback in the IDE (you're running scans overnight, or on some other less frequent schedule instead), there are different ways to work with the results. You could feed static analysis findings directly into a bug tracker. Or into the team's online code review process and tools (like they do at Google), so that developers and reviewers can see the code, review comments and static analysis warnings at the same time. Or you could get someone (a security specialist or a developer) to police the results daily, prioritize them and either fix the code themselves or pass on bugs or serious warnings to whoever is working on that piece of code (depending on your Code Ownership model). It should only take a few minutes each morning – often no time at all, since nothing may have been picked up in the nightly scans. Fixing forward gets you started quicker, and you don't need to justify a separate project or even a spike to get going – it becomes just another part of how developers write and test code, another feedback loop like running unit tests. But it means that you leave behind some – maybe a lot of – unfinished business.

Come Back and Clean House

Whatever approach you take upfront – ignoring what's there and just fixing forward, or cherry picking, or exterminating one type of bug – you will have a backlog of findings that still should be reviewed and that could include real bugs which should be fixed, especially security vulnerabilities in old code. Research on "The Honeymoon Effect" shows that there can be serious security risks in leaving vulnerabilities in old code unfixed, because this gives attackers more time to find them and exploit them.
But there are advantages to waiting until later to review and fix legacy bugs – until the team has had a chance to work with the tool and understand it better, and until they have confidence in their ability to understand and fix problems safely. You need to decide what to do with these old findings. You could mark them and keep them in the tool's database. Or you could export them or re-enter them (at least the serious ones) into your defect tracking system. Then schedule another spike: get a senior developer, or a few developers, to review the remaining findings, drop the false positives, and fix, or put together a plan to fix, the problems that are left. This should be a lot easier, less expensive and safer now that the team knows how the tool works, what the findings mean, which findings aren't bugs, which bugs are easy to fix, which bugs aren't worth fixing and which bugs they should be careful with (where there may be a high chance of introducing a regression bug by trying to make the tool happy). This is also the time to revisit any early tuning decisions that you made, and see if it is worthwhile to turn some checkers or rules back on.

Act and Think for the Long Term

Don't treat static analysis testing like pen testing or some other security or quality review. Putting in static analysis might start with a software security team (if your organization is big enough to have one and they have the necessary skills) or some consultants, but your goal has to be more than just handing off a long list of tool findings to a development lead or project manager. You want to get those bugs fixed – the real ones at least. But more importantly, you want to make static analysis testing an integral part of how developers think and work going forward, whenever they are changing or fixing code, or whenever they are starting a new project.
You want developers to learn from using the tools, from the feedback and guidance that the tools offer, to write better, safer and more secure code from the beginning. In "Putting the Tools to Work: How to Succeed with Source Code Analysis", Pravir Chandra, Brian Chess and John Steven (three people who know a lot about the problem) list five keys to successfully adopting static analysis testing:

- Start small – start with a pilot, learn, get some success, then build out from there.
- Go for the throat – rather than trying to stomp out every possible problem, pick the handful of things that go wrong most often and go after them first. You'll get a big impact from a small investment.
- Appoint a champion – find developers who know about the system, who are respected and who care, sell them on using the tool, get them on your side and put them in charge.
- Measure the outcome – monitor results, see what bugs are being fixed, how fast, which bugs are coming up, where people need help.
- Make it your own – customize the tools, write your own application-specific rules. Most organizations don't get this far, but at least take time early to tune the tools to make them efficient and easy for developers to use, and to filter out as much noise as soon as possible.

Realize that all of this is going to take time, and patience. Be practical. Be flexible. Work incrementally. Plan and work for the long term. Help people to learn and change. Make it something that developers will want to use because they know it will help them do a better job. Then you've done your job.

Reference: Implementing Static Analysis isn't that easy from our JCG partner Jim Bird at the Building Real Software blog.

A Better Query Language than SQL

Leland Richardson, founder of Tech.Pro, has recently published a very interesting article about BQL, his vision of a better query language (than SQL). The deciding feature of his new language proposal is the fact that it is really a superset of SQL itself. SQL is a very rich and expressive language for querying relational databases. But it is awkward in many aspects, and a lot of people perceive it to be evolving only slowly – even if that is not true, considering the pace of SQL standards. But the standard is one thing, the implementations another – especially in the enterprise. When we blog about SQL, we're constantly surprised ourselves how awesome the PostgreSQL dialect is. But often, PostgreSQL actually just implements the standard. So there is hope that we're getting somewhere. Nonetheless, in Leland's article there are a couple of ideas worth picking up. From our point of view, these are mainly:

Flexibility in ordering the SELECT clause and the table expression

In SQL, SELECT is always the first keyword. It must be expressed before the table expression. We've shown in a previous article that this is quite confusing for many SQL users. While the existing syntax should continue to exist, it would be good to be able to inverse the SELECT clause and the table expression:

```sql
FROM table
WHERE predicate
GROUP BY columns
SELECT columns
```

Remember, the table expression contains FROM, WHERE, GROUP BY clauses, as well as vendor-specific CONNECT BY clauses and others:

```
<query specification> ::=
    SELECT [ <set quantifier> ] <select list> <table expression>
```

This language feature is already available in LINQ, by the way.

Implicit KEY JOINs

This feature is also available in jOOQ, using the ON KEY clause. Note that Sybase also supports ON KEY joins:

```sql
from post
key join user
key join comment
select *
```

Named projections

This is one of the features we really wish that the SQL language had. However, we wouldn't count on specifying projections in a dedicated syntax.
We would rather use an extension to the table expression syntax, allowing a table to produce "side-tables" as such:

```sql
from dbo.users
with projection as (
    firstName,
    lastName,
    phoneNumber,
    email
)
select projection.*
```

In the above example, projection is really nothing else than another table expression derived from the users table. In terms of SQL syntax semantics, this would be extremely powerful, because such projections would inherit all syntactic features of a regular table. We've blogged about this before, when we called that feature "common column expressions".

Conclusion

Leland has lots of other ideas. He's just at the beginning of a project that will still need a lot of refinement. The feedback he got on reddit, however, is rather good. Clearly, there is a lot of potential in creating "BQL" for SQL, what

- LESS is for CSS
- Groovy is for Java
- Xtend is for Java
- jQuery is for JavaScript

Let's see where this endeavour leads. We'll certainly be keeping an eye out for BQL's next steps.

Reference: A Better Query Language than SQL from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog.

Trigger Continuous Delivery every GitHub commit

A crucial piece of the puzzle when developing a web application is Continuous Delivery. With early access to an alpha version, testers or users can contribute to the development process, and design, requirements, architecture or performance problems can be caught much sooner. I am going to show how to set up this process using Maven and Jenkins. The target environment is hosted on Tomcat 7, and the source code is hosted on GitHub. Because I am the type of developer that tries to avoid polling as much as possible, I am also going to show how to trigger the process with GitHub's cool WebHooks feature.

1. Create the Continuous Delivery job

Creating a Jenkins job and integrating it with Maven is very easy, so I will cover this quickly:

- Create it with "New Item" -> "Build a maven2/3 project"
- Set up the GitHub URL in the "Source Code Management" section (authentication is not needed in my case, because my GitHub repository is public)
- Skip the "Build Triggers" section for now; we will come back to this later
- Configure the "Build" section with the POM path and the goals you are using for building your WAR file
- Set up "Build Settings" -> "E-Mail Notification"

Save and try to run the Jenkins job. This is a very common and basic Jenkins job configuration. Now we are about to set up deployment of the WAR file into Tomcat 7. But here a dilemma comes into play: there are two very mature ways to deploy. I will cover both and let the reader pick one.

a) Continuous Delivery with the tomcat7-maven-plugin

First of all we need to enable access to Tomcat 7. Edit $CATALINA_HOME/conf/tomcat-users.xml (CATALINA_HOME is Tomcat's home directory) and configure a role and user as follows:

```xml
<role rolename="manager-script"/>
<user username="deployer" password="===PASSWORD===" roles="manager-script"/>
```

Next, configure the Tomcat 7 credentials for Maven in the settings.xml configuration file.
This file is usually located in <user_home>/.m2:

```xml
<server>
  <id>tomcat-server-alpha</id>
  <username>deployer</username>
  <password>===PASSWORD===</password>
</server>
```

Then set up the tomcat7-maven-plugin in pom.xml:

```xml
<plugin>
  <groupId>org.apache.tomcat.maven</groupId>
  <artifactId>tomcat7-maven-plugin</artifactId>
  <version>2.2</version>
  <configuration>
    <url>http://[JENKINS-URL]/manager/text</url>
    <server>tomcat-server-alpha</server>
    <path>/[TOMCAT-DEPLOY-PATH]</path>
  </configuration>
</plugin>
```

Lastly, add the additional Maven goal "tomcat7:redeploy" to the Jenkins job.

b) Continuous Delivery with the Jenkins Deploy plugin

- Install the Jenkins Deploy plugin
- In the Jenkins job that builds the WAR, configure "Add post-build action" -> "Deploy ear/war to container"

2. Jenkins – GitHub integration

A blocking requirement here is to have the Jenkins server accessible from the web. If you can't do that for whatever reason, you must stick with polling the source control from Jenkins.

- Install the GitHub plugin in Jenkins
- Generate a Personal access token in GitHub for Jenkins. This can be found under "Edit Your Profile" -> "Applications"
- Set up the GitHub plugin to use the generated token in Jenkins. You can find this section in "Manage Jenkins" -> "Configure System" -> "GitHub Web Hook". Note that you don't need to use a password. The API URL is "https://api.github.com"
- Create the WebHook in GitHub. Open the repository -> "Settings" -> "Webhooks & Services" -> "Create Webhook". Use the Jenkins URL with the suffix "/github-webhook". Jenkins updates hooks automatically as you configure jobs, so it is not necessary to create a GitHub hook for each Jenkins job
- After creation you can test the webhook via the three dots in "Recent Deliveries". HTTP status code "302 Found" means that it is working fine (even though GitHub highlights it with an exclamation mark)
- Finally, enable GitHub triggering in the Jenkins job

That's it. A GitHub commit should now cause a deploy to Tomcat.
Resources: Jenkins GitHub plugin documentation, Codehaus Tomcat Maven plugin documentation. Reference: Trigger Continuous Delivery every GitHub commit from our JCG partner Lubos Krnac at the Lubos Krnac Java blog....

Simplifying ReadWriteLock with Java 8 and lambdas

Considering legacy Java code, no matter where you look, Java 8 with lambda expressions can definitely improve quality and readability. Today let us look at ReadWriteLock and how we can make using it simpler. Suppose we have a class called Buffer that remembers the last couple of messages in a queue, counting and discarding old ones. The implementation is quite straightforward:             public class Buffer { private final int capacity; private final Deque<String> recent; private int discarded; public Buffer(int capacity) { this.capacity = capacity; this.recent = new ArrayDeque<>(capacity); } public void putItem(String item) { while (recent.size() >= capacity) { recent.removeFirst(); ++discarded; } recent.addLast(item); } public List<String> getRecent() { final ArrayList<String> result = new ArrayList<>(); result.addAll(recent); return result; } public int getDiscardedCount() { return discarded; } public int getTotal() { return discarded + recent.size(); } public void flush() { discarded += recent.size(); recent.clear(); } } Now we can call putItem() many times, but the internal recent queue will only keep the last capacity elements. However it also remembers how many items it had to discard, to avoid a memory leak. This class works fine, but only in a single-threaded environment. We use the non-thread-safe ArrayDeque and an unsynchronized int. While reads and writes of an int are atomic, changes are not guaranteed to be visible across threads. And even if we used a thread-safe BlockingDeque together with an AtomicInteger, we would still be in danger of a race condition, because those two variables aren’t synchronized with each other. One approach would be to synchronize all the methods, but that seems quite restrictive. Moreover we suspect that reads greatly outnumber writes. In such cases ReadWriteLock is a fantastic alternative. It actually consists of two locks – one for reading and one for writing.
In reality they both compete for the same lock, which can be obtained either by one writer or by multiple readers at the same time. So we can have concurrent reads when no one is writing, and only occasionally a writer blocks all readers. Using synchronized would always block everyone else, no matter what they do. The sad part of ReadWriteLock is that it introduces a lot of boilerplate. You have to explicitly acquire a lock and remember to unlock() it in a finally block. Our implementation becomes hard to read: public class Buffer { private final int capacity; private final Deque<String> recent; private int discarded; private final Lock readLock; private final Lock writeLock; public Buffer(int capacity) { this.capacity = capacity; recent = new ArrayDeque<>(capacity); final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock(); readLock = rwLock.readLock(); writeLock = rwLock.writeLock(); } public void putItem(String item) { writeLock.lock(); try { while (recent.size() >= capacity) { recent.removeFirst(); ++discarded; } recent.addLast(item); } finally { writeLock.unlock(); } } public List<String> getRecent() { readLock.lock(); try { final ArrayList<String> result = new ArrayList<>(); result.addAll(recent); return result; } finally { readLock.unlock(); } } public int getDiscardedCount() { readLock.lock(); try { return discarded; } finally { readLock.unlock(); } } public int getTotal() { readLock.lock(); try { return discarded + recent.size(); } finally { readLock.unlock(); } } public void flush() { writeLock.lock(); try { discarded += recent.size(); recent.clear(); } finally { writeLock.unlock(); } } } This is how it was done pre-Java 8. Effective, safe and… ugly.
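For comparison, the "synchronize all the methods" approach dismissed earlier needs far less ceremony, at the price of readers blocking each other too. A minimal sketch (SynchronizedBuffer is a name chosen here, not from the original post):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Hypothetical alternative: every method holds the same monitor,
// so even two concurrent readers block each other.
public class SynchronizedBuffer {

    private final int capacity;
    private final Deque<String> recent;
    private int discarded;

    public SynchronizedBuffer(int capacity) {
        this.capacity = capacity;
        this.recent = new ArrayDeque<>(capacity);
    }

    public synchronized void putItem(String item) {
        while (recent.size() >= capacity) {
            recent.removeFirst();
            ++discarded;
        }
        recent.addLast(item);
    }

    public synchronized List<String> getRecent() {
        return new ArrayList<>(recent);
    }

    public synchronized int getDiscardedCount() {
        return discarded;
    }

    public synchronized int getTotal() {
        return discarded + recent.size();
    }

    public synchronized void flush() {
        discarded += recent.size();
        recent.clear();
    }
}
```

This is thread-safe and short, but a slow getRecent() call stalls every other reader as well, which is exactly what the read/write split avoids.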
However with lambda expressions we can wrap cross-cutting concerns in a utility class like this: public class FunctionalReadWriteLock { private final Lock readLock; private final Lock writeLock; public FunctionalReadWriteLock() { this(new ReentrantReadWriteLock()); } public FunctionalReadWriteLock(ReadWriteLock lock) { readLock = lock.readLock(); writeLock = lock.writeLock(); } public <T> T read(Supplier<T> block) { readLock.lock(); try { return block.get(); } finally { readLock.unlock(); } } public void read(Runnable block) { readLock.lock(); try { block.run(); } finally { readLock.unlock(); } } public <T> T write(Supplier<T> block) { writeLock.lock(); try { return block.get(); } finally { writeLock.unlock(); } } public void write(Runnable block) { writeLock.lock(); try { block.run(); } finally { writeLock.unlock(); } } } As you can see we wrap a ReadWriteLock and provide a set of utility methods to work with. In principle we would like to pass a Runnable or a Supplier<T> (an interface with a single T get() method) and make sure calling it is surrounded by the proper lock.
We could write the exact same wrapper class without lambdas, but having them greatly simplifies client code: public class Buffer { private final int capacity; private final Deque<String> recent; private int discarded; private final FunctionalReadWriteLock guard; public Buffer(int capacity) { this.capacity = capacity; recent = new ArrayDeque<>(capacity); guard = new FunctionalReadWriteLock(); } public void putItem(String item) { guard.write(() -> { while (recent.size() >= capacity) { recent.removeFirst(); ++discarded; } recent.addLast(item); }); } public List<String> getRecent() { return guard.read(() -> { return recent.stream().collect(toList()); }); } public int getDiscardedCount() { return guard.read(() -> discarded); } public int getTotal() { return guard.read(() -> discarded + recent.size()); } public void flush() { guard.write(() -> { discarded += recent.size(); recent.clear(); }); } } See how we invoke guard.read() and guard.write(), passing pieces of code that should be guarded? Looks quite neat. BTW have you noticed how we can turn any collection into any other collection (here: Deque into List) using stream()? Now if we extract a couple of internal methods, we can use method references to simplify the lambdas even further: public void flush() { guard.write(this::unsafeFlush); } private void unsafeFlush() { discarded += recent.size(); recent.clear(); } public List<String> getRecent() { return guard.read(this::defensiveCopyOfRecent); } private List<String> defensiveCopyOfRecent() { return recent.stream().collect(toList()); } This is just one of the many ways you can improve existing code and libraries by taking advantage of lambda expressions. We should be really happy that they finally made their way into the Java language – while having already been present in dozens of other JVM languages.Reference: Simplifying ReadWriteLock with Java 8 and lambdas from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog....

ClojureScript Routing and Templating with Secretary and Enfocus

A good while ago I was looking for good ways to do client-side routing and templating in ClojureScript. I investigated using a bunch of JavaScript frameworks from ClojureScript, of which Angular probably gave the most promising results but still felt a bit dirty and heavy. I even implemented my own routing/templating mechanism based on Pedestal and goog.History, but something still felt wrong. Things have changed and today there’s a lot of buzz about React-based libraries like Reagent and Om. I suspect that React on the front with a bunch of “native” ClojureScript libraries may be a better way to go. Before I get there though, I want to revisit routing and templating. Let’s see how we can marry together two nice libraries: Secretary for routing and Enfocus for templating. Let’s say our app has two screens which fill the entire page. There are no separate “fragments” to compose the page from yet. We want to see one page when we navigate to /#/add and another at /#/browse. The “browse” page will be a little bit more advanced and support path parameters. For example, for /#/browse/Stuff we want to parse out “Stuff” and display a header with this word. The main HTML could look like: <!DOCTYPE html> <html> <body> <div class="container-fluid"> <div id="view">Loading...</div> </div><script src="js/main.js"></script> </body> </html> Then we have two templates. add.html: <h1>Add things</h1> <form> <!-- boring, omitted --> </form> browse.html: <h1></h1> <div> <!-- boring, omitted --> </div> Now, all we want to do is to fill the #view element on the main page with one of the templates when the location changes. The complete code for this is below.
(ns my.main (:require [secretary.core :as secretary :include-macros true :refer [defroute]] [goog.events :as events] [enfocus.core :as ef]) (:require-macros [enfocus.macros :as em]) (:import goog.History goog.History.EventType))(em/deftemplate view-add "templates/add.html" [])(em/deftemplate view-browse "templates/browse.html" [category] ["h1"] (ef/content category))(defroute "/" [] (.setToken (History.) "/add"))(defroute "/add" [] (em/wait-for-load (ef/at ["#view"] (ef/content (view-add)))))(defroute "/browse/:category" [category] (em/wait-for-load (ef/at ["#view"] (ef/content (view-browse category)))))(doto (History.) (goog.events/listen EventType/NAVIGATE #(secretary/dispatch! (.-token %))) (.setEnabled true)) What’s going on? We define two Enfocus templates. view-add is trivial and simply returns the entire template. view-browse is a bit more interesting: given a category name, it alters the template by replacing the content of the h1 tag with the category name. Then we define Secretary routes to actually use those templates. All they do for now is replace the content of the #view element with the template. In the case of the “browse” route, it passes the category name parsed from the path to the template. There is a default route that redirects from / to /add. It doesn’t lead to example.com/add, but only sets the fragment: example.com/#/add. Finally, we plug Secretary into goog.History. I’m not sure why it’s not in the box, but it’s straightforward enough. Note that in the history handler there is the em/wait-for-load call. It’s necessary for Enfocus if you load templates with AJAX calls. That’s it, very simple and straightforward. Update: Fixed placement of em/wait-for-load, many thanks to Adrian!Reference: ClojureScript Routing and Templating with Secretary and Enfocus from our JCG partner Konrad Garus at the Squirrel’s blog....

Getting JUnit Test Names Right

Finding good names is one of the challenges of crafting software. And you need to find them all the time and for everything – classes, methods, variables, just to name a few. But what makes a name a good name? To quote Uncle Bob: ‘Three things: Readability, readability, and readability!’ Which he later defines by clarity, simplicity and density of expression1. Though this makes sense to me, I watch myself struggling in particular with test method naming a bit. To better understand what I am talking about, one needs to know that I write my code test driven. And doing this for a while, I gradually changed my focus of work from the unit under test more to the test itself. This is probably because I like to think of a test case as a living specification and quality assurance in one piece, and hence as vitally important2. So whenever a test breaks, ideally I would be able to recognize at a glance what specification was broken and why. And the best way to achieve this seems to be by finding an expressive test name, because this is the first information displayed in the reporting view:Seen from this angle I am not always happy with what shows up in this view, so I spent a bit of time on research to see what school of thought might be helpful. Unfortunately most of the results I found were somewhat dated and – less surprisingly – the opinions on this topic are divided. This post represents my reflections based on those findings and a bit of personal experience. Tests per Method- or Behavior Test-Names? In its pure form the tests per method approach is often provided by tools that e.g. generate a single test stub after the fact. In case you have a class Foo with the method bar, the generated method would be called testBar.
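For illustration, such a generated, tests-per-method stub might look like this (a sketch: Foo and testBar are the hypothetical names from the paragraph above, and the JUnit @Test annotation is omitted to keep the snippet dependency-free):

```java
// Tests-per-method style: one testXxx method per production method,
// named after the method rather than after an expected behavior.
public class FooTest {

    // Hypothetical unit under test
    static class Foo {
        String bar() {
            return "bar";
        }
    }

    public void testBar() {
        Foo foo = new Foo();
        // asserts on the method, not on a described behavior
        if (!"bar".equals(foo.bar())) {
            throw new AssertionError("bar() should return \"bar\"");
        }
    }
}
```

The name testBar tells us which method is exercised, but nothing about which behavior of that method just broke.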
I was always sceptical about the usefulness of such a development style or naming convention and would have argued like this quote from an old JavaRanch thread: ‘you shouldn’t think about it as testing methods at all, you should think about it as testing behavior of the class. Consequently, I like my test method names to communicate the expected behavior’3. Interestingly enough I am about to change my opinion a bit on that one. The idea of communicating the ‘behavior’ as stated above requires finding a concise name that expresses this ‘behavior’ comprehensively. But then the term behavior implies a transition from one state to another conducted by an action, or, denoted in BDD terms, for example a Given-When-Then pattern. Honestly, I do not think that it is in general a good idea to put all this information in a single name4: @Test public void givenIsVisibleAndEnabledWhenClickThenListenerIsNotified() {} @Test public void givenIsVisibleAndNotEnabledWhenClickThenListenerIsNotNotified() {} @Test public void givenIsNotVisibleAndEnabledWhenClickThenListenerIsNotNotified() {} Maybe it’s just a question of taste, but from my experience this approach often lacks readability due to the absence of simplicity and/or clarity, no matter what kind of format style I choose. Furthermore such overloaded names tend to have the same problem as comments – the names easily get out of date as the content evolves. Because of this I would rather go with the BUILD-OPERATE-CHECK5 pattern instead. This would allow splitting the phases into separate sub-method names placed within a single test: @Test public void testNameHasStillToBeFound() { // do what is needed to match precondition givenIsVisibleAndEnabled();// execute the transition whenClick();// verify the expected outcome thenListenerIsNotified(); } Unfortunately this leads us to where we started. But if you take a closer look at the examples above, all the methods group around a common denominator.
They all belong to the same action that fires the transition: in our case, the click event. Considering that from the development-process point of view I regard a test case as more important than the unit under test, one could interpret this as a sign to reflect the action in an appropriate method name of the unit under development6. So for the sake of example assume we have a ClickAction that wraps around a UI control. Introducing a method called ClickAction#execute() might seem appropriate to us, given the situation above. As simplicity matters, we could use that name also for the test method that represents the transition from the default state of the ClickAction – control construct via ClickAction#execute(): class ClickActionTest {@Test public void execute() { Control control = mock( Control.class ); ClickAction clickAction = new ClickAction( control );clickAction.execute();verify( control ).notifyListeners(...) } } To keep things simple, the next test name may mention only the state information that matters, as it differs from the default and leads to another outcome: class ClickActionTest {[...]@Test public void executeOnDisabledControl() { Control control = mock( Control.class ); when( control.isEnabled() ).thenReturn( false ); ClickAction clickAction = new ClickAction( control );clickAction.execute();verify( control, never() ).notifyListeners(...) }@Test public void executeOnInvisibleControl() { [...] } As you can see, this approach results in a set of test names that, technically speaking, represents a variety of the ‘tests per method’ pattern – but not for completely bad reasons, I think. Given the context, I consider this naming pattern simple, clear and expressive up to one point: the expected test outcome is still not mentioned at all. At first glance this looks unsatisfactory, but from my current point of view I am willing to accept this as a sound trade-off.
Especially as the cause for a failing test is usually also shown in the JUnit reporting view. Because of this, that problem can be handled by providing meaningful test failures7. Conclusion I have been using the test naming pattern described above for some time now. So far it has worked out reasonably well. In particular when working with pretty small units, as I usually do, there is little room for misinterpretation. However this approach does not match all cases, and sometimes it simply feels better, and is still readable enough, to mention the outcome. I will not harp on about principles here, and maybe I am getting it all wrong. So I would be happy about any pointers to more elaborate approaches that you might be aware of to broaden my point of view.Robert C. Martin about clean tests, Clean Code, Chapter 9 Unit Tests ↩ What would be worse: losing the unit under test or the test case? With a good test case, restoring the unit should be straightforward most of the time; vice versa, however, you could easily miss one of the corner cases that were specified in the lost test case ↩ Naming convention for methods with JUnit, Naming convention for methods with JUnit ↩ To prevent misunderstandings: BDD does nothing of the like and comes with its own testing framework. I just mentioned it here as the term ‘behavior’ seems to suggest it and the term ‘givenWhenThen’ floats around in many discussions about test names. However you can actually find proposals like Roy Osherove’s naming convention labelled ‘UnitOfWork_StateUnderTest_ExpectedBehavior’ that still seem to be well accepted, albeit the post has seen most of the days of the last decade ↩ Robert C. Martin, Clean Code, Chapter 9, Clean Tests ↩ Or even to extract the whole functionality into a separate class. But this case is described in my post More Units with MoreUnit ↩ Which is probably a topic of its own, and as I have to come to an end, I leave it at that! 
↩Reference: Getting JUnit Test Names Right from our JCG partner Frank Appel at the Code Affine blog....

Java 8 Friday Goodies: Lean Concurrency

At Data Geekery, we love Java. And as we’re really into jOOQ’s fluent API and query DSL, we’re absolutely thrilled about what Java 8 will bring to our ecosystem. We have blogged a couple of times about some nice Java 8 goodies, and now we feel it’s time to start a new blog series, the… Java 8 Friday Every Friday, we’re showing you a couple of nice new tutorial-style Java 8 features, which take advantage of lambda expressions, extension methods, and other great stuff. You’ll find the source code on GitHub.   Java 8 Goodie: Lean Concurrency Someone once said that (unfortunately, we don’t have the source anymore): Junior programmers think concurrency is hard. Experienced programmers think concurrency is easy. Senior programmers think concurrency is hard. That is quite true. But on the bright side, Java 8 will at least improve things by making it easier to write concurrent code with lambdas and the many improved APIs. Let’s have a closer look: Java 8 improving on JDK 1.0 API java.lang.Thread has been around from the very beginning in JDK 1.0. So has java.lang.Runnable, which is going to be annotated with FunctionalInterface in Java 8. It is almost a no-brainer how we can finally submit Runnables to a Thread from now on. Let’s assume we have a long-running operation: public static int longOperation() { System.out.println("Running on thread #" + Thread.currentThread().getId());// [...] return 42; } We can then pass this operation to Threads in various ways, e.g. Thread[] threads = {// Pass a lambda to a thread new Thread(() -> { longOperation(); }),// Pass a method reference to a thread new Thread(ThreadGoodies::longOperation) };// Start all threads Arrays.stream(threads).forEach(Thread::start);// Join all threads Arrays.stream(threads).forEach(t -> { try { t.join(); } catch (InterruptedException ignore) {} }); As we’ve mentioned in our previous blog post, it’s a shame that lambda expressions did not find a lean way to work around checked exceptions. 
None of the newly added functional interfaces in the java.util.function package allow for throwing checked exceptions, leaving the work up to the call-site. In our last post, we’ve thus published jOOλ (also jOOL, jOO-Lambda), which wraps each one of the JDK’s functional interfaces in an equivalent functional interface that allows for throwing checked exceptions. This is particularly useful with old JDK APIs, such as JDBC, or the above Thread API. With jOOλ, we can then write:         // Join all threads Arrays.stream(threads).forEach(Unchecked.consumer( t -> t.join() )); Java 8 improving on Java 5 API Java’s multi-threading APIs had been pretty dormant up until the release of Java 5’s awesome ExecutorService. Managing threads had been a burden, and people needed external libraries or a J2EE / JEE container to manage thread pools. This has gotten a lot easier with Java 5. We can now submit a Runnable or a Callable to an ExecutorService, which manages its own thread-pool. Here’s an example of how we can leverage these Java 5 concurrency APIs in Java 8: ExecutorService service = Executors .newFixedThreadPool(5);Future[] answers = { service.submit(() -> longOperation()), service.submit(ThreadGoodies::longOperation) };Arrays.stream(answers).forEach(Unchecked.consumer( f -> System.out.println(f.get()) )); Note how we again use an UncheckedConsumer from jOOλ to wrap the checked exception thrown from the get() call in a RuntimeException. Parallelism and ForkJoinPool in Java 8 Now, the Java 8 Streams API changes a lot of things in terms of concurrency and parallelism. In Java 8, you can write the following, for instance: Arrays.stream(new int[]{ 1, 2, 3, 4, 5, 6 }) .parallel() .max() .ifPresent(System.out::println); While it isn’t necessary in this particular case, it’s still interesting to see that the mere calling of parallel() will run the ...
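The parallel stream snippet above, expanded into a self-contained program (a sketch using only plain JDK APIs, so no jOOλ is needed here; ParallelGoodies is a name chosen for this example):

```java
import java.util.Arrays;

public class ParallelGoodies {

    // parallel() hands the work to the common ForkJoinPool;
    // max() on an IntStream returns an OptionalInt
    public static int parallelMax(int[] values) {
        return Arrays.stream(values)
                     .parallel()
                     .max()
                     .orElseThrow(IllegalArgumentException::new);
    }

    public static void main(String[] args) {
        System.out.println(parallelMax(new int[]{ 1, 2, 3, 4, 5, 6 })); // prints 6
    }
}
```

For six elements the parallel split costs more than it saves, of course; the point is only how little code is needed to opt in to parallelism.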

Go for Java Programmers: Control Structures

Go (Golang) has only three basic control structures.  However, each of them is significantly more versatile than its Java counterpart. If The if statement should be immediately recognizable.  In its basic form it appears the same as in Java, just without parentheses around the condition:         ... if err != nil { fmt.Println("An error occurred!") } ... However, Go’s version differs from Java’s if in two regards.  First, when the body of an if statement is only a single line, Java allows you to omit the opening and closing braces:... if(inputString == null) return "null"; ... Go, on the other hand, has some strong opinions on well-formatted and readable code (more on this to come in the next article).  A Go if statement requires an opening and closing brace no matter what (note also that Go strings cannot be null; the closest check is against the empty string):... if inputString == "" { return "null" } ... Secondly, Go makes up for this restriction with a flexible initialization feature. Inside of the if declaration itself, you can place an initialization statement just before the condition: ... if err := os.Remove("data.csv"); err != nil { fmt.Println("Unable to delete the file 'data.csv'") } ... Here, the if statement performs a file delete operation, returning an error variable that’s populated if the delete failed.  The condition then checks this value, and enters the if block if the condition is true. Why do it this way, as opposed to simply performing the file delete operation on a separate line above the if statement?  The key feature of this approach is that variables created in an initialization statement have scope limited to the if block. The example snippet above is a very common pattern in Go, and a later article in this series will deal with error handling more thoroughly.  However, the point is that this initialization construct can create any variable that you need to have in-scope prior to the if condition, yet out-of-scope after the if block. for A basic for loop in Go is likewise similar to its Java counterpart.  
It contains an initialization statement, a condition to be checked before each iteration, and code to be executed following each iteration.  The only difference is that there are no parentheses surrounding these three things: ... for index := 0; index < 10; index++ { fmt.Printf("index == %d", index) } ... However the for loop has two other forms besides this basic version.  Go does NOT have the while or do loop control structures found in Java, so the other for loop forms cover the same functionality in their place. If you write a for statement with only a condition, and no initialization or post-iteration code, then it is essentially a Java while loop: ... for len(slice) < maxSize { slice = append(slice, getNextItemToAdd()) } ... Of course, you could do something similar with a Java for loop, by inserting empty statements for the initialization and/or post-iteration parts.  However, the Java version always requires three parts, strictly speaking.  You have to insert semicolons for the parts you wish to fill with empty statements.  In Go, the condition-only construct is a distinct type of for loop, and no placeholder semicolons are necessary. In the third for loop form, you can omit even the condition.  This creates an infinite loop, requiring a break statement to terminate at some point.... consoleReader := bufio.NewReader(os.Stdin) for { command, _, err := consoleReader.ReadLine() if err != nil || string(command) == "exit" { // The user typed an "exit" command (or something went wrong). Time to exit this loop! break }// do something based on the command typed} ... The bare for loop is the Go equivalent to Java’s:... while(true) { String command = System.console().readLine(); if(command.equals("exit")) { break; }// do something based on the command typed} ... switch In Java, a switch statement executes one or more blocks of code, based on the value of some variable.  
For most of Java’s history, you could only switch on primitive or enum type variables, although Java 7 finally added support for String as a switch type.... switch(deviceType) { case "PHONE" : renderForPhone(); break; case "TABLET" : renderForTablet(); break; case "DESKTOP" : renderForDesktop(); break; default : System.out.println("Unrecognized device: " + deviceType); } ... In Go, a switch statement does not merely compare a variable to a list of possible matches.  Rather, a Go switch tests multiple free-form conditions.  The conditions may be completely unrelated to each other, and can each be as complex as you like.  The first condition that passes will have its corresponding code executed:... switch { case customerStatus == "DELINQUENT" : rejectOrder() case orderAmount > 1000 || customerStatus == "GOLD_LEVEL" : processOrderHighPriority() case orderAmount > 500 : processOrderMediumPriority() default : processOrder() } Aside from flexible case conditions, the other key difference from Java’s switch is that Go’s version does NOT fall through.  In Java, a switch statement might execute multiple case blocks.  If you do not put a break statement at the end of a case block, then execution falls through into the next case block as well, without its condition even being checked. With Go, a switch statement is basically just a cleaner way of writing a messy series of if-else blocks. Conclusion The if, for, and switch keywords are extremely similar to their Java counterparts.  However, each of them offers extended flexibility to help you write more clear and concise logic.  In the next article of the Go for Java Programmers series, we’ll look at Go’s particular rules for well-formatted code.Reference: Go for Java Programmers: Control Structures from our JCG partner Steve Perkins at the steveperkins.net blog....

How you can benefit from Groovy Shell

This is a post about the Groovy Shell and how it can help you with your daily work (as long as you are working as a software developer). You can benefit from the Groovy Shell no matter what programming language(s) or technologies you are using. The only real requirement is that you are able to write (and read) small pieces of Groovy code. Getting started I think the purpose of the Groovy shell is best described by the official documentation:     The Groovy Shell, aka. groovysh is a command-line application which allows easy access to evaluate Groovy expressions, define classes and run simple experiments. The Groovy Shell is included in the distribution of the Groovy Programming language and can be found in <groovy home>/bin. To start the Groovy Shell simply run groovysh from the command line: GROOVY_HOME\bin>groovysh Groovy Shell (2.2.2, JVM: 1.7.0) Type 'help' or '\h' for help. -------------------------------------------------------------------- groovy:000> Within the shell you can now run Groovy commands: groovy:000> println("hu?") hu? ===> null groovy:000> It supports variables and multi-line statements: groovy:000> foo = 42 ===> 42 groovy:000> baz = { groovy:001> return 42 * 2 groovy:002> } ===> groovysh_evaluate$_run_closure1@3c661f99 groovy:000> baz(foo) ===> 84 groovy:000> (Note that you have to skip the def keyword in order to use variables and closures later) A few words for Windows users I can clearly recommend Console(2), which is a small wrapper around the awkward cmd window. It provides Tab support, better text selection and other useful things. Unfortunately the Groovy 2.2.0 shell has a problem with arrow keys on Windows 7/8 in some locales (including German). However, you can use CTRL-P and CTRL-N instead of UP and DOWN. As an alternative you can use the shell of an older Groovy version (groovysh from Groovy 2.1.9 works fine). So, for what can we use it? The most obvious thing we can do is evaluate Groovy code.
This is especially useful if you are working on applications that make use of Groovy. Maybe you know you can use the << operator to add elements to lists, but you are not sure if the operator works the same for maps? In this case, you can start googling or look it up in the documentation. Or you can just type it into Groovy Shell and see if it works: groovy:000> [a:1] << [b:2] ===> {a=1, b=2} It works! You are not sure if you can iterate over enum values? groovy:000> enum Day { Mo, Tu, We } ===> true groovy:000> Day.each { println it } Mo Tu We ===> class Day It works too! It is a Calculator! The Groovy Shell can be used for simple mathematical calculations: groovy:000> 40 + 2 ===> 42 groovy:000> groovy:000> 123456789123456789 * 123456789123456789123456789 ===> 15241578780673678530864199515622620750190521 groovy:000> groovy:000> 2 ** 1024 ===> 179769313486231590772930519078902473361797697894230657273430081157732675805500963132708477322407536021120113879871393357658789768814416622492847430639474124377767893424865485276302219601246094119453082952085005768838150682342462881473913110540827237163350510684586298239947245938479716304835356329624224137216 groovy:000> As you can see Groovy can work well with numbers that would cause overflows in other programming languages. Groovy uses BigInteger and BigDecimal for these computations. By the way, you can verify this yourself very quickly: groovy:000> (2 ** 1024).getClass() ===> class java.math.BigInteger Note that Groovy math tries to be as natural as possible: groovy:000> 3/2 ===> 1.5 groovy:000> 1.1+0.1 ===> 1.2 In Java these computations would result in 1 ( integer division) and 1.2000000000000002 ( floating point arithmetic). Do more Maybe you need the content of a certain web page? 
This can be easily accomplished with Groovy: groovy:000> "http://groovy.codehaus.org".toURL().text ===> <!DOCTYPE html> <html> <head>     <meta charset="utf-8"/>     <meta http-equiv="content-type" content="text/html; charset=utf-8"/>     <meta name="description" content="Groovy Wiki"/>     ... Maybe you only want the <meta> tags for some reason? groovy:000> "http://groovy.codehaus.org".toURL().eachLine { if (it.contains('<meta')) println it }     <meta charset="utf-8"/>     <meta http-equiv="content-type" content="text/html; charset=utf-8"/>     <meta name="description" content="Groovy Wiki"/>     <meta name="keywords"     <meta name="author" content="Codehaus Groovy Community"/> ===> null I am sure you have been in a situation where you needed the URL-encoded version of some text: groovy:000> URLEncoder.encode("foo=bar") ===> foo%3Dbar Of course you do not need to remember the exact class and method names. Just type in the first characters and then press Tab to get the possible options: groovy:000> URL URL                       URLClassLoader            URLConnection             URLDecoder                URLEncoder URLStreamHandler          URLStreamHandlerFactory It works with methods too: groovy:000> URLEncoder.e each(            eachWithIndex(   encode(          every(           every() Customize it To truly benefit from the Groovy Shell you should customize it to your needs and provide functions that help you in your daily work. For this you can add your custom Groovy code to $HOME/.groovy/groovysh.profile (just create the file if it does not exist). This file is loaded and executed when groovysh starts. Let’s assume you want to decode a piece of Base64 encoded text. A viable approach is to start googling for an online Base64 decoder.
An alternative is to add a few lines to your groovysh.profile to accomplish the job:

encodeBase64 = { str ->
  return str.bytes.encodeBase64().toString()
}

decodeBase64 = { str ->
  return new String(str.decodeBase64())
}

Now you can use the encodeBase64() and decodeBase64() functions within the Groovy Shell to do the job:

groovy:000> encoded = encodeBase64('test')
===> dGVzdA==
groovy:000> decodeBase64(encoded)
===> test

This approach might be a bit slower the first time you use it, but you will benefit from it the next time you need to encode or decode a Base64 message. Note that autocompletion also works on your own methods, so you do not need to remember the exact names.

Another example function that can be useful from time to time is one that computes the MD5 hash of a passed string. We can use Java's MessageDigest class to accomplish this task in Groovy:

import java.security.MessageDigest

md5 = { str ->
  // thanks to https://gist.github.com/ikarius/299062
  MessageDigest digest = MessageDigest.getInstance("MD5")
  digest.update(str.bytes)
  return new BigInteger(1, digest.digest()).toString(16).padLeft(32, '0')
}

To compute an MD5 hash we then just have to call the md5() function:

groovy:000> md5('test')
===> 098f6bcd4621d373cade4e832627b4f6

But what if we want to compute the MD5 value of a file? If the file is not that large, getting its content is as simple as this:

new File('test.txt').text

We just have to pass this to the md5() function to compute the MD5 hash of the file:

groovy:000> md5(new File('test.txt').text)
===> a4ba431c56925ce98ff04fa7d51a89bf

Maybe you are working a lot with dates and times. In this case it can be useful to add Joda-Time support to your Groovy Shell. Just add the following lines to groovysh.profile:

@Grab('joda-time:joda-time:2.3')
import org.joda.time.DateTime

The next time you run groovysh, Joda-Time will be downloaded using Grape.
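One caveat on the file-hashing trick above before moving on: md5(new File('test.txt').text) reads the entire file into memory first. If a file is too large for that, the same MessageDigest API can consume it in chunks. A plain-Java sketch (the class name and buffer size are my own choices):

```java
import java.io.IOException;
import java.io.InputStream;
import java.math.BigInteger;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Md5File {
    // Feed the digest in fixed-size chunks so the whole file
    // never has to fit into memory at once.
    static String md5(Path file) throws IOException, NoSuchAlgorithmException {
        MessageDigest digest = MessageDigest.getInstance("MD5");
        byte[] buffer = new byte[8192];
        try (InputStream in = Files.newInputStream(file)) {
            int read;
            while ((read = in.read(buffer)) != -1) {
                digest.update(buffer, 0, read);
            }
        }
        // Same hex formatting as the Groovy closure's padLeft(32, '0')
        return String.format("%032x", new BigInteger(1, digest.digest()));
    }
}
```

For a file containing just the text "test" this produces the same hash as md5('test') above.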
Additionally, the Joda DateTime class is imported, so it can be used in the Groovy Shell without prefixing the package name:

groovy:000> new DateTime().plusDays(42)
===> 2014-04-22T22:27:20.860+02:00

Do you commonly need to convert time values to and from Unix timestamps? Just add two simple functions and you no longer need your bookmark for an online converter:

import java.text.SimpleDateFormat

dateFormat = new SimpleDateFormat('yyyy-MM-dd HH:mm:ss')

toUnixTimestamp = { str ->
  return dateFormat.parse(str).getTime() / 1000
}

fromUnixTimestamp = { timestamp ->
  return dateFormat.format(new Date(timestamp.toLong() * 1000))
}

Usage in the Groovy Shell:

groovy:000> toUnixTimestamp('2014-04-15 12:30:00')
===> 1397557800
groovy:000> fromUnixTimestamp('1397557800')
===> 2014-04-15 12:30:00

Maybe you want to execute a command on a remote machine? You only need another simple function to accomplish this task in the Groovy Shell:

ssh = { cmd ->
  def proc = "ssh -i keyfile user@host $cmd".execute()
  proc.waitFor()
  println "return code: ${proc.exitValue()}"
  println "stderr: ${proc.err.text}"
  println "stdout: ${proc.in.text}"
}

Usage:

groovy:000> ssh 'ls -l'
return code: 0
stderr:
stdout: total 1234
-rw-r--r-- 1 foo foo 7678563 Oct 28  2009 file
drwxr-xr-x 4 foo foo    4096 Mar  1 17:07 folder
-rw-r--r-- 1 foo foo      19 Feb 27 22:19 bar
...

In case you did not know: in Groovy you can skip the parentheses when calling a function with one or more parameters, so ssh 'ls -l' is the same as ssh('ls -l').

Conclusion

Before I switched to the Groovy Shell, I used the Python shell for nearly the same reasons (even though I was not working with Python at all). Within the last year I used a lot of Groovy, and I quickly discovered that the Groovy Web Console is a very valuable tool for testing and prototyping. For me, the Groovy Shell replaced both tools. It is clearly a development tool I do not want to miss.
I think it is really up to you how much you let the Groovy Shell help you.

Reference: How you can benefit from Groovy Shell from our JCG partner Michael Scharhag at the mscharhag, Programming and Stuff blog.
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use
