


5 things I learned from a Hacker attack

On Friday evening I got an e-mail from my provider. They told me my webspace had been the subject of a hacker attack and that they would shut it down until they had analysed its root cause. There was no more information, and the only thing I could do was wait. Fortunately they wrote back on Saturday morning with some explanation and tips on how to clean my websites up. Here is what I learned from that night and from the attack of some script kiddies. And on a side note, I really dislike these idiots who browse the Internet and get on everybody else's nerves!

1. Update! Yes, it's my fault. I had made a quick sample installation of WordPress for a potential customer. The customer did not want it, and I forgot about it. The current WordPress version is 3.4.1 and my server had 3.1.4 installed. I have heard the WordPress developers are quick with security fixes, but if you don't update your installation it's your fault.

2. Delete what you don't need. Now. As mentioned, I didn't need the WordPress instance but was too lazy to delete it right away, and later I forgot about it. I will not make this mistake again. If I don't need it, I will delete it instantly. In my defence, I have a pretty bad Internet connection and uploading takes me ages. This is why I became lazy. But of course I could have moved it into an invisible folder. In addition, these websites are not my main business. I had bought a standard hosting package and thought I could trust that nobody would find my old files. Of course this was idiotic to think; I know it now and I knew it then.

3. Check what happens. When I got the e-mail, the script kiddies had already been at it for a while. I was unaware they were doing weird stuff. If I had known, I could have avoided the outage: I could have disabled all my websites, looked for the root cause and fixed the system before my provider took me offline for 12 hours. Therefore I decided to check more regularly what's going on.
The following script helps me:

find -newermt yesterday -ls | mail -s 'Changed Files Report' mail@example.com

It runs as a cron job and mails me the files which changed yesterday. This way I can double-check the changes and have a higher chance of acting quickly (and hopefully quicker than my provider).

4. Go static. A while ago I played with Jekyll. It's a nice Ruby tool which lets you generate static HTML pages, similar to Maven's site plugin. It is great because it supports templates, Markdown and much more that lets you use "dynamic power" to generate static pages. The projects I have started with it are not ready yet, but the Dartlang.org homepage is itself built with Jekyll. You can read on Seth Ladd's blog how it works. What I learned yesterday is that I will replace all dynamic web pages (mostly on WordPress) with static HTML pages generated by Jekyll whenever I don't urgently need the dynamic power. Let's be honest: in some cases we need PHP just as a kind of templating mechanism, and you can do templating with Jekyll. Even standard blogs can be done perfectly with it. In addition, you can commit the whole Jekyll project to Git, and the project layout is very easy to understand. In my case, I have various webpages in mind which will now turn into Jekyll pages. And yes, I will take the performance bonus, as well as the fact that HTML pages do not open security holes to script kiddies so easily. UPDATE: My colleague Torsten Curdt recommended awestruct to me for static site generation. Looks promising!

5. Read exploit sites. The idiots who thought it would be a good idea to break into my webspace and put up links to their trivial websites copied a PHP script to my web server which gave them a lot of information on my environment, like writable folders and such. The funny thing is, the script was GPLed and they stayed conformant to the licensing conditions. In the header was the original source of the script, which is exploit-db dot com.
On this page tons of exploits are collected. Script kiddies can download them from there and attack you. The website says its intention is to give people like us the chance to protect our work against hackers. I am not sure how many of us read such pages compared to the script kiddies. But well, from now on I will look at that site from time to time and check if the software I use is vulnerable to a specific exploit which has not been fixed yet. Reference: 5 things I learned from a Hacker attack from our JCG partner Christian Grobmeier at the PHP und Java Entwickler blog.

Spring Security: Prevent brute force attack

Spring Security can do a lot of stuff for you: account blocking, password salting. But what about a brute force blocker? That you have to do yourself. Fortunately Spring is quite a flexible framework, so it is not a big deal to configure. Let me show you a little guide on how to do this for a Grails application. First of all you have to enable the springSecurityEventListener in your Config.groovy:

grails.plugins.springsecurity.useSecurityEventListener = true

Then implement the listeners. In /src/bruteforce create these classes:

/** Registers all failed attempts to login. Main purpose is to count attempts for a particular account and block the user. */
class AuthenticationFailureListener implements ApplicationListener<AuthenticationFailureBadCredentialsEvent> {

    LoginAttemptCacheService loginAttemptCacheService

    @Override
    void onApplicationEvent(AuthenticationFailureBadCredentialsEvent e) {
        loginAttemptCacheService.failLogin(e.authentication.name)
    }
}

Next we have to create a listener for successful logins, in the same package:

/** Listener for successful logins. Used for resetting the number of unsuccessful logins for a specific account. */
class AuthenticationSuccessEventListener implements ApplicationListener<AuthenticationSuccessEvent> {

    LoginAttemptCacheService loginAttemptCacheService

    @Override
    void onApplicationEvent(AuthenticationSuccessEvent e) {
        loginAttemptCacheService.loginSuccess(e.authentication.name)
    }
}

We did not put them in our grails-app folder, so we need to register these classes as Spring beans. Add the following lines to grails-app/conf/spring/resources.groovy:

beans = {
    authenticationFailureListener(AuthenticationFailureListener) {
        loginAttemptCacheService = ref('loginAttemptCacheService')
    }
    authenticationSuccessEventListener(AuthenticationSuccessEventListener) {
        loginAttemptCacheService = ref('loginAttemptCacheService')
    }
}

You probably noticed the usage of LoginAttemptCacheService. Let's implement it.
This would be a typical Grails service:

package com.picsel.officeanywhere

import com.google.common.cache.CacheBuilder
import com.google.common.cache.CacheLoader
import com.google.common.cache.LoadingCache

import java.util.concurrent.TimeUnit
import javax.annotation.PostConstruct

class LoginAttemptCacheService {

    private LoadingCache<String, Integer> attempts
    private int allowedNumberOfAttempts
    def grailsApplication

    @PostConstruct
    void init() {
        allowedNumberOfAttempts = grailsApplication.config.brutforce.loginAttempts.allowedNumberOfAttempts
        int time = grailsApplication.config.brutforce.loginAttempts.time
        log.info "account block configured for $time minutes"
        attempts = CacheBuilder.newBuilder()
                .expireAfterWrite(time, TimeUnit.MINUTES)
                .build({ 0 } as CacheLoader)
    }

    /**
     * Triggers on each unsuccessful login attempt and increases the number of attempts in the local accumulator
     * @param login - username which is trying to login
     */
    def failLogin(String login) {
        def numberOfAttempts = attempts.get(login)
        log.debug "fail login $login, previous number of attempts $numberOfAttempts"
        numberOfAttempts++
        if (numberOfAttempts > allowedNumberOfAttempts) {
            blockUser(login)
            attempts.invalidate(login)
        } else {
            attempts.put(login, numberOfAttempts)
        }
    }

    /**
     * Triggers on each successful login attempt and resets the number of attempts in the local accumulator
     * @param login - username which is logging in
     */
    def loginSuccess(String login) {
        log.debug "successful login for $login"
        attempts.invalidate(login)
    }

    /**
     * Disables the user account so it is no longer able to login
     * @param login - username that has to be disabled
     */
    private void blockUser(String login) {
        log.debug "blocking user: $login"
        def user = User.findByUsername(login)
        if (user) {
            user.accountLocked = true
            user.save(flush: true)
        }
    }
}

We will be using CacheBuilder from the Google Guava library. (Note: the log messages use double-quoted GStrings so that $login and $time are actually interpolated; single-quoted strings in Groovy are not interpolated.)
So add the following line to BuildConfig.groovy:

dependencies {
    runtime 'com.google.guava:guava:11.0.1'
}

And as the last step, add the service configuration to Config.groovy:

brutforce {
    loginAttempts {
        time = 5
        allowedNumberOfAttempts = 3
    }
}

That's it, you are ready to run your application. For a typical Java project almost everything will be the same: the same listeners and the same services. More about Spring Security events and about caching with Google Guava can be found in their respective documentation. Grails users can simply use this plugin: https://github.com/grygoriy/bruteforcedefender Happy coding and don't forget to share! Reference: Prevent brute force attack with Spring Security from our JCG partner Grygoriy Mykhalyuno at the Grygoriy Mykhalyuno blog.
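For readers not on Grails, the same counting logic can be sketched in plain Java without Guava. This is a hypothetical, simplified illustration of the failLogin/loginSuccess flow described above — the class and method names are mine, not the article's, and a real implementation would lock the user account in the database rather than in a set:

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch (names are hypothetical): count failed logins per
// account within a time window and block once the threshold is exceeded.
class LoginAttemptTracker {

    private static final class Attempt {
        int count;
        long firstFailureMillis;
    }

    private final int allowedAttempts;
    private final long windowMillis;
    private final Map<String, Attempt> attempts = new ConcurrentHashMap<>();
    private final Set<String> blocked = ConcurrentHashMap.newKeySet();

    LoginAttemptTracker(int allowedAttempts, long windowMillis) {
        this.allowedAttempts = allowedAttempts;
        this.windowMillis = windowMillis;
    }

    // Called from the failure listener: count the failure, and block the
    // account once the threshold is crossed inside the time window.
    synchronized void failLogin(String login) {
        long now = System.currentTimeMillis();
        Attempt a = attempts.get(login);
        if (a == null || now - a.firstFailureMillis > windowMillis) {
            a = new Attempt();              // window expired: start counting afresh
            a.firstFailureMillis = now;
            attempts.put(login, a);
        }
        a.count++;
        if (a.count > allowedAttempts) {
            blocked.add(login);             // the article locks the User row here
            attempts.remove(login);
        }
    }

    // Called from the success listener: a good login resets the counter.
    synchronized void loginSuccess(String login) {
        attempts.remove(login);
    }

    boolean isBlocked(String login) {
        return blocked.contains(login);
    }

    public static void main(String[] args) {
        LoginAttemptTracker tracker = new LoginAttemptTracker(3, 5 * 60 * 1000L);
        for (int i = 0; i < 4; i++) {
            tracker.failLogin("alice");
        }
        System.out.println("alice blocked: " + tracker.isBlocked("alice")); // prints "alice blocked: true"
    }
}
```

With allowedAttempts = 3, the fourth consecutive failure triggers the block, matching the numberOfAttempts > allowedNumberOfAttempts check in the Grails service.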

Android Dialog – Android Custom Dialog

In this tutorial I am going to describe how to create an Android custom dialog.

Create Android Project AndroidDialog: File -> New -> Android Project

Android Layout: activity_android_dialog.xml

<RelativeLayout xmlns:android='http://schemas.android.com/apk/res/android'
    xmlns:tools='http://schemas.android.com/tools'
    android:layout_width='match_parent'
    android:layout_height='match_parent' >

    <Button
        android:id='@+id/btn_launch'
        android:layout_width='wrap_content'
        android:layout_height='wrap_content'
        android:layout_alignParentTop='true'
        android:layout_centerHorizontal='true'
        android:layout_marginTop='115dp'
        android:text='Launch Dialog' />

    <TextView
        android:id='@+id/textView1'
        android:layout_width='wrap_content'
        android:layout_height='wrap_content'
        android:layout_alignParentLeft='true'
        android:layout_alignParentTop='true'
        android:layout_marginLeft='28dp'
        android:layout_marginTop='54dp'
        android:text='@string/app_desc'
        android:textAppearance='?android:attr/textAppearanceLarge' />

</RelativeLayout>

Dialog Layout: dialog_layout.xml

<?xml version='1.0' encoding='utf-8'?>
<LinearLayout xmlns:android='http://schemas.android.com/apk/res/android'
    android:layout_width='fill_parent'
    android:layout_height='fill_parent'
    android:orientation='vertical'
    android:padding='10sp' >

    <EditText
        android:id='@+id/txt_name'
        android:layout_width='fill_parent'
        android:layout_height='wrap_content'
        android:hint='@string/dialog_uname'
        android:singleLine='true' >
        <requestFocus />
    </EditText>

    <EditText
        android:id='@+id/password'
        android:layout_width='match_parent'
        android:layout_height='wrap_content'
        android:ems='10'
        android:inputType='textPassword' >
    </EditText>

    <RelativeLayout
        android:layout_width='match_parent'
        android:layout_height='wrap_content' >

        <Button
            android:id='@+id/btn_login'
            android:layout_width='120dp'
            android:layout_height='wrap_content'
            android:text='@string/dialog_submit' />

        <Button
            android:id='@+id/btn_cancel'
            android:layout_width='120dp'
            android:layout_height='wrap_content'
            android:layout_alignParentTop='true'
            android:layout_marginLeft='10dp'
            android:layout_toRightOf='@+id/btn_login'
            android:text='@string/dialog_cancel' />

    </RelativeLayout>

</LinearLayout>

AndroidDialog Activity: Override both the onCreateDialog(int id) and onPrepareDialog(int id, Dialog dialog) methods and add the following code, which will create your custom Android dialog.

import android.os.Bundle;
import android.view.LayoutInflater;
import android.view.View;
import android.widget.Button;
import android.widget.EditText;
import android.widget.Toast;
import android.app.Activity;
import android.app.AlertDialog;
import android.app.Dialog;

public class AndroidDialog extends Activity {

    final private static int DIALOG_LOGIN = 1;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_android_dialog);

        Button launch_button = (Button) findViewById(R.id.btn_launch);
        launch_button.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                showDialog(DIALOG_LOGIN);
            }
        });
    }

    @Override
    protected Dialog onCreateDialog(int id) {
        AlertDialog dialogDetails = null;
        switch (id) {
        case DIALOG_LOGIN:
            LayoutInflater inflater = LayoutInflater.from(this);
            View dialogview = inflater.inflate(R.layout.dialog_layout, null);
            AlertDialog.Builder dialogbuilder = new AlertDialog.Builder(this);
            dialogbuilder.setTitle("Login");
            dialogbuilder.setView(dialogview);
            dialogDetails = dialogbuilder.create();
            break;
        }
        return dialogDetails;
    }

    @Override
    protected void onPrepareDialog(int id, Dialog dialog) {
        switch (id) {
        case DIALOG_LOGIN:
            final AlertDialog alertDialog = (AlertDialog) dialog;
            Button loginbutton = (Button) alertDialog.findViewById(R.id.btn_login);
            Button cancelbutton = (Button) alertDialog.findViewById(R.id.btn_cancel);
            final EditText userName = (EditText) alertDialog.findViewById(R.id.txt_name);
            final EditText password = (EditText) alertDialog.findViewById(R.id.password);

            loginbutton.setOnClickListener(new View.OnClickListener() {
                @Override
                public void onClick(View v) {
                    alertDialog.dismiss();
                    Toast.makeText(AndroidDialog.this,
                            "User Name : " + userName.getText().toString()
                                    + " Password : " + password.getText().toString(),
                            Toast.LENGTH_LONG).show();
                }
            });

            cancelbutton.setOnClickListener(new View.OnClickListener() {
                @Override
                public void onClick(View v) {
                    alertDialog.dismiss();
                }
            });
            break;
        }
    }
}

Happy coding and don't forget to share! Reference: Android Dialog – Android Custom Dialog from our JCG partner Chathura Wijesinghe at the Java Sri Lankan Support blog.

Coding and Cynicism

We had no reason to be anxious about this component. It had been running for about a year. It handled around 1000 messages per day and emailed out an automated report twice a day. The solution was based on robust integration tools and technologies, i.e. TIBCO EMS for delivering messages and Spring Integration for reading and handling them. Everything was predictable, boring and nice. And one morning everything changed. The component froze with a null pointer exception. Nothing more, nothing less. There were no logs. There never are when you need them. Nothing had changed in the code or in the mode of delivery. There were no obvious miscreants. Business had found out about the breakage – as one of the automated reports had failed – and was demanding an estimated time of fix. It was a picture perfect start for the firefighters of the product team – and they poured out their first cup of coffee. So, the team swung into action. Half a day later – after multiple calls with business (not very pleasant, any one of them, mind you) – it was suggested that it might – just might be – that a couple of messages among the 1000 or so did not have a required field – which, by the way, was guaranteed to be there by the business processes. So we took these two messages off and switched the component back on. Lo and behold, it crashed again. This time because there were many more messages than it could handle (remember, messages kept coming in while the team was troubleshooting the problem). I will not bore you with the multitude of calls that followed, and how a fix was arrived at and delivered. Suffice it to say that too many man hours were spent on this for my comfort. And this led me to write down my thoughts on it. I am all for communications, meetings, workshops, and the creation of all sorts of requirements and design documents. I see the value in all of them. I really do – although it has been alleged many a time that I don't.
But, at the end of the day, there is no substitute for a minimal amount of street smartness. A healthy amount of cynicism goes a long way in designing a resilient system. In this particular case, a couple of things had gone wrong. 1. We trusted the data quality of the feed coming in from a different system. And we should not have. No, this is not going to be written down in any book discussing integration patterns. It is just something that a seasoned developer would not do, but a new one – although as sharp as a tack – would slip up on. Folks had trusted the requirement document that guaranteed that certain fields would be populated. But the fact is, when the fields were not populated, it was not OK for our component to go down. A seasoned developer would have consulted the requirements document and developed to it – but would not have trusted it. He would have been cynical. 2. We trusted the data volume of the feed. And we should not have. Again, this was something written down in the document, and the code was hence technically correct. But if only the developer had said, 'Hang on, if you are saying 1000 is the most you expect, fine, I will pull only 1000 at one go. If there are more, I will pull a second batch. And more batches if I need to. But never more than 1000.', we would have been fine. We should not have pulled all the data from the message queue, assuming it would be fewer than 1000 messages just because it was written down in the document. A seasoned developer would have been cynical of the document. The component is fixed and everything is back in business. It is no biggie. This was not the first time something like this has happened, and I am willing to wager that it will not be the last. The point that I am trying to make is that the business of software production is not – and perhaps never will be – like the production line of a hardware commodity.
It is most unlikely to enjoy the stability, predictability, and repeatability of the production line of, say, a car. So the proliferation of processes, documents and meetings is not going to be as successful in this business. Processes are fine. Documents are fine. Productivity measuring tools and code quality metrics are great. Workshops are great. Peer reviews are a must. But they are quite unlikely to be a substitute for a person who loves coding, takes pride in it, and goes that extra mile to ensure that his code does not fail. These people will always be in short supply and in great demand. As an industry, sooner or later we will have to find a way to create, foster and retain these individuals. That's it for today. Don't forget to share! Reference: Coding and Cynicism from our JCG partner Partho at the Tech for Enterprise blog.
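The two defensive habits argued for above can be sketched in a few lines of plain Java. This is a hypothetical illustration, not the component's actual code: the names are mine, and an empty string stands in for "a message missing the guaranteed field". The idea is simply to cap every pull at the documented maximum and to skip malformed messages instead of letting a NullPointerException take the component down.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Hypothetical sketch of the cynical consumer described in the article:
// never trust the documented volume, never trust the documented data quality.
class DefensiveConsumer {

    static final int MAX_BATCH = 1000; // the documented maximum, treated as a hard cap

    // Drain at most MAX_BATCH messages per call. A message missing its
    // "guaranteed" field (here: an empty string) is skipped, not fatal.
    static List<String> pullBatch(Queue<String> queue) {
        List<String> batch = new ArrayList<>();
        while (batch.size() < MAX_BATCH) {
            String message = queue.poll();
            if (message == null) {
                break;                 // queue drained
            }
            if (message.isEmpty()) {   // stands in for a missing required field
                continue;              // be cynical: skip it, don't crash
            }
            batch.add(message);
        }
        return batch;
    }

    public static void main(String[] args) {
        Queue<String> feed = new ArrayDeque<>();
        for (int i = 0; i < 2500; i++) {
            feed.add("msg" + i);       // more than the "guaranteed" 1000
        }
        System.out.println(pullBatch(feed).size()); // prints 1000, not 2500
    }
}
```

If more than 1000 messages are waiting, repeated calls drain them in bounded batches, which is exactly the 'I will pull a second batch' attitude the article asks for.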

Three mobile projects from the Apache Software Foundation

When somebody says "Apache Software Foundation" in a discussion on open source, some people try to provoke with a big, loud yawn. They think of C++ coding dinosaurs hacking on Apache HTTPD, the world's most used web server. Indeed, the trend tells us to use NGINX or to work with Node.js on the server side instead. Besides the "grandpa software" arguments, they complain about SVN. I have a surprise for these people: the Apache Software Foundation supports Git. We have Git mirrors on GitHub and some projects have already switched to native Git on ASF machines. No reason to complain anymore. We do not code only in C++ or Java. A lot of "JavaScript projects" have arrived, or are still arriving, all targeting mobile development. In other words: the Apache Software Foundation still runs innovative projects. And no, the people behind Apache HTTPD are NOT dinosaurs. They are pretty cool and they still make our daily web experience. Let us be glad and thankful for our HTTPD, besides all trends. The grumblers usually mention something about "social coding" in that discussion and forget the ASF motto: "community over code". But in this post I don't want to elaborate on the difference between GitHub and the ASF. I would like to show you three cool projects in the mobile and JavaScript area. These projects prove that the Apache Software Foundation still does innovative software. And by the Apache Software Foundation I mean all the nice individuals who spend endless hours on it, often in their prime time (like me *sigh*). Apache Cordova: Apache Cordova started as PhoneGap. It was open source for a good while and gained great success. Then PhoneGap was bought by Adobe. While the name "PhoneGap" still refers to the company, the source code, documentation and all related assets have been donated to the Apache Software Foundation. The donation got the name Apache Cordova. Now PhoneGap is offering great services around Apache Cordova. What does it do?
In easy terms: with Cordova you can create apps for iOS, Android, Windows Phone, BlackBerry and so on. You write your app in HTML and JavaScript. Cordova is a kind of container which runs your code inside a webview. If you are good with design, you can create native-looking apps with just HTML and JavaScript. And – of course – with only a little extra work it is pretty easy to port your code from one device to another. The code is hosted locally on your device's hard disk. No remote host is necessary – you can stay local if you want. Apache Cordova is an API which lets you access even native functions, like the camera or the accelerometer – all with JavaScript. It does not give you widgets to make it "good looking", like jQuery Mobile does. But well, it is pretty trivial to design with HTML (compared with some native languages). And you can use jQuery Mobile, of course (or something else). I have written Cordova apps myself and I am pretty impressed by the power of the API. I have had some pain with jQuery Mobile on the other hand, but could solve everything by using my own nice-looking widgets. Apache DeviceMap: Apache DeviceMap is a project which creates a data repository with device information and images for all the devices you can imagine. The plan is to first collect data on the devices and then create an API to use and manage it. This project is pretty new, but it has already received some significant contributions. If you need information on the various devices, you should keep an eye on this project. Ripple, a Mobile Environment Emulator: Wait, not Apache Ripple? Not yet! This project proposal hit the Apache Incubator recently! There has been no acceptance vote so far, and thus this project is NOT YET an Apache project. But the feedback was already so good that I expect the Ripple developers to become Apache Ripple developers pretty soon. I am excited about this project myself, as it is a great addition to Apache Cordova and Apache DeviceMap.
Ripple is a browser-based mobile phone emulator! It will help you develop Apache Cordova apps in your browser. It means you write HTML/JavaScript in the editor of your choice and open the project in, for example, Chrome. Ripple makes it look as if it were running on a real device. This will speed up your development cycle a lot – especially if you have experienced the sluggish boot/redeploy times of Android devices. The iPhone Simulator is pretty nice of course, but Ripple tries to emulate some native functions which are not supported by the iPhone Simulator yet. Ripple and Apache Cordova have much in common. That Ripple joins the ASF is only a logical step for me. Glad they want to do it. What the heck is the Apache Incubator? You may have noticed that Apache Cordova and Apache DeviceMap are "incubating". And I wrote that Ripple might (!) join the Apache Incubator very soon. But what is it, this "Incubator"? Before a project joins the Apache Software Foundation, we need to make sure the IP is clear, no trademarks are hurt and people understand the so-called "Apache Way". By the latter, ASF people usually mean "community over code". Of course there is a little more behind it, but you get the idea. The ASF is a group of people who want to write software together – but also sometimes want to drink a beer together, chat or maybe phone. We are a community, and new projects need to grow into the rest of the community. Not every project feels good at the ASF, and some leave. You can think of the Incubator not only as the trial phase in which the ASF tests whether the project fits – it also works the other way round. The project needs to decide whether they want to join our community. Anyway, only when the project leaves the Incubator is it an "official" Apache project. It then gets its own subdomain à la logging.apache.org. I expect Apache Cordova to leave the Incubator pretty soon. As some people on the Ripple project are involved in Apache Cordova, I believe Ripple might only need a pretty short time in the Incubator too.
The case of Apache DeviceMap is a little different. The community there is building up from scratch, so I guess it will take a few more weeks to complete incubation. But let's see. You can help! Even Incubator projects (and especially these) are open to contributions of any kind! You are welcome to help! And as you have seen, some of our projects are "in trend" and modern. If you help, you can even tell your friends you are participating in a new and modern project. JavaScript and mobile have truly arrived at the Apache Software Foundation. It's up to you whether you join us – if you do, let me know and we'll probably have a beer together. Reference: Three mobile projects from the Apache Software Foundation from our JCG partner Christian Grobmeier at the PHP und Java Entwickler blog.

Android books giveaway for celebrating Packt’s 1000 title Roundup

Hello fellow Java Geeks! Our second giveaway of Packt Publishing's best selling books on Android has ended.

The Prize Winners

The 4 lucky winners who will receive the e-book prizes are (names as they appeared in their emails):

Joakim Lagström
Rizwan Ahamath Burhanudeen
Isreal esau
sugganbuggan

Each of the 4 winners will receive a copy of their favorite e-book on Android programming for free.

The Prizes

AndEngine for Android Game Development Cookbook: RAW – An overview of the uber-cool AndEngine game engine for Android game development. Includes step-by-step detailed instructions and information on a number of AndEngine functions, with illustrations and diagrams for added support and results. This book is currently available as a RAW (Read As we Write) book.

Android 3.0 Application Development Cookbook – Quickly develop applications that take advantage of the very latest mobile technologies, including web apps, sensors, and touch screens. Excellent for developing a full Android application.

Appcelerator Titanium Smartphone App Development Cookbook – Leverage your JavaScript skills to write mobile applications using Titanium Studio tools with the native advantage!

Android User Interface Development: Beginner's Guide – Leverage the Android platform's flexibility and power to design impactful user interfaces.

Congratulations to the winners! Happy reading/coding! The Java Code Geeks team

3 things Java developers should know

Here is an interesting article for those of you who have been following the JavaOne 2012 conference remotely. A recent interview with Java Champion Heinz Kabutz was brought to my attention, including his Java memory puzzle program, which was quite instructive from a Java memory management perspective. One particular section of the interview captured my attention: things Java developers should know and currently do not. Heinz made some really good points in his interview; this article will revisit and expand on a few of them. Heinz also shares his concerns regarding the removal of the HotSpot VM PermGen space, which is now targeted for the Java 8 release. Java concurrency principles: should you care or not? As Heinz pointed out, this is a topic that some Java developers prefer to avoid. Unless you are developing a single-threaded main program, you do have to worry about thread concurrency and all the associated problems. As a Java EE developer, your code will be running within a highly concurrent thread environment. Simple Java coding mistakes can expose your code to severe thread race conditions, stability and performance problems. A lack of key threading knowledge can also prevent you from properly fine-tuning the Java EE container thread pool layer. From my perspective, every Java developer should try to understand the basic Java concurrency principles from both a development and a troubleshooting perspective, such as JVM thread dump analysis. Raise your IDE skills to the next level: learn shortcut keys. Heinz's next recommendation is to acquire deeper knowledge of your Java IDE environment. This tip may sound obvious to some, but you would actually be surprised how many Java developers quickly "plateau" in their IDE usage and productivity. Such a "plateau" is often due to a lack of deeper exploration of your IDE's shortcut keys and capabilities. This article from DZone is a nice starting point for learning useful shortcuts if you are using the Eclipse IDE.
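To return briefly to the concurrency point above, here is a tiny self-contained illustration of the kind of race condition simple coding mistakes can cause (my own example, not from the interview): incrementing a plain int field from several threads silently loses updates, because count++ is a read-modify-write, while an AtomicInteger never does.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative race-condition demo: two counters incremented by the same
// threads, one unsynchronized and one atomic.
class CounterRace {

    static int plainCount = 0;                               // count++ here is NOT atomic
    static final AtomicInteger safeCount = new AtomicInteger();

    static void run(int threads, int incrementsPerThread) throws InterruptedException {
        plainCount = 0;                                      // reset so run() is repeatable
        safeCount.set(0);
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < incrementsPerThread; j++) {
                    plainCount++;                            // updates can be lost
                    safeCount.incrementAndGet();             // atomic: never loses one
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        run(8, 100_000);
        // safeCount is always exactly 800000; plainCount is usually less.
        System.out.println("safe: " + safeCount.get() + ", plain: " + plainCount);
    }
}
```

The unsynchronized counter typically ends up below 800000 on a multi-core machine — exactly the kind of bug that only shows up under load and that a thread dump or a careful review will catch.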
Java memory management: learn how to read GC logs. Last but not least: learn how to read GC logs. This is my favorite of all of Heinz's recommendations. As you saw in my previous tutorial, JVM GC logs contain crucial information on your Java VM memory footprint and garbage collection health. This data is especially critical when tuning your JVM or troubleshooting OutOfMemoryError: Java heap space related problems. Let's be honest here: it will take time before you acquire even half the knowledge of Java Champions such as Kirk Pepperdine, but starting to analyze and understand your application's GC logs and the Java memory management fundamentals is a perfect place to start. Don't forget to share! Reference: 3 things Java developers should know from our JCG partner Pierre-Hugues Charbonneau at the Java EE Support Patterns & Java Tutorial blog.
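As a small worked example of what "reading GC logs" means in practice, the sketch below pulls the before/after heap occupancy out of a typical -XX:+PrintGCDetails young-collection line. Both the sample line and the regex are my own illustration — real log formats vary by collector and JVM version — but the before->after(capacity) pattern shown is the one you will keep meeting.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch: extract heap figures from a HotSpot GC log line such as
//   [GC [PSYoungGen: 262208K->43632K(305856K)] 262208K->45872K(1004928K), 0.0543980 secs]
// The first group of figures is the young generation: occupancy before the
// collection -> occupancy after the collection (region capacity).
class GcLineParser {

    private static final Pattern HEAP_FIGURES =
            Pattern.compile("(\\d+)K->(\\d+)K\\((\\d+)K\\)");

    // Kilobytes reclaimed by the first region reported on the line.
    static long reclaimedKb(String logLine) {
        Matcher m = HEAP_FIGURES.matcher(logLine);
        if (!m.find()) {
            throw new IllegalArgumentException("no heap figures found: " + logLine);
        }
        long before = Long.parseLong(m.group(1));
        long after = Long.parseLong(m.group(2));
        return before - after;
    }

    public static void main(String[] args) {
        String sample = "[GC [PSYoungGen: 262208K->43632K(305856K)] "
                + "262208K->45872K(1004928K), 0.0543980 secs]";
        System.out.println(reclaimedKb(sample) + "K reclaimed"); // prints "218576K reclaimed"
    }
}
```

Being able to read these figures at a glance — how much each collection reclaims, and whether occupancy after each full GC keeps creeping up — is exactly the skill the recommendation above is about.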

JavaOne 2012: Looking into the JVM Crystal Ball

I returned to Plaza A/B in the Hilton to attend the fourth session on Monday, but first went up to the top floor of the Hilton to pick up lunch. I’m reminded every year on the first day of JavaOne how surprisingly frustrating the first day’s lunch-acquiring process is for everyone involved. I know I found the experience a little confusing my first year at JavaOne, as I wasn’t sure where the lunches were available and I wasn’t aware of the lunch ticket included with my badge (that’s what I get for my ‘not reading the instructions first’ mentality). There was obvious confusion today as I heard people asking, ‘What ticket?’ when asked to produce their lunch tickets. It didn’t help that those trying to organize the hungry horde advised us to stay away from the top of the escalator, but didn’t know exactly where we should go instead. Mikael Vidstedt and Staffan Friberg presented ‘Looking into the JVM Crystal Ball.’ They stated that the two primary areas of coverage for this presentation are technical VM details and the VM roadmap. An early slide, ‘VM Convergence,’ talked about the convergence of JRockit and HotSpot as well as the CDC (Java ME) and HotSpot Embedded convergence. A slide on ‘Serviceability: Introspection and Analysis’ talked about the desire for ‘unified logging’ (JEP 158) and ‘native memory tracking.’ Another slide with the same title talked about ‘Java Flight Recorder and Java Mission Control,’ a licensed feature in JRockit that will be available in HotSpot (still as a licensed feature). A ‘Just Say Java’ bullet refers to the intent to ‘remove artificial memory limits and required tuning’ and to ‘reduce the complexity of tuning the garbage collector.’ The end goal is a ‘single scalable VM for both client and server’ using a ‘multi-tiered optimizing compiler.’ Another slide with the same ‘Enterprise: Server Java’ title talked about ‘instant performance,’ a ‘low latency garbage collector,’ and big data (requiring big heaps).
‘Cloud and Virtualization: Multi Tenancy’ was the title of a slide talking about ‘dynamic scaling and on-demand availability,’ maintaining ‘full isolation’ and maximizing ‘resource utilization.’ The ‘Developer Experience: Continued Improvement’ slide referenced the value of multiple languages supported on the virtual machine. The slide and speaker also referenced improving the development experience with ‘dynamic development and debugging’ through ‘close cooperation with IDE developers.’ A JEP is a Java Enhancement Proposal, and JEPs document via a community process what is to be added to the virtual machine. It was stated in this session that the JVM can now be scaled from the small Raspberry Pi to the huge Exalogic T3-1B. The point was made that many of the things that benefit one of these extremes also benefit the opposite extreme and everything in between. The ‘Footprint: Every byte counts!’ slide covered some examples of features of the embedded JVM that the HotSpot VM developers are working to add to the HotSpot VM. These include ‘compact JVM internal structures’ (JEP 147) and ‘dynamic sizing’ of the ‘interned string table,’ ‘system dictionary,’ and ‘caches.’ Both the enterprise and embedded extremes benefit from these changes. In conjunction with the bullet ‘Java Heap is ‘Easy’,’ there was mention of HPROF and Java Mission Control. Native Memory Tracking is ‘really useful for hunting footprints in general.’ JSR 292/JEP 160 (invokedynamic) had some issues (NoClassDefFoundError) in its initial release, but they believe these issues have been addressed. As was stated in The Road to Lambda earlier today, Project Lambda is using invokedynamic. The point was made that this is evidence that invokedynamic is not just for ‘alternate JVM languages,’ but is useful for the Java language itself. Project Nashorn will also benefit from invokedynamic. Three actions were outlined that optimize for multiple languages.
These are ‘inlining’ (all of which is done up front today, but they’d like to enable the compiler to inline incrementally), ‘escape analysis improvements’ (analysis of ways to improve code), and ‘boxing elimination’ (avoiding extraneous object creation). JEP 165 deals with ‘fine-grained compiler control’ and JEP 143 exists to improve contended locking.

There was discussion of the slide ‘G1 – Garbage First: The Future of Garbage Collection.’ It was explained that this changes the approach from ‘one ginormous Java heap’ to a heap treated as ‘many small parts.’ The -XX:+UseG1GC option was mentioned as a way to try out this new garbage collector as of JDK 7 Update 4. JEP 144 is designed to reduce garbage collection latency for large heaps. ‘PermGen is no more!’ is a bullet on the slide on the new JVM memory layout and is a result of JEP 122. This change is supposed to be ‘transparent to the user,’ but they would like Java developers to try it out to make sure the change is truly invisible. JEP 159 deals with ‘Enhanced Class Redefinition.’ They would like to relax today’s ‘redefinition using java.lang.instrument, JVMTI, etc.’ to cover more than just redefining code bodies.

Another direction for the JVM developers is toward heterogeneous computing: ‘GPUs are very powerful and more available than in the past.’ Project Sumatra attempts to support GPUs and the Arrays 2.0 concept. The point was made that ‘the Cloud makes the deployment environment more fluid,’ but that ‘the JVM is in a unique position to help.’ Their goal is to ensure that the JVM can pick up cloud-related changes and maintain isolation. It was pointed out that ‘a nice outcome of the removal of the Permanent Generation’ is that ‘Class Data Sharing’ can now work with all garbage collectors rather than only with the serial collector. JEP 145 aims to reduce both the start-up time and the warm-up time of a Java application.
It was emphasized several times in this presentation that developers can help test and drive fixes and improvements by downloading the latest versions of the VM and language compiler, trying them out, and providing feedback. The JDK 8 early access builds are available for download, and the versions without the permanent generation should be available soon. Don’t forget to share! Reference: JavaOne 2012: Looking into the JVM Crystal Ball from our JCG partner Dustin Marx at the Inspired by Actual Events blog....

5 ways how a recruiter can **** me off

I get called by recruiters on a regular basis. It just happened again right now, which motivated me to collect my thoughts on IT recruiting, something that has been on my to-do list for a while. To be honest, like many of my colleagues I’m not very fond of the “scene”. At times it felt more like trading camels or cars than speaking about a new job opportunity, the camel being me. Am I a coding machine? Even when they laugh or make jokes, I can often feel that they are not really interested in making the perfect match for their customers, but just want to convince me to apply for the job. It seems it does not matter whether I actually match the technical requirements. This is my list of the biggest mistakes one can make when trying to hire me. It is not complete, though.

1. Phone calls are the 2nd step and not the 1st

Some recruiters seem to think I am just waiting for their opportunity and call me instantly. Sometimes I have no clue where they got my phone number. Sometimes it feels as if all recruiters are somehow connected and share my private data. Maybe some kind of candidate poker, similar to programmers’ scrum poker? I don’t know. My mobile only ever rings when I’m concentrating, and since I have many customers, it is hard to tell a customer from a recruiter. I find it rather annoying to interrupt my work just to tell a complete stranger that I am not available for a new project. Many projects I do happen because somebody recommended me. So far this has worked out so well that I have never accepted a recruiter’s project, and the likelihood that I ever join one is pretty low. I know many others who are in the same situation. An e-mail could clarify that within 10 seconds; a phone call takes much more time. Currently I get five unwanted phone calls a week. I can identify a few recruiters by their numbers and leave them to my mailbox.
Unfortunately they are so convinced I would love their offer that they ring multiple times a day, until I sacrifice lunch or another break to speak with them, or even my evening hours: one recruiter once called me at 10pm. Meanwhile I refuse all projects which are offered to me by an unwanted phone call. I expect serious recruiters to write me an e-mail. I read all of my e-mails and respond in time when I am interested and when I feel the recruiter is writing to me directly rather than using a mailing list.

2. Read my CV

Some recruiters got in touch with me a good while ago. They have my e-mail address and they have my CV. I expect a good recruiter to read it and decide whether a project matches BEFORE I get any e-mails. It is not my job to filter out the matching projects; it is the recruiter’s job to be a filter. Actually, some recruiters send me every project they have, ignoring my CV completely. In one case I got three e-mails a day asking me to join a project in the .NET world. It is pretty clear from my CV that I have no clue about Windows development. I do not own a Windows machine and I do not intend to buy one. And no, I am not an expert on C++ either, and I have no MQ Series administration skills. Thankfully, there are filters in GMail. I have not subscribed to a mailing list; I just sent my CV to recruiters to find a matching job. It is like using a dating site to meet a new partner and having the system propose simply everybody in the database. I can accept it if somebody asks whether I would join a Java Swing project, because I have a lot of Java experience. Some jobs make sense even when I have not worked in the field before. But asking me to hack MQ Series is so far from my CV that I categorize that e-mail as spam. I never cooperate with spammers. A recruiter who has read my CV should also know at which experience level I operate. I have coded PHP since 1998 and Java since 2001. I am not interested in junior roles as a “PHP Programmer”.
My CV also references the Open Source projects I participate in, which should prove what I wrote.

3. Know what you are doing

I don’t expect recruiters to be programmers. But they should know a little bit about today’s technologies. Recently a recruiter rang me, unwanted. I instantly told him that I’m not interested in a new job and that I dislike calls before e-mails. He was a bit confused by my reaction and wanted to improve the situation, so he started to speak about the Microsoft cloud and mentioned that he did not understand what it actually does. Azure was an important part of the project he was recruiting for. I’m sorry, but I’m not giving free training lessons to recruiters. I expect people to learn for themselves; we all need to do that. In another case somebody said: if you can code in Java, you can code in .NET too. He was of the opinion that they are somehow the same. Well, somehow, yes. But actually not really. It is dead simple to find out that Java is one whole ecosystem and .NET is another one. Either this guy was ignorant or he never read Wikipedia. The same guy later wanted me to join a Java Swing project. He said I have the Spring framework on my CV and it sounds pretty similar to Java Swing, so it should be no problem to join the project. I do not remember what I answered, but I had to laugh. When I attend a meeting with a new customer together with the recruiter, I expect the recruiter to have at least some basic knowledge about what we are speaking of. And if not, he had better remain silent while we speak. I have had several weird situations caused by recruiters who suddenly discussed the service bus of an application with the customer’s architect. In the recruiters’ defense, this last story happened with a sales person of the company I was employed at, not a real recruiter. But hey, the outcome is the same: shut up if you have nothing to say.

4. Don’t ask me to fool my customer

When I have a customer I work hard for him.
He can expect me to give my 110%. My customers are usually very thankful for that and they recommend me to others. I have a good relationship with all my customers. One day a recruiter called me and asked about my availability. I told him I already had a project and that I was not available until the end of the year. What followed surprised me very much. He suggested telling my customer that I was sick; while on sick leave I could work on the recruiter’s project, and once I had been sick for the fourth week my customer would surely look for a replacement and I could join officially. I know it’s a recruiting war, but this guy had lost it. I told him to delete my data and to never call me again.

5. Respect my requirements

When I was younger I did not care where I worked. But now I do. Furthermore, I know exactly which projects I like and which I don’t. When I have told a recruiting company that I only work in a specific area and for a specific amount of money, I don’t want offers which do not fulfill these requirements, and I don’t want to discuss them unless stated otherwise. My requirements for a job should be taken as a filter when looking for the right candidates. But a while ago a recruiter wanted to explain to me the benefits of a city that I really dislike. He could not be stopped. I told him I had been there and simply didn’t like the city. I told him it was too far away from my family. I told him I had some offers with better locations. But he kept arguing and explained how good everything was there. He even explained how my family could move there too. It was such a big waste of time that I was finally forced to interrupt him.

Positions I really consider

There were some good recruiters crossing my path. They did not want to “sell” me a project. They looked at my web page or at my public social profiles and learned about my skills. They wrote me an e-mail explaining why I would make a good match for the new project.
No “act quick” or “impressive chance” and, thank heavens, not another “exciting opportunity” (it seems even maintenance projects are exciting for some). If I was lacking a skill, we could easily clarify whether I was interested in learning it or not. They had enough knowledge of the environment to know whether I could do the job or not. They wrote me a personalized e-mail and did not use a template. When I called them I had the feeling that they were actually considering whether I would make a good match. They didn’t want to disappoint their customers. They didn’t want to waste my time. They worked somewhat like good software developers: they learned the background they needed, evaluated potential candidates, and estimated whether it would be a good match. It is just as easy as that. These recruiters do not need mass e-mails. But unfortunately they are rare.

Conclusion

Don’t treat me like a camel. Nothing else. Reference: 5 ways how a recruiter can **** me off from our JCG partner Christian Grobmeier at the PHP und Java Entwickler blog....

Getting started with Scala and Scalatra – Part IV

Welcome to the last part of this series of tutorials on Scala and Scalatra. In this part we’ll look at how you can use Akka to handle your requests with an asynchronous dispatcher, how to use subcut for dependency injection, and finally how you can run the complete API in the cloud. In this example I’ve used OpenShift to run the API on JBoss Application Server 7.1. Here is what we covered in the previous tutorials:

Tutorial I: Set up Scala and Scalatra for use within Eclipse and create your first application.
Tutorial II: Start Scalatra embedded, create a REST API that uses JSON, and test it with specs2.
Tutorial III: Add persistence with ScalaQuery and add an HMAC-based security layer.

The examples in this tutorial assume you’ve completed the previous three tutorials. We won’t show all the details, but focus on adding new functionality to the existing application (from part III). To be precise, in this example we’ll show you the following steps:

First, we’ll introduce subcut to the application for dependency injection.
Next, we’ll make our requests asynchronous by using Akka’s futures.
Finally, we’ll enable CORS, package the application, and deploy it to OpenShift.

And then we’ll have an API we can call on the OpenShift cloud. Let’s start with subcut.

Adding dependency injection to the application

In Java there are many dependency injection frameworks. Most people have heard of Spring and Guice, and dependency injection even has its own JSR and specifications. In Scala, however, this isn’t the case. There has been a lot of discussion about whether Scala applications need a DI framework at all, since these concepts can also be expressed using standard Scala language constructs. When you start investigating dependency injection for Scala you’ll quickly run into the cake pattern (see here for a very extensive explanation).
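Since the cake pattern comes up so often in this discussion, a minimal sketch may help. Note that the component and repository names below are invented for illustration and do not come from the tutorial’s code base:

```scala
// Minimal cake-pattern sketch: a component trait declares a dependency
// slot plus its interface, a concrete component fills the slot, services
// declare what they need via a self-type, and wiring happens once at the end.
trait ItemRepoComponent {
  def itemRepo: ItemRepo
  trait ItemRepo { def get(id: Long): Option[String] }
}

trait InMemoryItemRepoComponent extends ItemRepoComponent {
  val itemRepo = new ItemRepo {
    def get(id: Long) = if (id == 1) Some("first item") else None
  }
}

trait ItemService { this: ItemRepoComponent =>
  def describe(id: Long): String = itemRepo.get(id).getOrElse("unknown")
}

// the "cake" is baked here: the service plus one concrete component per slot
object App extends ItemService with InMemoryItemRepoComponent

println(App.describe(1)) // prints: first item
println(App.describe(2)) // prints: unknown
```

Every dependency needs its own component trait and the outermost object must mix in a concrete component for each one, which is exactly the glue-code overhead referred to below.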
I won’t go into the details of why you should or should not use the cake pattern, but for me personally it felt like it introduced too much cruft and glue code, and I wanted something simpler. For this article I’m going to use subcut. Subcut is a really small and simple-to-use framework which makes DI in Scala very easy and unobtrusive. Nothing works like examples, so what do you need to do to have subcut manage your dependencies? First of all, we of course need to cleanly separate our implementation from our interface/trait. In part III we created a set of repositories which we used directly from the Scalatra routes by creating them as class variables:

// repo stores our items
val itemRepo = new ItemRepository
val bidRepo = new BidRepository

The problem is that this binds our routes directly to the implementation, which is something we don’t want. So first let’s expand the repositories by defining a trait for each of them.

trait BidRepo {
  def get(bid: Long, user: String): Option[Bid]
  def create(bid: Bid): Bid
  def delete(user: String, bid: Long): Option[Bid]
}

trait ItemRepo {
  def get(id: Number): Option[Item]
  def delete(id: Number): Option[Item]
}

trait KeyRepo {
  def validateKey(key: String, app: String, server: String): Boolean
}

Nothing out of the ordinary. We extend these traits in our implementations, as shown below, and we’re done.

class BidRepository extends RepositoryBase with BidRepo {
  ...
}

Now that we’ve defined our traits, we can start using subcut to manage our dependencies. For this we need a couple of things:

Bind an implementation to each trait.
Mark which classes need to have resources injected.
Bootstrap the ‘root’ object with our configuration.

Before we start, we first need to update our build.sbt with the subcut dependency and add the correct repository.
libraryDependencies ++= Seq(
  "com.escalatesoft.subcut" %% "subcut" % "2.0-SNAPSHOT",
  "org.scalaquery" %% "scalaquery" % "0.10.0-M1",
  "postgresql" % "postgresql" % "9.1-901.jdbc4",
  "net.liftweb" %% "lift-json" % "2.4",
  "org.scalatra" % "scalatra" % "2.1.0",
  "org.scalatra" % "scalatra-scalate" % "2.1.0",
  "org.scalatra" % "scalatra-specs2" % "2.1.0",
  "org.scalatra" % "scalatra-akka" % "2.1.0",
  "ch.qos.logback" % "logback-classic" % "1.0.6" % "runtime",
  "org.eclipse.jetty" % "jetty-webapp" % "8.1.5.v20120716" % "container",
  "org.eclipse.jetty" % "test-jetty-servlet" % "8.1.5.v20120716" % "test",
  "org.eclipse.jetty.orbit" % "javax.servlet" % "3.0.0.v201112011016" % "container;provided;test" artifacts (Artifact("javax.servlet", "jar", "jar"))
)

resolvers ++= Seq(
  "Scala-Tools Maven2 Snapshots Repository" at "https://oss.sonatype.org/content/groups/public/",
  "Typesafe Repository" at "http://repo.typesafe.com/typesafe/releases/"
)

This not only adds the subcut dependency, but also the Akka ones we’ll see later in this article.

Bind implementations to a trait

Bindings in subcut are defined in a binding module. By extending a module you create a configuration for your application. For instance, you could define one configuration for test, one for QA, and another for production.

// this defines which components are available for this module
// for this example we won't have that much to inject, so let's
// just bind the repositories.
object ProjectConfiguration extends NewBindingModule(module => {
  import module._ // can now use bind directly

  // in our example we only need to bind to singletons; these bindings will
  // always return the same instance.
  bind [BidRepo] toSingle new BidRepository
  bind [ItemRepo] toSingle new ItemRepository

  // subcut has many more binding options; as an example we bind the KeyRepo
  // so that a new instance is created every time the binding is injected.
  // We use the toProvider option for this.
  bind [KeyRepo] toProvider { new KeyRepository }
})

Without diving too deep into subcut: what we do in this code fragment is bind an implementation to a trait. We do this for all the resources we want to inject, so subcut knows which implementation to create when it encounters a specific trait. If we want to inject different implementations of a specific trait, we can also add an id to the binding so we can uniquely reference them.

Configure classes that need to have resources injected

Now that we have a set of traits bound to implementations, we can let subcut inject the resources. For this we need to do two things. First we need to add an implicit val to the HelloScalatraServlet class:

class HelloScalatraServlet(implicit val bindingModule: BindingModule) extends ScalatraServlet
  with Authentication with RESTRoutes {
  ...
}

This needs to be added to all classes that want to have their resources injected by subcut. With this implicit value subcut has access to the configuration and can use it to inject dependencies. We’ve defined our routes in the RESTRoutes trait, so let’s look at how we configure this trait to work with subcut:

trait RESTRoutes extends ScalatraBase with Injectable {
  // simple logger
  val logger = Logger(classOf[RESTRoutes])

  // This repository is injected based on type. If no binding can be found an exception is thrown
  val itemRepo = inject[ItemRepo]

  // This repo is injected optionally. If none is provided a default one will be created
  val bidRepo = injectOptional[BidRepo] getOrElse { new BidRepository }

  ...
}

We added the Injectable trait from subcut so we can use the inject functions (of which there are multiple variants). In this example the itemRepo is injected using the inject function; if no suitable implementation can be found, an error is thrown. The bidRepo is injected using injectOptional; if nothing was bound, a default is used.
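To make it concrete why a single implicit constructor parameter is enough for all of this to work, here is a stripped-down toy version of the mechanism in plain Scala. This is an illustration only, not subcut’s actual implementation; the class names are invented, and it assumes Scala 2.10+ for ClassTag:

```scala
import scala.reflect.ClassTag

// Toy stand-in for subcut: a module is just a map from trait classes to
// instances, and anything that wants injection takes the module implicitly.
trait BidRepo { def count: Int }
class BidRepository extends BidRepo { def count = 42 }
class StubBidRepository extends BidRepo { def count = 0 }

class Module(bindings: Map[Class[_], Any]) {
  // look up the instance bound to the requested trait
  def inject[T](implicit ct: ClassTag[T]): T =
    bindings(ct.runtimeClass).asInstanceOf[T]
}

// mirrors HelloScalatraServlet(implicit val bindingModule: BindingModule)
class Routes(implicit val module: Module) {
  val bidRepo = module.inject[BidRepo]
}

val productionModule = new Module(Map(classOf[BidRepo] -> new BidRepository))
val stubModule = new Module(Map(classOf[BidRepo] -> new StubBidRepository))

// whatever module is in implicit scope gets picked up at construction time,
// but a different module can always be passed explicitly
implicit val bindingModule: Module = productionModule
println((new Routes).bidRepo.count)           // prints: 42
println(new Routes(stubModule).bidRepo.count) // prints: 0
```

Swapping the implicit module swaps every dependency at once, which is what makes per-environment configurations (test, QA, production) cheap.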
Since this trait is used by the servlet we just saw (the one with the implicit bindingModule), it has access to the binding configuration and subcut will inject the required dependencies.

Bootstrap the ‘root’ object with our configuration

All we need to do now is tell our root object (the servlet) which configuration it should use, and everything will be wired together. We do this from the generated Scalatra listener, where we add the following:

...
override def init(context: ServletContext) {
  // reference the project configuration; this is implicitly passed into the
  // HelloScalatraServlet
  implicit val bindingModule = ProjectConfiguration

  // Mount one or more servlets; this will inject the project configuration
  context.mount(new HelloScalatraServlet, "/*")
}
...

Here we create the bindingModule, which is implicitly passed into the constructor of the HelloScalatraServlet. And that’s it: when you now start the application, subcut will determine which dependencies need to be injected. If all goes well, and all dependencies are found, the application will start up successfully. If one of the dependencies can’t be found, an error like this will be thrown:

15:05:51.112 [main] WARN o.eclipse.jetty.webapp.WebAppContext - Failed startup of context o.e.j.w.WebAppContext{/,file:/Users/jos/Dev/scalatra/firststeps/hello-scalatra/src/main/webapp/},src/main/webapp
org.scala_tools.subcut.inject.BindingException: No binding for key BindingKey(org.smartjava.scalatra.repository.ItemRepo,None)
	at org.scala_tools.subcut.inject.BindingModule$class.inject(BindingModule.scala:66) ~[subcut_2.9.1-2.0-SNAPSHOT.jar:2.0-SNAPSHOT]

On to the next item on the list: Akka.

Add asynchronous processing with Akka

Akka provides a complete actor framework you can use to create scalable, multi-threaded applications. Scalatra supports Akka out of the box, so getting it to work is very easy.
Just add the correct trait, wrap your route bodies in the Future function, and you’re pretty much done. All the action happens in the RESTRoutes trait where we’ve defined our routes. Let’s enable a couple of these methods to use Akka:

trait RESTRoutes extends ScalatraBase with Injectable with AkkaSupport {
  ...
  /**
   * Handles get based on the item's id. This operation doesn't have a specific
   * media-type since we're doing a simple GET without content. This operation
   * returns an item of the type application/vnd.smartbid.item+json
   */
  get("/items/:id") {
    // set the result content type
    contentType = "application/vnd.smartbid.item+json"
    // the future can't access params directly, so extract them first
    val id = params("id").toInt

    Future {
      // convert response to json and return as OK
      itemRepo.get(id) match {
        case Some(x) => Ok(write(x))
        case None => NotFound("Item with id " + id + " not found")
      }
    }
  }

  /**
   * Delete the specified item
   */
  delete("/items/:id") {
    val id = params("id").toInt

    Future {
      itemRepo.delete(id) match {
        case Some(x) => NoContent()
        case None => NotFound("Item with id " + id + " not found")
      }
    }
  }
  ...
}

Not much to see here. We just added the AkkaSupport trait and wrapped our method bodies in the Future function. This runs each code block asynchronously; Scalatra will wait until the block is done and return the result. One thing to note is that inside the future you don’t have access to the request context variables provided by Scalatra. So if you want to set the response content type, you need to do it outside the future. The same goes, for instance, for accessing parameters or the request body. All you need to do now is set up an Akka ActorSystem. The easiest way to do this is by just using the default actor system; see the Akka documentation for the advanced options.

class HelloScalatraServlet(implicit val bindingModule: BindingModule) extends ScalatraServlet
  with Authentication with AkkaSupport with RESTRoutes {

  // create a default actor system. This is used from the futures in the web routes
  val system = ActorSystem()
}

Now when you run the servlet container you’ll be using Akka futures to handle the requests.

Add CORS and deploy on the cloud

As a final step, let’s add CORS. With CORS you can open up your API for use from other domains, which avoids the need for JSONP. Using this in Scalatra is surprisingly simple: just add the CorsSupport trait and you’re done. You’ll see something like this when you start the application:

15:31:28.505 [main] DEBUG o.s.scalatra.HelloScalatraServlet - Enabled CORS Support with:
allowedOrigins: *
allowedMethods: GET, POST, PUT, DELETE, HEAD, OPTIONS, PATCH
allowedHeaders: Cookie, Host, X-Forwarded-For, Accept-Charset, If-Modified-Since, Accept-Language, X-Forwarded-Port, Connection, X-Forwarded-Proto, User-Agent, Referer, Accept-Encoding, X-Requested-With, Authorization, Accept, Content-Type

You can fine-tune what you support by using a set of init parameters explained here. Now all that is left is to package everything up and deploy it to OpenShift. If you haven’t done so already, register on OpenShift (it’s free). For my example I use a standard ‘JBoss Application Server 7.1’ application, without any cartridges. I didn’t want to configure PostgreSQL, so I created a dummy repo implementation:

class DummyBidRepository extends BidRepo {
  val dummy = new Bid(Option(10l), 10, 10, 20, "FL", 10l, 12345l, List())

  def get(bid: Long, user: String): Option[Bid] = Option(dummy)
  def create(bid: Bid): Bid = dummy
  def delete(user: String, bid: Long): Option[Bid] = Option(dummy)
}

And I used subcut to inject this one instead of the repo that requires a database:

bind [BidRepo] toSingle new DummyBidRepository

With this small change we can use sbt to create the war file.
jos@Joss-MacBook-Pro.local:~/Dev/scalatra/firststeps/hello-scalatra$ sbt package && cp target/scala-2.9.1/hello-scalatra_2.9.1-0.1.0-SNAPSHOT.war ~/dev/git/smartjava/deployments/
[info] Loading project definition from /Users/jos/Dev/scalatra/firststeps/hello-scalatra/project
[info] Set current project to hello-scalatra (in build file:/Users/jos/Dev/scalatra/firststeps/hello-scalatra/)
[info] Compiling 2 Scala sources to /Users/jos/Dev/scalatra/firststeps/hello-scalatra/target/scala-2.9.1/classes...
[info] Packaging /Users/jos/Dev/scalatra/firststeps/hello-scalatra/target/scala-2.9.1/hello-scalatra_2.9.1-0.1.0-SNAPSHOT.war ...
[info] Done packaging.
[success] Total time: 7 s, completed Oct 5, 2012 1:57:12 PM

And use git to deploy it to OpenShift:

jos@Joss-MacBook-Pro.local:~/git/smartjava/deployments$ git add hello-scalatra_2.9.1-0.1.0-SNAPSHOT.war && git commit -m 'update' && git push
[master b1c6eae] update
 1 files changed, 0 insertions(+), 0 deletions(-)
Counting objects: 7, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 11.16 KiB, done.
Total 4 (delta 3), reused 0 (delta 0)
remote: Stopping application...
remote: Done
remote: ~/git/smartjava.git ~/git/smartjava.git
remote: ~/git/smartjava.git
remote: Running .openshift/action_hooks/pre_build
remote: Emptying tmp dir: /var/lib/stickshift/3bc81f5b0d7c48ad84442698c9da3ac4/smartjava/jbossas-7/standalone/tmp/work
remote: Running .openshift/action_hooks/deploy
remote: Starting application...
remote: Done
remote: Running .openshift/action_hooks/post_deploy
To ssh://3bc81f5b0d7c48ad84442698c9da3ac4@smartjava-scalatra.rhcloud.com/~/git/smartjava.git/
   a45121a..b1c6eae  master -> master

You’ll probably see something similar, and now you’re done. Or at least almost done, because look what happens when you access a resource: Hmm... something went wrong.
This is the message that’s interesting to us:

java.lang.IllegalStateException: The servlet or filters that are being used by this request do not support async operation

Hmmm... apparently JBoss AS handles servlets a bit differently from Jetty. The reason we see this message is that by default, according to the Servlet 3.0 spec, servlets aren’t enabled for async operations. Since we use Akka futures as the result of our routes, we need this async support. Normally you enable this support in a web.xml or using annotations on a servlet. In our case, however, our servlet is started from a listener:

override def init(context: ServletContext) {
  // reference the project configuration; this is implicitly passed into the
  // HelloScalatraServlet
  implicit val bindingModule = ProjectConfiguration

  // Mount one or more servlets; this will inject the project configuration
  context.mount(new HelloScalatraServlet, "/*")
}

context.mount is a convenience method provided by Scalatra that registers the servlet. However, it doesn’t enable async support. If we register the servlet ourselves, we can enable it. So replace the previous function with this one:

override def init(context: ServletContext) {
  // reference the project configuration; this is implicitly passed into the
  // HelloScalatraServlet
  implicit val bindingModule = ProjectConfiguration

  val servlet = new HelloScalatraServlet
  val reg = context.addServlet(servlet.getClass.getName, servlet)
  reg.addMapping("/*")
  reg.setAsyncSupported(true)
}

Now we explicitly enable async support. Create a package again, and use git to deploy the web app to OpenShift:

sbt package
git add hello-scalatra_2.9.1-0.1.0-SNAPSHOT.war && git commit -m 'update' && git push

And now you’ve got a working version of your API running on OpenShift! Happy coding and don’t forget to share! Reference: Tutorial: Getting started with scala and scalatra – Part IV from our JCG partner Jos Dirksen at the Smart Java blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.