
What's New Here?


What is your “x”? The technical details about the Devoxx Keynote Demo

If everything went fine, I just got off the stage and ended the demo part of Mike's (@mpiech) keynote. We talked a lot about xPaaS, about the different pieces of the puzzle that we integrated, and about our newest acquisition, FeedHenry. I think all of this is worth a more detailed blog post about the inner workings of the demo.

Background

Red Hat released a new version of OpenShift Enterprise, 2.2, on October 10, 2014. This version also adds support for the private integration solution (iPaaS) Fuse. Besides this, we recently acquired FeedHenry (mPaaS), and it was about time to show how those technologies actually integrate and help customers build solutions for tomorrow's applications. I've been using the following overview in my latest talks to outline what a decent integration of all the different PaaS platforms might look like. On a very high level it contains exactly the parts and setup we used for today's demo. I have to admit that it is far more complicated than what we came up with in the demo on stage, but let's not complain, and instead go over the different parts in more detail.

Mobile App

The mobile part of the demo was built using Cordova, which is very well supported by FeedHenry: from its application console you can create builds for iOS and Android, but also easily test the app locally. We used the FeedHenry client-side JavaScript library, which, among many other things, allows us to generate statistics for the application management console; this can be very helpful when you have a production issue and are trying to diagnose the problem, for instance. Using Cordova together with Node is really nice: there is no need to switch paradigms, and you can rapidly try out new ideas.

FeedHenry Mobile Application Platform

FeedHenry is a cloud-based mobile application platform to design, develop, deploy and manage mobile applications. The platform provides specific services for security, notification and data synchronization. You can build hybrid apps for mobile devices, and it covers the complete development lifecycle. You can think of it as an Xcode-like IDE in the cloud with three different components for applications: obviously the mobile app itself, a server backend which you can build on top of Node.js, and so-called mPaaS services which can be reused across different applications. The interesting parts of the demo are the two services: one connects to JBoss A-MQ for xPaaS running on OpenShift Enterprise via the TCP/STOMP protocol, and the other one connects to an AeroGear UnifiedPush Server instance running on OpenShift Online via REST. The new AeroGear MBaaS Integration Service leverages the Node.js module already developed by the AeroGear team and provides a simple, secure way to integrate push notifications into your FeedHenry apps. The service itself is quick and painless to set up: simply provide the details of your AeroGear installation and the credentials for the AeroGear app you want to use, and you are ready to go. Using the service is just as easy: as a standard FeedHenry MBaaS service, you can call it in exactly the same way as you would any other MBaaS service, with a clean, crisp FeedHenry API call.

AeroGear UnifiedPush Server on OpenShift Online

The AeroGear project is a one-stop solution for all your push notification needs, covering native iOS and Android, Cordova hybrid apps, as well as SimplePush for the web. It is available to use right now from the OpenShift Marketplace, so feel free to give it a whirl.
JBoss Fuse and A-MQ for xPaaS on OpenShift Enterprise

JBoss Fuse for xPaaS and JBoss A-MQ for xPaaS are based on Red Hat's traditional on-premise integration and messaging offerings, Red Hat JBoss Fuse and Red Hat JBoss A-MQ. The latest versions of both products, announced at Red Hat Summit 2014, introduced features such as full support for AMQP 1.0, a vast library of connectors, improved high availability, and the ability to manage processes. In this particular example, both running instances are managed via Fuse Fabric, and the deployment of the Camel route that receives the Twitter stream of tweets was actually done via a profile on the fabric. Distributing those applications and infrastructures gets insanely easy this way. In the end, the Camel route isn't very magical: just a few lines of code for a log and a little conversion to JSON to make it easier for the FeedHenry A-MQ endpoint to consume it on the Node.js end (a sketch of what such a route might look like follows at the end of this section). This screenshot was taken before the demo happened; I hope we will have some higher numbers after the keynote. The A-MQ side is even simpler: a basic standalone broker setup with just one queue called "tweets", as you might have guessed already. We are using two different client connectors: the Camel instance pushes messages via OpenWire, and the FeedHenry service listens to them using STOMP. We're not actually sending around binary content here, so this was the easiest setup. To be clear at this point: the Twitter integration is a nice showcase which we used to have a contact point with the audience. In real-life scenarios you're going to connect heavyweight systems with Fuse: SAP, Oracle EBS, you name them.
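The route itself was not published in the post; purely as illustration, a minimal sketch of such a twitter-to-queue route in Camel's Java DSL might look like the following (the search keywords, the endpoint configuration and the broker component name are assumptions, not the demo's real code; the twitter and activemq components also need their usual credentials and broker configuration):

import org.apache.camel.builder.RouteBuilder;

public class TweetsRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Poll Twitter search results, log each tweet, convert it to
        // JSON and push it onto the "tweets" queue on the A-MQ broker,
        // where the FeedHenry Node.js service consumes it via STOMP.
        from("twitter://search?type=polling&keywords=devoxx")
            .log("Received tweet: ${body}")
            .marshal().json()
            .to("activemq:queue:tweets");
    }
}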
Takeaway

The beauty of this is the overly complex architecture. We could have taken many shorter ways to make it work, but on the other hand, it was a very good exercise. In less than two weeks the Red Hat and FeedHenry teams made both technical integrations possible, and we are very proud to have integrated this first chunk of services and to help people better understand what the different products are meant to be used for.

My Thank You's

Even if I had the pleasure of being on stage for the demo, I only did the very simple backend part. There are a bunch of folks I'd like to mention here:
- John Frizelle and Conor O'Neill for being our 24/7 contact into FeedHenry. There's not a single thing those two couldn't solve.
- Jay Balunas, Erik-Jan De Wit, Sebastien Blanc and Matthias Wessendorf for developing the mobile bits and pieces and writing the Node.js service that we now have available in FeedHenry.
- Ben Parees, Grant Shipley, Marek Jelen and Hiram Chirino for all their efforts around OpenShift Online AND the Enterprise deployment that we used for the demo.
- Mike and Arun for all their support around the demo and their patience, because we didn't have it ready until the very last minute.
- Christian Posta for holding my hand with all kinds of stupid Fuse questions.

Further Readings

If you are curious, you can get started right away with reading some more about what we did. Please keep in mind that the acquisition is quite new and we still don't have a publicly available version of FeedHenry ready for you to play with, but this is in the works. Keep your eyes open. See: FeedHenry and Red Hat Pushing ahead with Integrations; New JBoss xPaaS offerings help developers integrate SaaS, PaaS, and on-premise applications; FeedHenry meets AeroGear UnifiedPush Server!

Reference: What is your "x"? The technical details about the Devoxx Keynote Demo from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog.

OSGi Testsuite: Introducing Classname Filters

OSGi Testsuite is a JUnit test runner that dynamically collects test classes for execution. It was published by my fellow Rüdiger about a year ago and has already proven useful in some projects. However, for gonsole we had to use an ugly patch, because version 1.0 only supported .*Test postfix matching for test class names. I solved this problem in version 1.1 by introducing an annotation, @ClassnameFilters, that uses regular expressions to match arbitrary name patterns. This post explains in short how it works.

OSGi Testsuite

OSGi Testsuite provides a JUnit test runner, BundleTestSuite, that can be used to run all tests within a given number of OSGi bundles. To use it, annotate a class with @RunWith(BundleTestSuite.class) and specify the bundles with @TestBundles({"bundle.1", ...}). When run, JUnit will process all classes in the listed bundles whose names end with 'Test'.

@RunWith( BundleTestSuite.class )
@TestBundles( { "org.example.bundle1", "org.example.bundle2" } )
public class MasterTestSuite {}

Unfortunately, the 'Test' postfix fixation has turned out to be a bit too inflexible. Within gonsole we use different postfixes for unit and integration tests, and we do not want the unit tests to be executed within the OSGi Testsuite run. But this distinction is not possible with version 1.0.

ClassnameFilters

Inspired by ClasspathSuite (which works similarly to OSGi Testsuite, but on plain JUnit tests), I introduced an annotation @ClassnameFilters. It allows you to define filters based on regular expressions to match arbitrary test name patterns:

@RunWith( BundleTestSuite.class )
@TestBundles( { "org.example.bundle1", "org.example.bundle2" } )
@ClassnameFilters( { ".*ITest" } )
public class IntegrationTestSuite {}

Processing the example would include all the tests of classes in the listed bundles whose names end with the 'ITest' postfix. Note that classes with the simple 'Test' postfix would not be processed. Furthermore, it is possible to specify exclusion patterns using a leading '!':

@RunWith( BundleTestSuite.class )
@TestBundles( { "org.example.bundle1", "org.example.bundle2" } )
@ClassnameFilters( { ".*ITest", "!.*FooITest" } )
public class IntegrationTestSuite {}

The given example would now execute all the tests of classes in the listed bundles whose names end with the 'ITest' postfix, except for classes whose names end with 'FooITest'. Simple enough, isn't it?
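The post mentions, but does not show, the complementary unit-test run. Under the naming convention described above (integration tests ending with 'ITest', and the same example bundle names assumed), such a suite might be expressed like this:

@RunWith( BundleTestSuite.class )
@TestBundles( { "org.example.bundle1", "org.example.bundle2" } )
@ClassnameFilters( { ".*Test", "!.*ITest" } )
public class UnitTestSuite {}

Since ".*Test" would also match the integration test classes, the "!.*ITest" exclusion keeps the two runs cleanly separated.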
Conclusion

OSGi Testsuite has been enhanced with a filter mechanism for the dynamic execution of test classes that match arbitrary name patterns. Filter specification is done easily using the @ClassnameFilters annotation and regular expressions. The code is available under the Eclipse Public License and hosted on GitHub: https://github.com/rherrmann/osgi-testsuite The latest stable version can be obtained from this p2 repository: http://rherrmann.github.io/osgi-testsuite/repository

Reference: OSGi Testsuite: Introducing Classname Filters from our JCG partner Frank Appel at the Code Affine blog.

Agile Through a Matrix Lens

“Agile” is something most teams do wrong*, without realizing they're doing it wrong. A good 2×2 matrix acts as a lens, helping to convert information into insight. Let's apply this lens to agile as applied within a company, and see if it helps people decide to do things differently.

When You Say Agile, What Do You Mean?

There may be as many definitions of agile as there are teams practicing agile development. Generally, people are talking about iterating in what they do. Instead of having a long, throw-it-over-the-wall process out of which emerges a deliverable, a team will have a series of shorter iterations in which they engage stakeholders and otherwise rethink what they are doing, to course-correct and "get smarter" about what they are doing. The Wikipedia page on agile is pretty comprehensive. Most teams think about agility in terms of how their development teams manage their process. When "going agile" is the only thing you do, your product does not magically become more successful. Some teams** think about what it means to be agile when determining what the development team should be doing in the first place. My epiphany was in realizing that these are two separate decisions an organization can make.

A 2×2 Matrix of Agile

When an organization can make two discrete decisions about being agile in how it creates products, there are four possible outcomes. A 2×2 matrix can act as a powerful lens for exploring these decisions. Our first step is to define our two axes.

Requirements – how are they treated within the organization / by the team?
- Requirements and expectations are immutable – this is the typical expectation within a large bureaucracy; someone built a business case, got funding, and allocated a team to deliver the product as defined.
- Requirements continually revisited – this is what we see nimble teams doing, at different levels of granularity, context, and relevance; at a low level this is A|B testing, and at a high level this is a pivot.

Development process cadence – how frequently does the team deliver***?
- Infrequent delivery – there is no one-size-fits-all measure to define infrequent vs. frequent; some companies will have fast-moving competitors, customers with rapidly evolving expectations, and significant influence from evolving technology – others will not (for now).
- Frequent delivery – the precise delineation from infrequent to frequent delivery is contextually dependent.

With these two axes, we can draw a matrix. A subordinate message that I couldn't resist putting into the matrix is that it is harder to do your job in an agile way. I think you could pedantically argue that agile is easier, by saying it is easier to deliver equivalent results when your process is agile. And that's true. The point is to deliver a more successful product, which is harder than delivering a less successful product. An agile approach makes that task easier. Maybe another way to think about it: if your team is not capable of delivering a good product, going agile will make that more obvious, faster.

Living in Boxes

Everyone can map their team into one of the four boxes. That's the power of this sort of abstraction. Here's where I can use your help: what are better names for these boxes? I have satisficed with these names, but they could be better. Please comment below with proposed alternatives, because I'll be incorporating this lens into other aspects of my work, and I want it to be better than it currently is.
Waterfall as Practiced

While there are some teams which consciously choose waterfall because of the planning benefits or perceived risks to quality, I believe that most waterfall teams are still waterfall either because they haven't chosen to revisit their process choices, or because they tried and failed. Perhaps their instructors weren't good; perhaps the team was not equipped to make the shift. My guess is that their organizations were unwilling or unable to support any change in the bureaucratic status quo, effectively making it impossible for the teams to succeed.

BUFD & BUFR (Buffed and Buffer)

BUFR is an acronym for big up-front requirements, and BUFD is the equivalent for big up-front design. Both are labels assigned as part of the decade-old war between interaction design and extreme programming. Conceptually, the battle is between rationalists and empiricists. In a nutshell, the requirements are Defined (capital-D Defined), then the team applies an agile development methodology (mostly) to incrementally build the product according to the requirements. This is another area we can explore more – what are requirements, what is design, who owns what? My main point is that the developers, while going through the agile motions – even when getting feedback – are only realizing some of the benefits of agile. Yes, they can improve the effectiveness of their particular design or implementation at solving the intended problem. Yes, they can avoid the death march. The problem is that the requirements are, metaphorically, set in stone. At the end of the day, the team is empowered to rapidly iterate on, and change, how they choose to solve the target (market) problems. The team is not empowered to rapidly change their minds about which market problems to target. When agile is being introduced to a larger organization as a grass-roots initiative starting with a development team, this is the corner the team will find themselves in.

Req's Churn or Glacial Dev

I struggle to find the right way to describe the situation where the people responsible for determining the requirements are getting market feedback and changing their requirements, while the people responsible for creating the product are unwilling or unable to accept changes from the initial plan. From the development team's point of view, "the product manager can't make up his mind – we are just churning, without getting anything done!" From the product manager's point of view, "the development team is too slow, or intransigent, and can't seem to keep up." There's only one environment where this approach is somewhat rational: outsourced development with limited trust. When the relationship between the product management / design team and the product creation / test team is defined by a contract, or the two teams do not trust each other, the only reasonable way to make things work is to establish explicit expectations up front and then deliver to those specifications. Note that the specifications typically include a change-management process, which facilitates reaching an agreement to change the plan. The right way to make this type of relationship work is to change it, but if you're stuck with it, this is your box.

Agile as Intended

Ah, the magic fourth box, where rapid delivery leads to rapid learning, which leads to rapid changes in the plan. The success of agile is predicated on the assumption that as we get feedback from the market, we get smarter; as we get smarter, we make better choices about what to do next.
This is what enables a sustainable competitive advantage: it enables you to sustainably differentiate your product from the competition and rapidly adapt to changing customer expectations and market conditions. Effectively, you are empowered to change what you choose to do, as well as how you choose to do it. This is what agile product management is about – enabling business agility. A winning strategy involves selecting an attractive market, developing a strategy for how you will compete within that market, then developing a product (or portfolio) roadmap which manifests the strategy while embodying the vision of the company. It is possible to do this in any corner of the matrix (except the upper left, in my opinion). The less willing you are to rely on your ability to predict the future accurately, the more you will want to be in the upper right corner.

Conclusion

There isn't a particularly strong argument against operating your team in the upper right-hand corner of the matrix, Agile as Intended. The best argument is really just "we aren't there yet." From conversations I've had with many team leaders, they seemed to think that getting to the lower right corner was the right definition of "done." They thought they were "doing agile" and that there wasn't anything left to change, organizationally. And they wondered why their teams weren't delivering on the promise of agile. It's because they weren't there yet. Hopefully this visual will help drive the conversation forward for some of you out there. Let me know if it helps light bulbs go off.

Attributions and Clarifications

Special thanks to Prabhakar Gopalan, who introduced me to The Power of the 2×2 Matrix, a really good book for framing ideas, in his craftsman product management training class.

*Agile isn't really a noun, something you do; agile is an adverb describing how you do something. English is a funny language, and "doing agile" is generally used to mean developing a product in an agile manner. Sometimes it is important to point this out, particularly when you're trying to help people focus on the product and not the process (like here), but for this article I didn't want to dilute the other messages. As a bonus, the people who would be tweaked by the use of agile as a noun are generally people who "get it," and I like the idea that they read the whole article just to see this caveat. Thanks for reading this!

**This is based on anecdata (thanks, Prabhakar, for the great word), but my impression is that small companies commonly do this – think Lean Startup – and large companies rarely do this. I suspect this is more about the challenge of managing expectations and otherwise navigating a bureaucracy built to reward execution against a predetermined plan.

***Definitions of "deliver" make for a great devil-is-in-the-details discussion too – do you deliver to end customers or internal stakeholders? What if your existing customers refuse to update every month? How many versions of your product do you want in the field? Another great topic, but not the focus of this article.

Reference: Agile Through a Matrix Lens from our JCG partner Scott Sehlhorst at the Tyner Blain blog.

Don’t Migrate to MariaDB just yet. MySQL is Back!

Now that I have your attention, I'd like to invite you to a critical review of where we're at in the MySQL vs. MariaDB debate. Around one month ago, I visited Oracle Open World 2014 and met with Morgan Tocker, the MySQL community manager at Oracle, to learn where MySQL is heading.

Who "is" MySQL?

An interesting thing I learned is that, according to Morgan, quite a few former MySQL AB employees stayed with MySQL when Sun acquired the database, and are still there now (perhaps after a short break), now that Oracle owns Sun and thus MySQL. In fact, right now as we speak, Oracle is pushing hard to get even more people on board of the MySQL team; when this article was written, there were 21 open MySQL jobs listed on a blog post dating from February 25, 2014. More details about Oracle's plans to promote and push MySQL can be seen in the presentation by Morten Andersen linked at the end of this post.

So, if you're still contemplating a migration to MariaDB, maybe give all this a second thought. MySQL is still the database used by most large companies, including Facebook and LinkedIn. MySQL won't go away, and it will get much better in the next couple of years – maybe even to a point where it has a feature set similar to PostgreSQL's? We can only hope. Need another convincing argument? You can now use Oracle Enterprise Manager for both Oracle and MySQL databases. How awesome is that!

Stay tuned for a couple of additional posts on this blog about what's new and what's on the MySQL roadmap in the next years. See all of Morten Andersen's slides here: Introduction & News from OOW and MySQL Central, from Morten Andersen.

Reference: Don't Migrate to MariaDB just yet. MySQL is Back! from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog.

Develop android weather app with Material Design

This post describes how to create a weather app following the Material Design guidelines. Material Design is a set of rules for visual design, UI interaction, motion and so on. These rules help developers when they design and create an Android app. We will build the weather app using Weatherlib as the weather layer and the Material Design rules. We want to develop this app not only for Android 5 Lollipop, which supports Material Design natively, but also for previous versions of Android, like 4.x KitKat. For that reason we will introduce the appCompat v7 library, which helps us implement Material Design on earlier Android versions as well. We want to code an app that has an extended Toolbar holding some information about the location and the current weather, plus some basic weather data: temperature, weather icon, humidity, wind and pressure. At the end we will get something like the pic shown below.

Android project set up

The first thing we have to do is configure our project so that we can use Weatherlib and especially appCompat v7. We can open build.gradle and add these lines:

dependencies {
    compile fileTree(dir: 'libs', include: ['*.jar'])
    compile 'com.android.support:appcompat-v7:21.0.+'
    compile 'com.mcxiaoke.volley:library:1.0.6@aar'
    compile 'com.survivingwithandroid:weatherlib:1.5.3'
    compile 'com.survivingwithandroid:weatherlib_volleyclient:1.5.3'
}

Now that we have our project correctly set up, we can start defining our app layout.

App layout: Material Design

As briefly explained before, we want to use the Toolbar, in our case the extended Toolbar. A Toolbar is a generalization of the action bar that gives us more control: unlike the action bar, which is tightly bound to an Activity, a Toolbar can be placed anywhere inside the view hierarchy. So our layout will be divided into three main areas:
- Toolbar area
- Weather icon and temperature
- Other weather data

The layout is shown below:

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".WeatherActivity"
    android:orientation="vertical">

    <android.support.v7.widget.Toolbar
        xmlns:app="http://schemas.android.com/apk/res-auto"
        android:id="@+id/my_toolbar"
        android:layout_height="128dp"
        app:popupTheme="@style/ActionBarPopupThemeOverlay"
        android:layout_width="match_parent"
        android:background="?attr/colorPrimary"
        android:paddingLeft="72dp"
        android:paddingBottom="16dp"
        android:gravity="bottom"
        app:titleTextAppearance="@style/Toolbartitle"
        app:subtitleTextAppearance="@style/ToolbarSubtitle"
        app:theme="@style/ThemeOverlay.AppCompat.Light"
        android:title="@string/location_placeholder" />
    ....
</RelativeLayout>

As you can see, we used a Toolbar. We set the toolbar height to 128dp, as stated in the guidelines, and used the primary color as background. The primary color is defined in colors.xml; you can refer to the Material Design color guidelines for more information. We should define at least three different colors:
- the primary color, identified by 500
- the primary dark color, identified by 700
- the accent color, which should be used for primary action buttons and so on

Our toolbar background color is set to the primary color:

<resources>
    <color name="primaryColor_500">#03a9f4</color>
    <color name="primaryDarkColor_700">#0288d1</color>
    ....
</resources>

Moreover, the left padding and the bottom padding inside the toolbar are defined according to the guidelines.
Finally, we add the menu items as we are used to doing with the action bar. The main result is shown below: as you can see, the toolbar background equals the primary color.

Search city: popup with Material Design

We can use a popup to let the user enter the location. The popup is very simple: it is made of an EditText, used to enter the search pattern, and a simple ListView that shows the cities matching the pattern inserted in the EditText. I will not cover how to search for a city with Weatherlib, because I have already covered it. The result is shown here. The popup layout is shown below:

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <TextView
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:text="@string/dialog.city.header"
        style="@style/Theme.AppCompat.Dialog"/>

    <TextView
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_marginTop="8sp"
        android:text="@string/dialog.city.pattern"/>

    <EditText
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_marginTop="8dp"
        android:id="@+id/ptnEdit"/>

    <ListView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:id="@+id/cityList"
        android:clickable="true"/>

</LinearLayout>

The code to create and handle the dialog is shown below:

private Dialog createDialog() {
    AlertDialog.Builder builder = new AlertDialog.Builder(this);
    LayoutInflater inflater = this.getLayoutInflater();
    View v = inflater.inflate(R.layout.select_city_dialog, null);
    builder.setView(v);

    EditText et = (EditText) v.findViewById(R.id.ptnEdit);
    ....
    et.addTextChangedListener(new TextWatcher() {
        ....
        @Override
        public void onTextChanged(CharSequence s, int start, int before, int count) {
            if (count > 3) {
                // We start searching
                weatherclient.searchCity(s.toString(), new WeatherClient.CityEventListener() {
                    @Override
                    public void onCityListRetrieved(List<City> cities) {
                        CityAdapter ca = new CityAdapter(WeatherActivity.this, cities);
                        cityListView.setAdapter(ca);
                    }
                });
            }
        }
    });

    builder.setPositiveButton("Accept", new DialogInterface.OnClickListener() {
        @Override
        public void onClick(DialogInterface dialog, int which) {
            dialog.dismiss();
            // We update the toolbar
            toolbar.setTitle(currentCity.getName() + "," + currentCity.getCountry());
            // Start getting weather
            getWeather();
        }
    });

    builder.setNegativeButton("Cancel", new DialogInterface.OnClickListener() {
        @Override
        public void onClick(DialogInterface dialog, int which) {
            dialog.dismiss();
        }
    });

    return builder.create();
}

An important thing to notice is the positive-button handler, where we set the toolbar title according to the city selected by the user and then fetch the current weather.
The result of this piece of code is shown here.

Weatherlib: getting the weather

To get the current weather we use Weatherlib:

private void getWeather() {
    weatherclient.getCurrentCondition(new WeatherRequest(currentCity.getId()),
            new WeatherClient.WeatherEventListener() {
        @Override
        public void onWeatherRetrieved(CurrentWeather currentWeather) {
            // We have the current weather now
            // Update the toolbar subtitle
            toolbar.setSubtitle(currentWeather.weather.currentCondition.getDescr());
            tempView.setText(String.format("%.0f", currentWeather.weather.temperature.getTemp()));
            pressView.setText(String.valueOf(currentWeather.weather.currentCondition.getPressure()));
            windView.setText(String.valueOf(currentWeather.weather.wind.getSpeed()));
            humView.setText(String.valueOf(currentWeather.weather.currentCondition.getHumidity()));
            weatherIcon.setImageResource(WeatherIconMapper.getWeatherResource(
                    currentWeather.weather.currentCondition.getIcon(),
                    currentWeather.weather.currentCondition.getWeatherId()));

            setToolbarColor(currentWeather.weather.temperature.getTemp());
        }
        ....
    });
}

Notice that in onWeatherRetrieved we set the toolbar subtitle according to the current weather condition, and at the end we change the toolbar color according to the current temperature. As toolbar background colors we used the primary colors shown in the guidelines.
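The post calls setToolbarColor but never shows it. A minimal sketch of what such a helper could look like is given below; the temperature threshold and the reuse of the two colors from colors.xml are my assumptions, not the author's actual implementation:

// Hypothetical helper: pick a toolbar color based on the temperature.
// The threshold is arbitrary; the colors come from the colors.xml above.
private void setToolbarColor(float temp) {
    int colorId = R.color.primaryColor_500;      // default primary color
    if (temp < 0) {
        colorId = R.color.primaryDarkColor_700;  // colder: darker shade
    }
    toolbar.setBackgroundColor(getResources().getColor(colorId));
}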
The source code is available on GitHub.

Reference: Develop android weather app with Material Design from our JCG partner Francesco Azzola at the Surviving w/ Android blog.

Java performance tuning survey results (part I)

We conducted a Java performance tuning survey during October 2014. The main goal of the survey was to gather insight into the Java performance world in order to improve the Plumbr product offering. However, we are happy to share the interesting results with you as well. The data that we collected provided material for a lengthy analysis, so we decided to divide the results into a series of blog posts. This is the first one, trying to answer the following questions:
- Who deals with Java performance issues?
- How widespread are Java performance issues?
- How long does it take to solve such issues?
- Where is this time spent?

Engineering roles that answered our survey

In total, 308 respondents answered our call and completed the survey during October 2014. We also profiled the respondents based on their roles, and the following chart illustrates the different titles used. Zooming further out on this distribution, we can say that the data is distributed by respondent role as follows:
- 73% engineering
- 6% operations
- 2% QA
- 14% management
- 5% failed to categorize

We can conclude that the survey is mostly based on engineering roles, with a slight touch from management, operations and QA people.

93% of the respondents faced performance issues during the past year

"Have you faced any Java performance issues during the past 12 months?" was the very first question, building the overall foundation for the rest of the survey. Out of the 308 respondents, 286, or 93%, confirmed that they had faced a performance issue with Java during the last year. For these 286 people we had nine more survey questions to answer. For the remaining 22, who did not face any Java performance issues during the last year, this was also the last question of the survey. We do admit that the selection of people answering our survey was likely biased, and this number does not truly represent the status in the Java world. After all, when you are building performance monitoring tools, the people who tend to hang around your web site are more likely to have been recently involved in the performance monitoring domain. Thus we cannot really claim that 93% of the people working with Java applications face performance issues on a yearly basis. What we definitely can claim is that we have data from 286 unique examples of performance issues in Java applications. So let's see what the issues were all about.

Most of the time is spent on reproducing, evidence gathering and root cause analysis

Out of the 308 respondents, 156 chose to answer the "What was the most time-consuming part of the process?" question. This was a free-text question, and we were able to categorize 146 of the answers. These answers proved to be one of the most interesting outcomes of the survey. It is rather astonishing to see that 76% of the respondents struggle the most with the "trying to reproduce – gather evidence – make sense of the evidence – link evidence to the root cause" cycle:
- 20% of the respondents spent most of the time trying to reproduce the issue, so that they could start gathering evidence
- 25% struggled the most with trying to gather evidence (such as log files or heap/thread dumps) and to make sense of that evidence
- 30% spent most of the time trying to link the evidence to the root cause in source code or configuration

To be fair, you should also note that a rather significant share (13%) of respondents claimed that building the actual solution to the problem was the most time-consuming part of the process.
Even though that is a noticeable share, it is still more than five times smaller than the share of users spending most of their time in the vicious cycle of trying to get down to the root cause.

How long did it take you to solve the performance issue?

In this section we asked respondents to quantify the pain they faced when trying to detect the root cause. Again, we had 284 respondents answering this question. The answers confirm that even though some of the cases are easy to detect and troubleshoot, most performance issues are tough to solve. Kudos to the eight respondents who found and fixed their issue in less than an hour, but let's stop for a moment and focus on the 48 respondents (17% of the cases) for whom tracing down and solving a performance issue took more than a month. Another way to interpret the data above is to look at the median and average time spent:
- The median time falls into the "more than a day but less than a week" range, translating to several days spent on detection and troubleshooting.
- The average is a bit trickier to calculate due to the missing upper boundary, but when assuming that "more than a month" translates to "exactly two months", the average time spent finding and fixing the root cause is 80 hours.

If we look at the total time spent, the numbers start to look even scarier: the 284 respondents spent 22,600 hours in total on detecting and troubleshooting a single performance issue each. This is equivalent to a bit more than 130 man-months. That number alone is a clear sign that this domain is in dire need of better solutions.
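As a quick sanity check of those totals, using only the numbers quoted above (and an assumed man-month of roughly 173 working hours, which the post does not state):

80 hours average x 284 respondents = 22,720 hours, matching the reported total of roughly 22,600 hours
22,600 hours / 173 hours per man-month = 130.6, i.e. "a bit more than 130 man-months"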
Reference: Java performance tuning survey results (part I) from our JCG partner Ivo Mägi at the Plumbr Blog.

Development Overhead

What does a developer spend his time on? Writing code, debugging, thinking and communicating with colleagues (that includes meetings). Anything beyond these activities is unnecessary overhead (some meetings are also unnecessary, but that's a different topic). And yet, depending on our language and tools, we have to do a lot more to support the process of writing code. These activities include, but are not limited to:
- manually formatting your code – the code has to be beautifully aligned and formatted, but that's an extra effort
- using search and replace instead of refactoring – few languages and tools support good refactoring, and that's priceless in a big project
- manually invoking compilation – compile-on-save gives you immediate feedback; the need to manually run a compiler adds a whole unnecessary step to your coding process
- slow compilation – on top of the previous issue, if your compiler is slow, it's just dead time (mandatory xkcd)
- slow time-to-deploy – if the time from writing the code to running it is more than a few seconds, you are wasting enormous amounts of time, e.g. if you need to manually make builds and copy files to a local server
- clunky resource navigation – if you can't go to a given source file in a couple of keystrokes
- infrastructure problems – you depend on a database, a message queue, possibly some external service, and installing and supporting these components on your development machine can be painful. Recently we spent one day trying to integrate 3 components, some of which had Docker instances, and on neither Windows nor Mac did Docker work properly. It was a painful error-google-try-error-google process to even get things started. Avoid immature, untested tools (not bashing Docker here, just an example, and it might have already been improved/fixed)
- OS issues – if your OS crashes every couple of days, your I/O is blocking your UI, and you sometimes lose your ALT+TAB functionality (all things I've been experiencing when using Ubuntu), then your OS is wasting a significant amount of your time

Most of the manual tasks above can be automated, and the others should not exist at all. If you are using Java, for example, you can have a stable IDE with automatic formatting and refactoring, with compile-on-save, and with save-and-refresh for webapps. And you can use an operating system that doesn't make you recompile the kernel every now and then in order to keep it working (note: hyperbole here). It's often a tradeoff. If I have to compare Java to Groovy in terms of productivity, for example, the (perceived) verbosity of Java is a minor nuisance compared to the lack of refactoring, formatting, etc. in Groovy (at least that was the case a few years ago; and it's still the same with Scala nowadays). Yes, you have to write a few more lines, but it's a known process. If you have immature tools that are constantly breaking or just don't work (and that is the case, unfortunately), it's unknown how you should proceed, and you may end up wasting 10 minutes in manual "labour", which kills the productivity that a language gives you. For me, Linux was also such a tradeoff: having the terminal is sometimes useful indeed, but it did not justify the effort of keeping the system working (and it completely died after a version upgrade). Because I really feel all that overhead draining my productivity, I am very picky when it comes to the technologies I use.
Being able to type faster or write fewer lines of code is fine, but you have to weigh that against the rest of the procedures you are forced to do. And that's part of the reason why I prefer an IDE over a text editor, why I don't use Emacs, why I don't like Scala, and why I don't use Linux. Your experience may very well be different (and if Facebook-checking is already taking half of your day, then nothing above really matters). But try to measure (or at least observe) how much time you spend not doing actual programming (or thinking) because you have to do "automatable" or redundant stuff instead. And try to ignore the feeling of accomplishment when you do something that you didn't have to do in the first place. And if your preferred technologies turn out to be silently draining productivity, then consider changing them (or improving them, if you have the spare time).

Reference: Development Overhead from our JCG partner Bozhidar Bozhanov at the Bozho's tech blog.

Painless Access from Java to PL/SQL Procedures with jOOQ

PL/SQL is one of those things. Most people try to stay clear of it. Few people really love it. I just happen to suffer from Stockholm syndrome, since I work a lot with banks. Even if the PL/SQL syntax and the tooling sometimes remind me of the good old times, I still believe that a procedural language (well, any language) combined with SQL can do miracles in terms of productivity, performance and expressivity. In this article, we'll see later on how we can achieve the same with SQL (and PL/SQL) in Java, using jOOQ. But first, a little bit of history.

Accessing PL/SQL from Java

One of the biggest reasons why Java developers in particular refrain from writing their own PL/SQL code is that the interface between PL/SQL and Java, ojdbc, is a major pain. We'll see in the following examples just how painful it is. Assume we're working on an Oracle port of the popular Sakila database (originally created for MySQL). This particular Sakila/Oracle port was implemented by DB Software Laboratory and published under the BSD license. Here's a partial view of that Sakila database. Now, let's assume that we have an API in the database that doesn't expose the above schema, but exposes a PL/SQL API instead. The API might look something like this:

CREATE TYPE LANGUAGE_T AS OBJECT (
  language_id SMALLINT,
  name CHAR(20),
  last_update DATE
);
/

CREATE TYPE LANGUAGES_T AS TABLE OF LANGUAGE_T;
/

CREATE TYPE FILM_T AS OBJECT (
  film_id int,
  title VARCHAR(255),
  description CLOB,
  release_year VARCHAR(4),
  language LANGUAGE_T,
  original_language LANGUAGE_T,
  rental_duration SMALLINT,
  rental_rate DECIMAL(4,2),
  length SMALLINT,
  replacement_cost DECIMAL(5,2),
  rating VARCHAR(10),
  special_features VARCHAR(100),
  last_update DATE
);
/

CREATE TYPE FILMS_T AS TABLE OF FILM_T;
/

CREATE TYPE ACTOR_T AS OBJECT (
  actor_id numeric,
  first_name VARCHAR(45),
  last_name VARCHAR(45),
  last_update DATE
);
/

CREATE TYPE ACTORS_T AS TABLE OF ACTOR_T;
/

CREATE TYPE CATEGORY_T AS OBJECT (
  category_id SMALLINT,
  name VARCHAR(25),
  last_update DATE
);
/

CREATE TYPE CATEGORIES_T AS TABLE OF CATEGORY_T;
/

CREATE TYPE FILM_INFO_T AS OBJECT (
  film FILM_T,
  actors ACTORS_T,
  categories CATEGORIES_T
);
/

You'll notice immediately that this is essentially just a 1:1 copy of the schema, in this case modelled as Oracle SQL OBJECT and TABLE types, apart from the FILM_INFO_T type, which acts as an aggregate. Now, our DBA (or our database developer) has implemented the following API for us to access the above information:

CREATE OR REPLACE PACKAGE RENTALS AS
  FUNCTION GET_ACTOR(p_actor_id INT) RETURN ACTOR_T;
  FUNCTION GET_ACTORS RETURN ACTORS_T;
  FUNCTION GET_FILM(p_film_id INT) RETURN FILM_T;
  FUNCTION GET_FILMS RETURN FILMS_T;
  FUNCTION GET_FILM_INFO(p_film_id INT) RETURN FILM_INFO_T;
  FUNCTION GET_FILM_INFO(p_film FILM_T) RETURN FILM_INFO_T;
END RENTALS;
/

This, ladies and gentlemen, is how you can now… tediously access the PL/SQL API with JDBC.
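The CallableStatement route that the next paragraph avoids is not itself shown in the post. Purely as an illustration of what is being sidestepped, such a call might look roughly like this, using the standard JDBC escape syntax (the exact type-name registration for the OUT parameter is an assumption):

import java.sql.CallableStatement;
import java.sql.Struct;
import java.sql.Types;

// ...
try (CallableStatement call = conn.prepareCall(
        "{ ? = call RENTALS.GET_FILM_INFO(?) }")) {

    // Register the OUT parameter, naming the Oracle object type
    call.registerOutParameter(1, Types.STRUCT, "FILM_INFO_T");
    call.setInt(2, 1);
    call.execute();

    // ... followed by the very same STRUCT dereferencing dance shown below
    Struct film_info_t = (Struct) call.getObject(1);
}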
So, in order to avoid the awkward CallableStatement with its OUT parameter registration and JDBC escape syntax, we're going to fetch a FILM_INFO_T record via a SQL statement like this:

try (PreparedStatement stmt = conn.prepareStatement(
        "SELECT rentals.get_film_info(1) FROM DUAL");
     ResultSet rs = stmt.executeQuery()) {

    // STRUCT unnesting here...
}

So far so good. Luckily, there is Java 7's try-with-resources to help us clean up those myriad JDBC objects. Now how to proceed? What will we get back from this ResultSet? A java.sql.Struct:

while (rs.next()) {
    Struct film_info_t = (Struct) rs.getObject(1);

    // And so on...
}

Now, the brave ones among you would continue downcasting the java.sql.Struct to an even more obscure and arcane oracle.sql.STRUCT, which contains almost no Javadoc, but tons of deprecated, additional, vendor-specific methods. For now, let's stick with the "standard API", though.

Interlude: Let's take a moment to appreciate JDBC in times of Java 8. When Java 5 was introduced, so were generics. We have rewritten our big code bases to remove all sorts of meaningless boilerplate type casts that are now no longer needed. With the exception of JDBC. When it comes to JDBC, guessing appropriate types is all a matter of luck. We're accessing complex nested data structures provided by external systems by dereferencing elements by index, and then taking wild guesses at the resulting data types. Lambdas have just been introduced, yet JDBC still talks to the mainframe.

OK, enough of these rants. Let's continue navigating our STRUCT:

while (rs.next()) {
    Struct film_info_t = (Struct) rs.getObject(1);

    Struct film_t = (Struct) film_info_t.getAttributes()[0];
    String title = (String) film_t.getAttributes()[1];
    Clob description_clob = (Clob) film_t.getAttributes()[2];
    String description = description_clob.getSubString(1, (int) description_clob.length());

    Struct language_t = (Struct) film_t.getAttributes()[4];
    String language = (String) language_t.getAttributes()[1];

    System.out.println("Film : " + title);
    System.out.println("Description: " + description);
    System.out.println("Language : " + language);
}

From the initial STRUCT that we received at position 1 from the ResultSet, we can continue dereferencing attributes by index. Unfortunately, we'll constantly need to look up the SQL type in Oracle (or in some documentation) to remember the order of the attributes:

CREATE TYPE FILM_INFO_T AS OBJECT (
  film FILM_T,
  actors ACTORS_T,
  categories CATEGORIES_T
);
/

And that's not it! The first attribute of type FILM_T is yet another nested STRUCT. And then, those horrible CLOBs. The above code is not strictly complete. In some cases that only the maintainers of JDBC can fathom, java.sql.Clob.free() has to be called to be sure that resources are freed in time. Remember that a CLOB, depending on your database and driver configuration, may live outside the scope of your transaction. Unfortunately, the method is called free() instead of AutoCloseable.close(), such that try-with-resources cannot be used. So here we go:

List<Clob> clobs = new ArrayList<>();

try {
    while (rs.next()) {
        Struct film_info_t = (Struct) rs.getObject(1);
        Struct film_t = (Struct) film_info_t.getAttributes()[0];

        String title = (String) film_t.getAttributes()[1];
        Clob description_clob = (Clob) film_t.getAttributes()[2];

        // Remember each CLOB, so it can be freed once we're done
        clobs.add(description_clob);
        String description = description_clob.getSubString(1, (int) description_clob.length());

        Struct language_t = (Struct) film_t.getAttributes()[4];
        String language = (String) language_t.getAttributes()[1];

        System.out.println("Film : " + title);
        System.out.println("Description: " + description);
        System.out.println("Language : " + language);
    }
}
finally {
    // And don't think you can call this early, either
    // The internal specifics are mysterious!
    for (Clob clob : clobs)
        clob.free();
}

That's about it. Now we have found ourselves with some nice little output on the console:

Film : ACADEMY DINOSAUR
Description: A Epic Drama of a Feminist And a Mad Scientist who must Battle a Teacher in The Canadian Rockies
Language : English

That's about it, you may think! But… the pain has only started, because we're not done yet.
There are also two nested table types that we need to deserialise from the STRUCT. If you haven't given up yet (bear with me, good news is nigh), you'll enjoy reading about how to fetch and unwind a java.sql.Array. Let's continue right after the printing of the film:

Array actors_t = (Array) film_info_t.getAttributes()[1];
Array categories_t = (Array) film_info_t.getAttributes()[2];

Again, we're accessing attributes by indexes, which we have to remember and which can easily break. The ACTORS_T array is nothing but yet another wrapped STRUCT:

System.out.println("Actors : ");

Object[] actors = (Object[]) actors_t.getArray();
for (Object actor : actors) {
    Struct actor_t = (Struct) actor;

    System.out.println(
        "  " + actor_t.getAttributes()[1]
      + " " + actor_t.getAttributes()[2]);
}

You'll notice a few things:
- The Array.getArray() method returns an array, but it declares returning Object. We have to cast manually.
- We can't cast to Struct[], even though that would be a sensible type. The type returned by ojdbc is Object[] (containing Struct elements).
- The foreach loop also cannot dereference a Struct from the right-hand side. There's no way of coercing the type of actor into what we know it really is.
- We could've used Java 8 and Streams and such, but unfortunately, all lambda expressions that can be passed to the Streams API disallow the throwing of checked exceptions. And JDBC throws checked exceptions. That would be even uglier.

Anyway. Now that we've finally achieved this, we can see the print output:

Film : ACADEMY DINOSAUR
Description: A Epic Drama of a Feminist And a Mad Scientist who must Battle a Teacher in The Canadian Rockies
Language : English
Actors :
  PENELOPE GUINESS
  CHRISTIAN GABLE
  LUCILLE TRACY
  SANDRA PECK
  JOHNNY CAGE
  MENA TEMPLE
  WARREN NOLTE
  OPRAH KILMER
  ROCK DUKAKIS
  MARY KEITEL

When will this madness stop?

It'll stop right here! So far, this article has read like a tutorial (or rather: medieval torture) on how to deserialise nested user-defined types from Oracle SQL to Java (don't get me started on serialising them again!). In the next section, we'll see how the exact same business logic (listing the film with ID=1 and its actors) can be implemented with no pain at all using jOOQ and its source code generator. Check this out:

// Simply call the packaged stored function from
// Java, and get a deserialised, type safe record
FilmInfoTRecord film_info_t = Rentals.getFilmInfo1(
    configuration, new BigInteger("1"));

// The generated record has getters (and setters)
// for type safe navigation of nested structures
FilmTRecord film_t = film_info_t.getFilm();

// In fact, all these types have generated getters:
System.out.println("Film : " + film_t.getTitle());
System.out.println("Description: " + film_t.getDescription());
System.out.println("Language : " + film_t.getLanguage().getName());

// Simply loop nested type safe array structures
System.out.println("Actors : ");
for (ActorTRecord actor_t : film_info_t.getActors()) {
    System.out.println(
        "  " + actor_t.getFirstName()
      + " " + actor_t.getLastName());
}

System.out.println("Categories : ");
for (CategoryTRecord category_t : film_info_t.getCategories()) {
    System.out.println(category_t.getName());
}

Is that it? Yes! Wow, I mean, this is just as though all those PL/SQL types and procedures / functions were actually part of Java. All the caveats that we've seen before are hidden behind those generated types and implemented in jOOQ, so you can concentrate on what you originally wanted to do: access the data objects and do meaningful work with them.
Not serialise / deserialise them! Let's take a moment and appreciate this consumer advertising. Not convinced yet? I told you not to get me started on serialising the types to JDBC. And I won't, but here's how to serialise the types to jOOQ, because that's a piece of cake! Let's consider this other aggregate type, which returns a customer's rental history:

CREATE TYPE CUSTOMER_RENTAL_HISTORY_T AS OBJECT (
  customer CUSTOMER_T,
  films FILMS_T
);
/

And the full PL/SQL package specs:

CREATE OR REPLACE PACKAGE RENTALS AS
  FUNCTION GET_ACTOR(p_actor_id INT) RETURN ACTOR_T;
  FUNCTION GET_ACTORS RETURN ACTORS_T;
  FUNCTION GET_CUSTOMER(p_customer_id INT) RETURN CUSTOMER_T;
  FUNCTION GET_CUSTOMERS RETURN CUSTOMERS_T;
  FUNCTION GET_FILM(p_film_id INT) RETURN FILM_T;
  FUNCTION GET_FILMS RETURN FILMS_T;
  FUNCTION GET_CUSTOMER_RENTAL_HISTORY(p_customer_id INT) RETURN CUSTOMER_RENTAL_HISTORY_T;
  FUNCTION GET_CUSTOMER_RENTAL_HISTORY(p_customer CUSTOMER_T) RETURN CUSTOMER_RENTAL_HISTORY_T;
  FUNCTION GET_FILM_INFO(p_film_id INT) RETURN FILM_INFO_T;
  FUNCTION GET_FILM_INFO(p_film FILM_T) RETURN FILM_INFO_T;
END RENTALS;
/

So, when calling RENTALS.GET_CUSTOMER_RENTAL_HISTORY, we can find all the films that a customer has ever rented. Let's do that for all customers whose FIRST_NAME is "JAMIE", and this time we're using Java 8:

// We call the stored function directly inline in
// a SQL statement
dsl().select(Rentals.getCustomer(
          CUSTOMER.CUSTOMER_ID
      ))
     .from(CUSTOMER)
     .where(CUSTOMER.FIRST_NAME.eq("JAMIE"))

// This returns Result<Record1<CustomerTRecord>>
// We unwrap the CustomerTRecord and consume
// the result with a lambda expression
     .fetch()
     .map(Record1::value1)
     .forEach(customer -> {
         System.out.println("Customer  : ");
         System.out.println("- Name    : " + customer.getFirstName() + " " + customer.getLastName());
         System.out.println("- E-Mail  : " + customer.getEmail());
         System.out.println("- Address : " + customer.getAddress().getAddress());
         System.out.println("            " + customer.getAddress().getPostalCode() + " " + customer.getAddress().getCity().getCity());
         System.out.println("            " + customer.getAddress().getCity().getCountry().getCountry());

         // Now, let's send the customer over the wire again to
         // call that other stored procedure, fetching his
         // rental history:
         CustomerRentalHistoryTRecord history =
             Rentals.getCustomerRentalHistory2(dsl().configuration(), customer);

         System.out.println("  Customer Rental History : ");
         System.out.println("    Films : ");

         history.getFilms().forEach(film -> {
             System.out.println("      Film        : " + film.getTitle());
             System.out.println("      Language    : " + film.getLanguage().getName());
             System.out.println("      Description : " + film.getDescription());

             // And then, let's call again the first procedure
             // in order to get a film's actors and categories
             FilmInfoTRecord info =
                 Rentals.getFilmInfo2(dsl().configuration(), film);

             info.getActors().forEach(actor -> {
                 System.out.println("      Actor       : " + actor.getFirstName() + " " + actor.getLastName());
             });

             info.getCategories().forEach(category -> {
                 System.out.println("      Category    : " + category.getName());
             });
         });
     });

… and here is a short extract of the output produced by the above:

Customer :
- Name : JAMIE RICE
- E-Mail : JAMIE.RICE@sakilacustomer.org
- Address : 879 Newcastle Way
            90732 Sterling Heights
            United States
  Customer Rental History :
    Films :
      Film : ALASKA PHANTOM
      Language : English
      Description : A Fanciful Saga of a Hunter And a Pastry Chef who must Vanquish a Boy in Australia
      Actor : VAL BOLGER
      Actor : BURT POSEY
      Actor : SIDNEY CROWE
      Actor : SYLVESTER DERN
      Actor : ALBERT JOHANSSON
      Actor : GENE MCKELLEN
      Actor : JEFF SILVERSTONE
      Category : Music
      Film : ALONE TRIP
      Language : English
      Description : A Fast-Paced Character Study of a Composer And a Dog who must Outgun a Boat in An Abandoned Fun House
      Actor : ED CHASE
      Actor : KARL BERRY
      Actor : UMA WOOD
      Actor : WOODY JOLIE
      Actor : SPENCER DEPP
      Actor : CHRIS DEPP
      Actor : LAURENCE BULLOCK
      Actor : RENEE BALL
      Category : Music

If you're using Java and PL/SQL, you should click on the banner below and download the free trial right now to experiment with jOOQ and Oracle. The Oracle port of the Sakila database is available from this URL for free, under the terms of the BSD license: https://github.com/jOOQ/jOOQ/tree/master/jOOQ-examples/Sakila/oracle-sakila-db

Finally, it is time to enjoy writing PL/SQL again!

Reference: Painless Access from Java to PL/SQL Procedures with jOOQ from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog.

First steps with REST, Spray and Scala

On this site you can already find a couple of articles on how to do REST with a number of different frameworks. You can find an old one on Play, some on Scalatra, and I even started an (as yet unfinished) series on Express. So instead of finishing the series on Express, I'm going to look at Spray in this article.

Getting started

The first thing we need to do is get the correct libraries set up, so we can start development (I use IntelliJ IDEA, but you can use whatever you want). The easiest way to get started is by using SBT. I've used the following minimal SBT file:

organization := "org.smartjava"

version := "0.1"

scalaVersion := "2.11.2"

scalacOptions := Seq("-unchecked", "-deprecation", "-encoding", "utf8")

libraryDependencies ++= {
  val akkaV = "2.3.6"
  val sprayV = "1.3.2"
  Seq(
    "io.spray" %% "spray-can" % sprayV withSources() withJavadoc(),
    "io.spray" %% "spray-routing" % sprayV withSources() withJavadoc(),
    "io.spray" %% "spray-json" % "1.3.1",
    "io.spray" %% "spray-testkit" % sprayV % "test",
    "com.typesafe.akka" %% "akka-actor" % akkaV,
    "com.typesafe.akka" %% "akka-testkit" % akkaV % "test",
    "org.specs2" %% "specs2-core" % "2.3.11" % "test",
    "org.scalaz" %% "scalaz-core" % "7.1.0"
  )
}

After you've imported this file into your IDE of choice, you should have the correct Spray and Akka libraries to get started.

Create a launcher

Next, let's create a launcher which we can use to run our Spray server. For this we just create an object, creatively named Boot, which extends the standard Scala App trait:

package org.smartjava

import akka.actor.{ActorSystem, Props}
import akka.io.IO
import spray.can.Http
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.duration._

object Boot extends App {

  // create our actor system with the name smartjava
  implicit val system = ActorSystem("smartjava")
  val service = system.actorOf(Props[SJServiceActor], "sj-rest-service")

  // IO requires an implicit ActorSystem, and ? requires an implicit timeout
  // Bind HTTP to the specified service.
  implicit val timeout = Timeout(5.seconds)
  IO(Http) ? Http.Bind(service, interface = "localhost", port = 8080)
}

There isn't that much happening in this object. We send (better said, we 'ask') an Http.Bind() message to register a listener. If binding succeeds, our service will receive messages whenever a request arrives on the port.

Receiving actor

Now let's look at the actor to which we'll be sending the messages from the IO subsystem:

package org.smartjava

import akka.actor.Actor
import spray.routing._
import spray.http._
import MediaTypes._
import spray.httpx.SprayJsonSupport._
import MyJsonProtocol._

// simple actor that handles the routes.
class SJServiceActor extends Actor with HttpService {

  // required as implicit value for the HttpService
  // included from SJService
  def actorRefFactory = context

  // we don't create a receive function ourselves, but use
  // the runRoute function from the HttpService to create
  // one for us, based on the supplied routes.
  def receive = runRoute(aSimpleRoute ~ anotherRoute)

  // some sample routes
  val aSimpleRoute = {...}
  val anotherRoute = {...}
}

What happens here is that we use the runRoute function, provided by HttpService, to create the receive function that handles the incoming messages.

Creating routes

The final step is to create some route-handling code.
We’ll go into more detail on this part in one of the next articles, so for now we’ll show you how to create a route that, based on the incoming media-type, sends back some JSON. We’ll use the standard JSON support from Spray for this. As the JSON object we’ll use the following very basic case class, which we extend with JSON support:

package org.smartjava

import spray.json.DefaultJsonProtocol

object MyJsonProtocol extends DefaultJsonProtocol {
  implicit val personFormat = jsonFormat3(Person)
}

case class Person(name: String, firstName: String, age: Long)

This way Spray will marshal this object to JSON when we set the correct response media-type. Now that we’ve got our response object, let’s look at the code for the routes:

// handles the api path; we could also define these in separate files.
// this path responds to get queries, and makes a selection on the
// media-type.
val aSimpleRoute = {
  path("path1") {
    get {

      // Get the value of the content-header. Spray
      // provides multiple ways to do this.
      headerValue({
        case x@HttpHeaders.`Content-Type`(value) => Some(value)
        case default => None
      }) {
        // the header is passed in containing the content type;
        // we match the header using a case statement, and depending
        // on the content type we return a specific object
        header => header match {

          // if we have this content type we create a custom response
          case ContentType(MediaType("application/vnd.type.a"), _) => {
            respondWithMediaType(`application/json`) {
              complete {
                Person("Bob", "Type A", System.currentTimeMillis())
              }
            }
          }

          // if we have another content type we return a different object
          case ContentType(MediaType("application/vnd.type.b"), _) => {
            respondWithMediaType(`application/json`) {
              complete {
                Person("Bob", "Type B", System.currentTimeMillis())
              }
            }
          }

          // if the content types do not match, return an error code
          case default => {
            complete {
              HttpResponse(406)
            }
          }
        }
      }
    }
  }
}

// handles the other path; we could also define these in separate files.
// This is just a simple route to explain the concept.
val anotherRoute = {
  path("path2") {
    get {
      // respond with text/html
      respondWithMediaType(`text/html`) {
        complete {
          // respond with a set of HTML elements
          <html>
            <body>
              <h1>Path 2</h1>
            </body>
          </html>
        }
      }
    }
  }
}

A lot of code is in there, so let’s highlight a couple of elements in detail:

val aSimpleRoute = {
  path("path1") {
    get {...}
  }
}

This starting point of the route first checks whether the request is made to the "localhost:8080/path1" path and then checks the HTTP method. In this case we’re only interested in GET requests. Once we’ve got a GET request, we do the following:

// Get the value of the content-header. Spray
// provides multiple ways to do this.
headerValue({
  case x@HttpHeaders.`Content-Type`(value) => Some(value)
  case default => None
}) {
  header => header match {

    case ContentType(MediaType("application/vnd.type.a"), _) => {
      respondWithMediaType(`application/json`) {
        complete {
          Person("Bob", "Type A", System.currentTimeMillis())
        }
      }
    }

    case ContentType(MediaType("application/vnd.type.b"), _) => {
      respondWithMediaType(`application/json`) {
        complete {
          Person("Bob", "Type B", System.currentTimeMillis())
        }
      }
    }

    case default => {
      complete {
        HttpResponse(406)
      }
    }
  }
}

In this piece of code we extract the Content-Type header of the request and, based on that, determine the response. The response is automatically converted to JSON because respondWithMediaType is set to application/json. If a media-type is provided that we don’t understand, we return a 406 response.

Let’s test this

Now let’s test whether this is working. Spray provides its own libraries and classes for testing, but for now let’s just use a basic REST client; I usually use the Chrome Advanced REST Client. In the following screenshots you can see three calls being made to http://localhost:8080/path1:

[Screenshot: call with media-type "application/vnd.type.a"]
[Screenshot: call with media-type "application/vnd.type.b"]
[Screenshot: call with media-type "application/vnd.type.c"]

As you can see, the responses exactly match the routes we defined. (If you would rather automate these checks, a small spray-testkit sketch follows this article.)

What is next

In the following article we’ll connect Spray to a database, make testing a little bit easier and explore a number of other Spray features.

Reference: First steps with REST, Spray and Scala from our JCG partner Jos Dirksen at the Smart Java blog....
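A footnote to the article above: the spray-testkit dependency already present in the SBT file makes the same checks automatable. The following is a minimal, illustrative sketch, not taken from the original article; the route is defined inline so the spec stays self-contained, since the article's routes live inside the actor:

package org.smartjava

import org.specs2.mutable.Specification
import spray.http.StatusCodes
import spray.routing.HttpService
import spray.testkit.Specs2RouteTest

class SJServiceSpec extends Specification with Specs2RouteTest with HttpService {

  // Specs2RouteTest provides the ActorSystem; HttpService needs this factory
  def actorRefFactory = system

  // a tiny inline route, mirroring the shape of anotherRoute from the article
  val testRoute = path("path2") { get { complete("Path 2") } }

  "The SJ service" should {
    "answer GET requests to /path2" in {
      Get("/path2") ~> testRoute ~> check {
        status === StatusCodes.OK
        responseAs[String] === "Path 2"
      }
    }
  }
}

Running sbt test then exercises the route in-process, without binding an actual HTTP port.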
android-logo

A Guide to Android RecyclerView and CardView

The new support library in Android L (Lollipop) introduced two new UI widgets: RecyclerView and CardView. The RecyclerView is a more advanced and more flexible version of the ListView. This new component is a big step, because the ListView is one of the most-used UI widgets. The CardView widget, on the other hand, is a new component that does not “upgrade” an existing component. In this tutorial, I’ll explain how to use these two widgets and show how we can mix them. Let’s start by diving into the RecyclerView.

RecyclerView: Introduction

As I mentioned, RecyclerView is more flexible than ListView, even if it introduces some complexity. We all know how to use ListView in our apps, and we know that if we want to increase ListView performance we can use a pattern called ViewHolder. This pattern consists of a simple class that holds the references to the UI components for each row in the list, so that the system does not have to look up the UI components every time it shows a row. Even though this pattern brings clear benefits, we can implement a ListView without using it at all. RecyclerView, by contrast, forces us to use the ViewHolder pattern (a short ListView-based sketch of this pattern is included below, for comparison).

To show how we can use the RecyclerView, suppose we want to create a simple app that shows a list of contact cards. The first thing we should do is create the main layout. RecyclerView is very similar to the ListView, and we can use it in the same way:

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:paddingLeft="@dimen/activity_horizontal_margin"
    android:paddingRight="@dimen/activity_horizontal_margin"
    android:paddingTop="@dimen/activity_vertical_margin"
    android:paddingBottom="@dimen/activity_vertical_margin"
    tools:context=".MyActivity">

    <android.support.v7.widget.RecyclerView
        android:id="@+id/cardList"
        android:layout_width="match_parent"
        android:layout_height="match_parent"/>

</RelativeLayout>

As you’ll notice from the layout shown above, the RecyclerView is available in the Android support library, so we have to modify build.gradle to include this dependency:

dependencies {
    ...
    compile 'com.android.support:recyclerview-v7:21.0.0'
}

Now, in the onCreate method we can get the reference to our RecyclerView and configure it:

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_my);

    RecyclerView recList = (RecyclerView) findViewById(R.id.cardList);
    recList.setHasFixedSize(true);
    LinearLayoutManager llm = new LinearLayoutManager(this);
    llm.setOrientation(LinearLayoutManager.VERTICAL);
    recList.setLayoutManager(llm);
}

If you look at the code above, you’ll notice a key difference between the RecyclerView and the ListView: RecyclerView requires a layout manager. This component positions the item views and determines when it is time to recycle them. The library provides a default layout manager called LinearLayoutManager.

CardView

The CardView UI component shows information inside cards. We can customize its corners, elevation and so on. We want to use this component to show contact information. These cards will be the rows of the RecyclerView, and we will see later how to integrate these two components.
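Before defining the card layout, here is the classic ViewHolder pattern with a plain ListView, as referenced in the RecyclerView introduction above. This is a minimal illustrative sketch; the adapter name, row layout and view id are hypothetical and do not come from the article:

// Classic ListView ViewHolder (illustrative): getView caches the
// findViewById lookups in a holder object attached to the row view.
public class NameListAdapter extends ArrayAdapter<String> {

    static class ViewHolder {
        TextView name;
    }

    public NameListAdapter(Context ctx, List<String> items) {
        super(ctx, R.layout.name_row, items);   // hypothetical row layout
    }

    @Override
    public View getView(int position, View convertView, ViewGroup parent) {
        ViewHolder holder;
        if (convertView == null) {
            convertView = LayoutInflater.from(getContext())
                    .inflate(R.layout.name_row, parent, false);
            holder = new ViewHolder();
            holder.name = (TextView) convertView.findViewById(R.id.rowName);
            convertView.setTag(holder);                  // store the lookups once
        } else {
            holder = (ViewHolder) convertView.getTag();  // and reuse them later
        }
        holder.name.setText(getItem(position));
        return convertView;
    }
}

With RecyclerView this caching is no longer optional; the ViewHolder subclass is part of the adapter contract, as we will see shortly.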
By now, we can define our card layout:

<android.support.v7.widget.CardView
    xmlns:card_view="http://schemas.android.com/apk/res-auto"
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/card_view"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    card_view:cardCornerRadius="4dp"
    android:layout_margin="5dp">

    <RelativeLayout
        android:layout_width="match_parent"
        android:layout_height="match_parent">

        <TextView
            android:id="@+id/title"
            android:layout_width="match_parent"
            android:layout_height="20dp"
            android:background="@color/bkg_card"
            android:text="contact det"
            android:gravity="center_vertical"
            android:textColor="@android:color/white"
            android:textSize="14dp"/>

        <TextView
            android:id="@+id/txtName"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="Name"
            android:gravity="center_vertical"
            android:textSize="10dp"
            android:layout_below="@id/title"
            android:layout_marginTop="10dp"
            android:layout_marginLeft="5dp"/>

        <TextView
            android:id="@+id/txtSurname"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="Surname"
            android:gravity="center_vertical"
            android:textSize="10dp"
            android:layout_below="@id/txtName"
            android:layout_marginTop="10dp"
            android:layout_marginLeft="5dp"/>

        <TextView
            android:id="@+id/txtEmail"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="Email"
            android:textSize="10dp"
            android:layout_marginTop="10dp"
            android:layout_alignParentRight="true"
            android:layout_marginRight="150dp"
            android:layout_alignBaseline="@id/txtName"/>

    </RelativeLayout>
</android.support.v7.widget.CardView>

As you can see, the CardView is very simple to use. This component lives in another Android support library, so we have to add that dependency too:

dependencies {
    compile 'com.android.support:cardview-v7:21.0.0'
    compile 'com.android.support:recyclerview-v7:21.0.0'
}

RecyclerView: Adapter

The adapter is the component that stands between the data model we want to show in our app UI and the UI component that renders it. In other words, the adapter controls how the information is shown in the UI. So if we want to display our contacts, we need an adapter for the RecyclerView. This adapter must extend RecyclerView.Adapter, parameterized with our class that implements the ViewHolder pattern:

public class MyAdapter extends RecyclerView.Adapter<MyHolder> {
    .....
}

We now have to override two methods to implement our logic: onCreateViewHolder is called whenever a new instance of our ViewHolder class is created, and onBindViewHolder is called when the OS binds the view with the data; in other words, when the data is shown in the UI.

In this case, the adapter helps us combine the RecyclerView and the CardView: the layout we defined before for the cards will be the row layout of our contact list in the RecyclerView. Before doing that, we have to define the data model that stands at the base of our UI (i.e. what information we want to show). For this purpose, we can define a simple class:

public class ContactInfo {
    protected String name;
    protected String surname;
    protected String email;
    protected static final String NAME_PREFIX = "Name_";
    protected static final String SURNAME_PREFIX = "Surname_";
    protected static final String EMAIL_PREFIX = "email_";
}

And finally, we are ready to create our adapter.
If you remember what we said before about the ViewHolder pattern, we now have to code the class that implements it:

public static class ContactViewHolder extends RecyclerView.ViewHolder {

    protected TextView vName;
    protected TextView vSurname;
    protected TextView vEmail;
    protected TextView vTitle;

    public ContactViewHolder(View v) {
        super(v);
        vName = (TextView) v.findViewById(R.id.txtName);
        vSurname = (TextView) v.findViewById(R.id.txtSurname);
        vEmail = (TextView) v.findViewById(R.id.txtEmail);
        vTitle = (TextView) v.findViewById(R.id.title);
    }
}

Looking at the code, in the class constructor we get the references to the views we defined in our card layout. Now it is time to code our adapter:

public class ContactAdapter extends RecyclerView.Adapter<ContactAdapter.ContactViewHolder> {

    private List<ContactInfo> contactList;

    public ContactAdapter(List<ContactInfo> contactList) {
        this.contactList = contactList;
    }

    @Override
    public int getItemCount() {
        return contactList.size();
    }

    @Override
    public void onBindViewHolder(ContactViewHolder contactViewHolder, int i) {
        ContactInfo ci = contactList.get(i);
        contactViewHolder.vName.setText(ci.name);
        contactViewHolder.vSurname.setText(ci.surname);
        contactViewHolder.vEmail.setText(ci.email);
        contactViewHolder.vTitle.setText(ci.name + " " + ci.surname);
    }

    @Override
    public ContactViewHolder onCreateViewHolder(ViewGroup viewGroup, int i) {
        View itemView = LayoutInflater.from(viewGroup.getContext()).
                inflate(R.layout.card_layout, viewGroup, false);
        return new ContactViewHolder(itemView);
    }

    public static class ContactViewHolder extends RecyclerView.ViewHolder {
        ...
    }
}

In our implementation we override onBindViewHolder, where we bind the data (our contact info) to the views. Notice that we don’t look up the UI components, but simply use the references stored in our ContactViewHolder. In onCreateViewHolder we return our ContactViewHolder, inflating the row layout (the CardView in our case). The one piece of wiring not shown here is attaching the adapter to the RecyclerView; a short sketch of that follows this article. Run the app and you’ll get the result shown below:

[Screenshot: the resulting list of contact cards]

Source code available @ github.

Reference: A Guide to Android RecyclerView and CardView from our JCG partner Francesco Azzola at the Surviving w/ Android blog....
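As referenced above, one piece of wiring the article leaves implicit is attaching the adapter to the RecyclerView configured in onCreate. A minimal sketch, assuming the ContactInfo and ContactAdapter classes shown above and the recList reference from onCreate; the dummy-data loop is purely illustrative:

// build some dummy contacts using the prefixes defined in ContactInfo
List<ContactInfo> contacts = new ArrayList<ContactInfo>();
for (int i = 0; i < 30; i++) {
    ContactInfo ci = new ContactInfo();
    ci.name = ContactInfo.NAME_PREFIX + i;
    ci.surname = ContactInfo.SURNAME_PREFIX + i;
    ci.email = ContactInfo.EMAIL_PREFIX + i + "@example.com";
    contacts.add(ci);
}

// attach the adapter: the RecyclerView now renders one card per ContactInfo
ContactAdapter adapter = new ContactAdapter(contacts);
recList.setAdapter(adapter);

This snippet belongs at the end of onCreate, after the recList.setLayoutManager(llm) call.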