
Painless Access from Java to PL/SQL Procedures with jOOQ

PL/SQL is one of those things. Most people try to steer clear of it. Few people really love it. I just happen to suffer from Stockholm syndrome, since I'm working a lot with banks. Even if the PL/SQL syntax and the tooling sometimes remind me of the good old times… I still believe that a procedural language (well, any language) combined with SQL can do miracles in terms of productivity, performance and expressivity. Later in this article, we'll see how we can achieve the same with SQL (and PL/SQL) in Java, using jOOQ. But first, a little bit of history…

Accessing PL/SQL from Java

One of the biggest reasons why Java developers in particular refrain from writing their own PL/SQL code is that the interface between PL/SQL and Java – ojdbc – is a major pain. We'll see in the following examples how that is. Assume we're working on an Oracle port of the popular Sakila database (originally created for MySQL). This particular Sakila/Oracle port was implemented by DB Software Laboratory and published under the BSD license. Here's a partial view of that Sakila database.

Now, let's assume that we have an API in the database that doesn't expose the above schema, but exposes a PL/SQL API instead. The API might look something like this:

CREATE TYPE LANGUAGE_T AS OBJECT (
  language_id SMALLINT,
  name CHAR(20),
  last_update DATE
);
/
CREATE TYPE LANGUAGES_T AS TABLE OF LANGUAGE_T;
/
CREATE TYPE FILM_T AS OBJECT (
  film_id int,
  title VARCHAR(255),
  description CLOB,
  release_year VARCHAR(4),
  language LANGUAGE_T,
  original_language LANGUAGE_T,
  rental_duration SMALLINT,
  rental_rate DECIMAL(4,2),
  length SMALLINT,
  replacement_cost DECIMAL(5,2),
  rating VARCHAR(10),
  special_features VARCHAR(100),
  last_update DATE
);
/
CREATE TYPE FILMS_T AS TABLE OF FILM_T;
/
CREATE TYPE ACTOR_T AS OBJECT (
  actor_id numeric,
  first_name VARCHAR(45),
  last_name VARCHAR(45),
  last_update DATE
);
/
CREATE TYPE ACTORS_T AS TABLE OF ACTOR_T;
/
CREATE TYPE CATEGORY_T AS OBJECT (
  category_id SMALLINT,
  name VARCHAR(25),
  last_update DATE
);
/
CREATE TYPE CATEGORIES_T AS TABLE OF CATEGORY_T;
/
CREATE TYPE FILM_INFO_T AS OBJECT (
  film FILM_T,
  actors ACTORS_T,
  categories CATEGORIES_T
);
/

You'll notice immediately that this is essentially just a 1:1 copy of the schema, in this case modelled as Oracle SQL OBJECT and TABLE types, apart from the FILM_INFO_T type, which acts as an aggregate. Now, our DBA (or our database developer) has implemented the following API for us to access the above information:

CREATE OR REPLACE PACKAGE RENTALS AS
  FUNCTION GET_ACTOR(p_actor_id INT) RETURN ACTOR_T;
  FUNCTION GET_ACTORS RETURN ACTORS_T;
  FUNCTION GET_FILM(p_film_id INT) RETURN FILM_T;
  FUNCTION GET_FILMS RETURN FILMS_T;
  FUNCTION GET_FILM_INFO(p_film_id INT) RETURN FILM_INFO_T;
  FUNCTION GET_FILM_INFO(p_film FILM_T) RETURN FILM_INFO_T;
END RENTALS;
/

This, ladies and gentlemen, is how you can now…

… tediously access the PL/SQL API with JDBC

So, in order to avoid the awkward CallableStatement with its OUT parameter registration and JDBC escape syntax, we're going to fetch a FILM_INFO_T record via a SQL statement like this:

try (PreparedStatement stmt = conn.prepareStatement(
         "SELECT rentals.get_film_info(1) FROM DUAL");
     ResultSet rs = stmt.executeQuery()) {

    // STRUCT unnesting here...
}

So far so good. Luckily, there is Java 7's try-with-resources to help us clean up those myriad JDBC objects. Now how to proceed? What will we get back from this ResultSet? A java.sql.Struct:

while (rs.next()) {
    Struct film_info_t = (Struct) rs.getObject(1);

    // And so on...
}
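As an aside, here is a sketch of the CallableStatement alternative that we're avoiding above. This is an illustration only: it assumes the Oracle JDBC driver is on the classpath (for the oracle.jdbc.OracleTypes constant) and reuses the FILM_INFO_T type from the schema above.

try (CallableStatement call = conn.prepareCall(
         "{ ? = call rentals.get_film_info(?) }")) {

    // Register the return value as the user-defined FILM_INFO_T type
    call.registerOutParameter(1, OracleTypes.STRUCT, "FILM_INFO_T");
    call.setInt(2, 1);
    call.execute();

    Struct film_info_t = (Struct) call.getObject(1);
    // ... followed by the same STRUCT unnesting as below
}

Either way, we end up holding a java.sql.Struct.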
Now, the brave ones among you would continue downcasting the java.sql.Struct to an even more obscure and arcane oracle.sql.STRUCT, which contains almost no Javadoc, but tons of deprecated, additional, vendor-specific methods. For now, let's stick with the "standard API", though.

Interlude: Let's take a moment to appreciate JDBC in times of Java 8. When Java 5 was introduced, so were generics. We have rewritten our big code bases to remove all sorts of meaningless boilerplate type casts that are now no longer needed. With the exception of JDBC. When it comes to JDBC, guessing appropriate types is all a matter of luck. We're accessing complex nested data structures provided by external systems by dereferencing elements by index, and then taking wild guesses at the resulting data types. Lambdas have just been introduced, yet JDBC still talks to the mainframe.

OK, enough of these rants. Let's continue navigating our STRUCT:

while (rs.next()) {
    Struct film_info_t = (Struct) rs.getObject(1);
    Struct film_t = (Struct) film_info_t.getAttributes()[0];

    String title = (String) film_t.getAttributes()[1];
    Clob description_clob = (Clob) film_t.getAttributes()[2];
    String description = description_clob.getSubString(
        1, (int) description_clob.length());

    Struct language_t = (Struct) film_t.getAttributes()[4];
    String language = (String) language_t.getAttributes()[1];

    System.out.println("Film : " + title);
    System.out.println("Description: " + description);
    System.out.println("Language : " + language);
}

From the initial STRUCT that we received at position 1 from the ResultSet, we can continue dereferencing attributes by index. Unfortunately, we'll constantly need to look up the SQL type in Oracle (or in some documentation) to remember the order of the attributes:

CREATE TYPE FILM_INFO_T AS OBJECT (
  film FILM_T,
  actors ACTORS_T,
  categories CATEGORIES_T
);
/

And that's not it! The first attribute of type FILM_T is yet another, nested STRUCT. And then, those horrible CLOBs. The above code is not strictly complete. In some cases that only the maintainers of JDBC can fathom, java.sql.Clob.free() has to be called to be sure that resources are freed in time. Remember that a CLOB, depending on your database and driver configuration, may live outside the scope of your transaction. Unfortunately, the method is called free() instead of AutoCloseable.close(), such that try-with-resources cannot be used. So here we go:

List<Clob> clobs = new ArrayList<>();

while (rs.next()) {
    try {
        Struct film_info_t = (Struct) rs.getObject(1);
        Struct film_t = (Struct) film_info_t.getAttributes()[0];

        String title = (String) film_t.getAttributes()[1];
        Clob description_clob = (Clob) film_t.getAttributes()[2];

        // Collect the CLOB so it can be freed in the finally block
        clobs.add(description_clob);

        String description = description_clob.getSubString(
            1, (int) description_clob.length());

        Struct language_t = (Struct) film_t.getAttributes()[4];
        String language = (String) language_t.getAttributes()[1];

        System.out.println("Film : " + title);
        System.out.println("Description: " + description);
        System.out.println("Language : " + language);
    }
    finally {
        // And don't think you can call this early, either
        // The internal specifics are mysterious!
        for (Clob clob : clobs)
            clob.free();
    }
}

That's about it. Now we have found ourselves with some nice little output on the console:

Film : ACADEMY DINOSAUR
Description: A Epic Drama of a Feminist And a Mad Scientist who must Battle a Teacher in The Canadian Rockies
Language : English

That's about it – you may think! But…

The pain has only started

… because we're not done yet.
There are also two nested table types that we need to deserialise from the STRUCT. If you haven't given up yet (bear with me, good news is nigh), you'll enjoy reading about how to fetch and unwind a java.sql.Array. Let's continue right after the printing of the film:

Array actors_t = (Array) film_info_t.getAttributes()[1];
Array categories_t = (Array) film_info_t.getAttributes()[2];

Again, we're accessing attributes by indexes, which we have to remember, and which can easily break. The ACTORS_T array is nothing but yet another wrapped STRUCT:

System.out.println("Actors : ");

Object[] actors = (Object[]) actors_t.getArray();
for (Object actor : actors) {
    Struct actor_t = (Struct) actor;

    System.out.println(
        " " + actor_t.getAttributes()[1]
      + " " + actor_t.getAttributes()[2]);
}

You'll notice a few things:

- The Array.getArray() method returns an array. But it declares returning Object. We have to manually cast.
- We can't cast to Struct[], even if that would be a sensible type. The type returned by ojdbc is Object[] (containing Struct elements).
- The foreach loop also cannot dereference a Struct from the right-hand side. There's no way of coercing the type of actor into what we know it really is.
- We could've used Java 8 and Streams and such, but unfortunately, all lambda expressions that can be passed to the Streams API disallow the throwing of checked exceptions. And JDBC throws checked exceptions. That'll be even uglier, as the sketch below shows.
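For illustration, here's what that Streams variant might look like — a sketch only, reusing the actors_t variable from above. The checked SQLException thrown by Struct.getAttributes() forces a try/catch inside the lambda:

Arrays.stream((Object[]) actors_t.getArray())
      .map(o -> (Struct) o)
      .forEach(actor -> {
          try {
              System.out.println(
                  " " + actor.getAttributes()[1]
                + " " + actor.getAttributes()[2]);
          }
          catch (SQLException e) {
              // Re-wrap as unchecked, losing the checked exception
              throw new RuntimeException(e);
          }
      });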
Anyway. Now that we've finally achieved this, we can see the print output:

Film : ACADEMY DINOSAUR
Description: A Epic Drama of a Feminist And a Mad Scientist who must Battle a Teacher in The Canadian Rockies
Language : English
Actors :
 PENELOPE GUINESS
 CHRISTIAN GABLE
 LUCILLE TRACY
 SANDRA PECK
 JOHNNY CAGE
 MENA TEMPLE
 WARREN NOLTE
 OPRAH KILMER
 ROCK DUKAKIS
 MARY KEITEL

When will this madness stop?

It'll stop right here! So far, this article read like a tutorial (or rather: medieval torture) of how to deserialise nested user-defined types from Oracle SQL to Java (don't get me started on serialising them again!). In the next section, we'll see how the exact same business logic (listing the film with ID=1 and its actors) can be implemented with no pain at all using jOOQ and its source code generator. Check this out:

// Simply call the packaged stored function from
// Java, and get a deserialised, type safe record
FilmInfoTRecord film_info_t = Rentals.getFilmInfo1(
    configuration, new BigInteger("1"));

// The generated record has getters (and setters)
// for type safe navigation of nested structures
FilmTRecord film_t = film_info_t.getFilm();

// In fact, all these types have generated getters:
System.out.println("Film : " + film_t.getTitle());
System.out.println("Description: " + film_t.getDescription());
System.out.println("Language : " + film_t.getLanguage().getName());

// Simply loop nested type safe array structures
System.out.println("Actors : ");
for (ActorTRecord actor_t : film_info_t.getActors()) {
    System.out.println(
        " " + actor_t.getFirstName()
      + " " + actor_t.getLastName());
}

System.out.println("Categories : ");
for (CategoryTRecord category_t : film_info_t.getCategories()) {
    System.out.println(category_t.getName());
}

Is that it? Yes! Wow, I mean, this is just as though all those PL/SQL types and procedures / functions were actually part of Java. All the caveats that we've seen before are hidden behind those generated types and implemented in jOOQ, so you can concentrate on what you originally wanted to do. Access the data objects and do meaningful work with them — not serialise / deserialise them!

Not convinced yet? I told you not to get me started on serialising the types to JDBC. And I won't, but here's how to serialise the types to jOOQ, because that's a piece of cake! Let's consider this other aggregate type, which returns a customer's rental history:

CREATE TYPE CUSTOMER_RENTAL_HISTORY_T AS OBJECT (
  customer CUSTOMER_T,
  films FILMS_T
);
/

And the full PL/SQL package specs:

CREATE OR REPLACE PACKAGE RENTALS AS
  FUNCTION GET_ACTOR(p_actor_id INT) RETURN ACTOR_T;
  FUNCTION GET_ACTORS RETURN ACTORS_T;
  FUNCTION GET_CUSTOMER(p_customer_id INT) RETURN CUSTOMER_T;
  FUNCTION GET_CUSTOMERS RETURN CUSTOMERS_T;
  FUNCTION GET_FILM(p_film_id INT) RETURN FILM_T;
  FUNCTION GET_FILMS RETURN FILMS_T;
  FUNCTION GET_CUSTOMER_RENTAL_HISTORY(p_customer_id INT) RETURN CUSTOMER_RENTAL_HISTORY_T;
  FUNCTION GET_CUSTOMER_RENTAL_HISTORY(p_customer CUSTOMER_T) RETURN CUSTOMER_RENTAL_HISTORY_T;
  FUNCTION GET_FILM_INFO(p_film_id INT) RETURN FILM_INFO_T;
  FUNCTION GET_FILM_INFO(p_film FILM_T) RETURN FILM_INFO_T;
END RENTALS;
/

So, when calling RENTALS.GET_CUSTOMER_RENTAL_HISTORY we can find all the films that a customer has ever rented. Let's do that for all customers whose FIRST_NAME is "JAMIE", and this time, we're using Java 8:

// We call the stored function directly inline in
// a SQL statement
dsl().select(Rentals.getCustomer(
          CUSTOMER.CUSTOMER_ID
      ))
     .from(CUSTOMER)
     .where(CUSTOMER.FIRST_NAME.eq("JAMIE"))

// This returns Result<Record1<CustomerTRecord>>
// We unwrap the CustomerTRecord and consume
// the result with a lambda expression
     .fetch()
     .map(Record1::value1)
     .forEach(customer -> {
         System.out.println("Customer : ");
         System.out.println("- Name : "
             + customer.getFirstName() + " " + customer.getLastName());
         System.out.println("- E-Mail : " + customer.getEmail());
         System.out.println("- Address : "
             + customer.getAddress().getAddress());
         System.out.println(" "
             + customer.getAddress().getPostalCode()
             + " " + customer.getAddress().getCity().getCity());
         System.out.println(" "
             + customer.getAddress().getCity().getCountry().getCountry());

         // Now, let's send the customer over the wire again to
         // call that other stored procedure, fetching his
         // rental history:
         CustomerRentalHistoryTRecord history =
             Rentals.getCustomerRentalHistory2(dsl().configuration(), customer);

         System.out.println(" Customer Rental History : ");
         System.out.println(" Films : ");

         history.getFilms().forEach(film -> {
             System.out.println(" Film : " + film.getTitle());
             System.out.println(" Language : " + film.getLanguage().getName());
             System.out.println(" Description : " + film.getDescription());

             // And then, let's call again the first procedure
             // in order to get a film's actors and categories
             FilmInfoTRecord info =
                 Rentals.getFilmInfo2(dsl().configuration(), film);

             info.getActors().forEach(actor -> {
                 System.out.println(" Actor : "
                     + actor.getFirstName() + " " + actor.getLastName());
             });

             info.getCategories().forEach(category -> {
                 System.out.println(" Category : " + category.getName());
             });
         });
     });

… and a short extract of the output produced by the above:

Customer :
- Name : JAMIE RICE
- E-Mail : JAMIE.RICE@sakilacustomer.org
- Address : 879 Newcastle Way
 90732 Sterling Heights
 United States
 Customer Rental History :
  Films :
   Film : ALASKA PHANTOM
    Language : English
    Description : A Fanciful Saga of a Hunter And a Pastry Chef who must Vanquish a Boy in Australia
     Actor : VAL BOLGER
     Actor : BURT POSEY
     Actor : SIDNEY CROWE
     Actor : SYLVESTER DERN
     Actor : ALBERT JOHANSSON
     Actor : GENE MCKELLEN
     Actor : JEFF SILVERSTONE
     Category : Music
   Film : ALONE TRIP
    Language : English
    Description : A Fast-Paced Character Study of a Composer And a Dog who must Outgun a Boat in An Abandoned Fun House
     Actor : ED CHASE
     Actor : KARL BERRY
     Actor : UMA WOOD
     Actor : WOODY JOLIE
     Actor : SPENCER DEPP
     Actor : CHRIS DEPP
     Actor : LAURENCE BULLOCK
     Actor : RENEE BALL
     Category : Music

If you're using Java and PL/SQL…

… then you can experiment with jOOQ and Oracle yourself. The Oracle port of the Sakila database is available from this URL for free, under the terms of the BSD license: https://github.com/jOOQ/jOOQ/tree/master/jOOQ-examples/Sakila/oracle-sakila-db

Finally, it is time to enjoy writing PL/SQL again!

Reference: Painless Access from Java to PL/SQL Procedures with jOOQ from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....

First steps with REST, Spray and Scala

On this site you can already find a couple of articles on how to do REST with a number of different frameworks. You can find an old one on Play, some on Scalatra, and I even started an (as of yet unfinished) series on Express. So instead of finishing the series on Express, I'm going to look at Spray in this article.

Getting started

The first thing we need to do is get the correct libraries set up so we can start development (I use IntelliJ IDEA, but you can use whatever you want). The easiest way to get started is by using SBT. I've used the following minimal SBT file to get started:

organization := "org.smartjava"

version := "0.1"

scalaVersion := "2.11.2"

scalacOptions := Seq("-unchecked", "-deprecation", "-encoding", "utf8")

libraryDependencies ++= {
  val akkaV = "2.3.6"
  val sprayV = "1.3.2"
  Seq(
    "io.spray"          %% "spray-can"     % sprayV withSources() withJavadoc(),
    "io.spray"          %% "spray-routing" % sprayV withSources() withJavadoc(),
    "io.spray"          %% "spray-json"    % "1.3.1",
    "io.spray"          %% "spray-testkit" % sprayV % "test",
    "com.typesafe.akka" %% "akka-actor"    % akkaV,
    "com.typesafe.akka" %% "akka-testkit"  % akkaV % "test",
    "org.specs2"        %% "specs2-core"   % "2.3.11" % "test",
    "org.scalaz"        %% "scalaz-core"   % "7.1.0"
  )
}

After you've imported this file into your IDE of choice, you should have the correct Spray and Akka libraries to get started.

Create a launcher

Next, let's create a launcher which you can use to run our Spray server. For this we just create an object, creatively named Boot, which extends the standard Scala App trait:

package org.smartjava

import akka.actor.{ActorSystem, Props}
import akka.io.IO
import spray.can.Http
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.duration._

object Boot extends App {

  // create our actor system with the name smartjava
  implicit val system = ActorSystem("smartjava")
  val service = system.actorOf(Props[SJServiceActor], "sj-rest-service")

  // IO requires an implicit ActorSystem, and ? requires an implicit timeout
  // Bind HTTP to the specified service.
  implicit val timeout = Timeout(5.seconds)
  IO(Http) ? Http.Bind(service, interface = "localhost", port = 8080)
}

There isn't that much happening in this object. We send an Http.Bind() message (better said, we 'ask') to register a listener. If binding succeeds, our service will receive messages whenever a request is received on the port.

Receiving actor

Now let's look at the actor to which we'll be sending the messages from the IO subsystem:

package org.smartjava

import akka.actor.Actor
import spray.routing._
import spray.http._
import MediaTypes._
import spray.httpx.SprayJsonSupport._
import MyJsonProtocol._

// simple actor that handles the routes.
class SJServiceActor extends Actor with HttpService {

  // required as implicit value for the HttpService
  // included from SJService
  def actorRefFactory = context

  // we don't create a receive function ourselves, but use
  // the runRoute function from the HttpService to create
  // one for us, based on the supplied routes.
  def receive = runRoute(aSimpleRoute ~ anotherRoute)

  // some sample routes
  val aSimpleRoute = {...}
  val anotherRoute = {...}
}

So what happens here is that we use the runRoute function, provided by HttpService, to create the receive function that handles the incoming messages.

Creating routes

The final step is to create some route handling code.
We'll go into more detail on this part in one of the next articles, so for now I'll just show you how to create a route that, based on the incoming media type, sends back some JSON. We'll use the standard JSON support from Spray for this. As a JSON object we'll use the following very basic case class, which we extend with JSON support:

package org.smartjava

import spray.json.DefaultJsonProtocol

object MyJsonProtocol extends DefaultJsonProtocol {
  implicit val personFormat = jsonFormat3(Person)
}

case class Person(name: String, firstName: String, age: Long)

This way Spray will marshal this object to JSON when we set the correct response media type. Now that we've got our response object, let's look at the code for the routes:

// handles the api path, we could also define these in separate files
// this path responds to get queries, and makes a selection on the
// media-type.
val aSimpleRoute = {
  path("path1") {
    get {

      // Get the value of the content-header. Spray
      // provides multiple ways to do this.
      headerValue({
        case x@HttpHeaders.`Content-Type`(value) => Some(value)
        case default => None
      }) {
        // the header is passed in containing the content type
        // we match the header using a case statement, and depending
        // on the content type we return a specific object
        header => header match {

          // if we have this content type we create a custom response
          case ContentType(MediaType("application/vnd.type.a"), _) => {
            respondWithMediaType(`application/json`) {
              complete {
                Person("Bob", "Type A", System.currentTimeMillis());
              }
            }
          }

          // if we have another content type we return a different object.
          case ContentType(MediaType("application/vnd.type.b"), _) => {
            respondWithMediaType(`application/json`) {
              complete {
                Person("Bob", "Type B", System.currentTimeMillis());
              }
            }
          }

          // if content types do not match, return an error code
          case default => {
            complete {
              HttpResponse(406);
            }
          }
        }
      }
    }
  }
}

// handles the other path, we could also define these in separate files
// This is just a simple route to explain the concept
val anotherRoute = {
  path("path2") {
    get {
      // respond with text/html.
      respondWithMediaType(`text/html`) {
        complete {
          // respond with a set of HTML elements
          <html>
            <body>
              <h1>Path 2</h1>
            </body>
          </html>
        }
      }
    }
  }
}

A lot of code is in there, so let's highlight a couple of elements in detail:

val aSimpleRoute = {
  path("path1") {
    get {...}
  }
}

This starting point of the route first checks whether the request is made to the "localhost:8080/path1" path and then checks the HTTP method. In this case we're only interested in GET requests. Once we've got a GET request, we do the following:

// Get the value of the content-header. Spray
// provides multiple ways to do this.
headerValue({
  case x@HttpHeaders.`Content-Type`(value) => Some(value)
  case default => None
}) {
  // the header is passed in containing the content type
  // we match the header using a case statement, and depending
  // on the content type we return a specific object
  header => header match {

    // if we have this content type we create a custom response
    case ContentType(MediaType("application/vnd.type.a"), _) => {
      respondWithMediaType(`application/json`) {
        complete {
          Person("Bob", "Type A", System.currentTimeMillis());
        }
      }
    }

    // if we have another content type we return a different object.
    case ContentType(MediaType("application/vnd.type.b"), _) => {
      respondWithMediaType(`application/json`) {
        complete {
          Person("Bob", "Type B", System.currentTimeMillis());
        }
      }
    }

    // if content types do not match, return an error code
    case default => {
      complete {
        HttpResponse(406);
      }
    }
  }
}

In this piece of code we extract the Content-Type header of the request and, based on that, determine the response. The response is automatically converted to JSON because the response media type is set to application/json. If a media type is provided which we don't understand, we return a 406 response.

Let's test this

Now let's test whether this is working. Spray provides its own libraries and classes for testing, but for now let's just use a simple REST client. For this I usually use the Chrome Advanced Rest Client. In the following screenshots you can see three calls being made to http://localhost:8080/path1:

Call with media type "application/vnd.type.a"
Call with media type "application/vnd.type.b"
Call with media type "application/vnd.type.c"

As you can see, the responses exactly match the routes we defined.
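If you prefer the command line over a GUI client, the same three calls can be made with curl. The media types are the ones assumed by the routes above; the first two should return the JSON-marshalled Person, the third a 406:

curl -H "Content-Type: application/vnd.type.a" http://localhost:8080/path1
curl -H "Content-Type: application/vnd.type.b" http://localhost:8080/path1
curl -H "Content-Type: application/vnd.type.c" http://localhost:8080/path1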
What is next

In the following article we'll connect Spray IO to a database, make testing a little bit easier, and explore a number of other Spray.IO features.

Reference: First steps with REST, Spray and Scala from our JCG partner Jos Dirksen at the Smart Java blog....

A Guide to Android RecyclerView and CardView

The new support library in Android L (Lollipop) introduced two new UI widgets: RecyclerView and CardView. The RecyclerView is a more advanced and more flexible version of the ListView. This new component is a big step because the ListView is one of the most-used UI widgets. The CardView widget, on the other hand, is a new component that does not "upgrade" an existing component. In this tutorial, I'll explain how to use these two widgets and show how we can mix them. Let's start by diving into the RecyclerView.

RecyclerView: Introduction

As I mentioned, RecyclerView is more flexible than ListView, even if it introduces some complexities. We all know how to use ListView in our app, and we know that if we want to increase ListView performance we can use a pattern called ViewHolder. This pattern consists of a simple class that holds the references to the UI components for each row in the ListView. This pattern avoids looking up the UI components every time the system shows a row in the list. Even if this pattern introduces some benefits, we can implement the ListView without using it at all. RecyclerView forces us to use the ViewHolder pattern.

To show how we can use the RecyclerView, let's suppose we want to create a simple app that shows a list of contact cards. The first thing we should do is create the main layout. RecyclerView is very similar to the ListView and we can use it in the same way:

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:paddingLeft="@dimen/activity_horizontal_margin"
    android:paddingRight="@dimen/activity_horizontal_margin"
    android:paddingTop="@dimen/activity_vertical_margin"
    android:paddingBottom="@dimen/activity_vertical_margin"
    tools:context=".MyActivity">

    <android.support.v7.widget.RecyclerView
        android:id="@+id/cardList"
        android:layout_width="match_parent"
        android:layout_height="match_parent"/>

</RelativeLayout>

As you'll notice from the layout shown above, the RecyclerView is available in the Android support library, so we have to modify build.gradle to include this dependency:

dependencies {
    ...
    compile 'com.android.support:recyclerview-v7:21.0.0'
}

Now, in the onCreate method we can get the reference to our RecyclerView and configure it:

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_my);

    RecyclerView recList = (RecyclerView) findViewById(R.id.cardList);
    recList.setHasFixedSize(true);
    LinearLayoutManager llm = new LinearLayoutManager(this);
    llm.setOrientation(LinearLayoutManager.VERTICAL);
    recList.setLayoutManager(llm);
}

If you look at the code above, you'll notice some differences between the RecyclerView and ListView. RecyclerView requires a layout manager. This component positions item views inside the row and determines when it is time to recycle the views. The library provides a default layout manager called LinearLayoutManager.
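LinearLayoutManager is not the only option. Assuming the same recyclerview-v7 artifact shown above, the very same items can be laid out as a grid simply by swapping the layout manager — a small sketch:

// GridLayoutManager arranges the same item views in a grid;
// the second constructor argument is the number of columns.
GridLayoutManager glm = new GridLayoutManager(this, 2);
recList.setLayoutManager(glm);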
CardView

The CardView UI component shows information inside cards. We can customise its corners, its elevation, and so on. We want to use this component to show contact information. These cards will be the rows of the RecyclerView, and we will see later how to integrate these two components. For now, we can define our card layout:

<android.support.v7.widget.CardView
    xmlns:card_view="http://schemas.android.com/apk/res-auto"
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/card_view"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    card_view:cardCornerRadius="4dp"
    android:layout_margin="5dp">

    <RelativeLayout
        android:layout_width="match_parent"
        android:layout_height="match_parent">

        <TextView
            android:id="@+id/title"
            android:layout_width="match_parent"
            android:layout_height="20dp"
            android:background="@color/bkg_card"
            android:text="contact det"
            android:gravity="center_vertical"
            android:textColor="@android:color/white"
            android:textSize="14dp"/>

        <TextView
            android:id="@+id/txtName"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="Name"
            android:gravity="center_vertical"
            android:textSize="10dp"
            android:layout_below="@id/title"
            android:layout_marginTop="10dp"
            android:layout_marginLeft="5dp"/>

        <TextView
            android:id="@+id/txtSurname"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="Surname"
            android:gravity="center_vertical"
            android:textSize="10dp"
            android:layout_below="@id/txtName"
            android:layout_marginTop="10dp"
            android:layout_marginLeft="5dp"/>

        <TextView
            android:id="@+id/txtEmail"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="Email"
            android:textSize="10dp"
            android:layout_marginTop="10dp"
            android:layout_alignParentRight="true"
            android:layout_marginRight="150dp"
            android:layout_alignBaseline="@id/txtName"/>

    </RelativeLayout>

</android.support.v7.widget.CardView>

As you can see, the CardView is very simple to use. This component is available in another Android support library, so we have to add that dependency too:

dependencies {
    compile 'com.android.support:cardview-v7:21.0.0'
    compile 'com.android.support:recyclerview-v7:21.0.0'
}

RecyclerView: Adapter

The adapter is a component that stands between the data model we want to show in our app UI and the UI component that renders this information. In other words, an adapter guides the way the information is shown in the UI. So if we want to display our contacts, we need an adapter for the RecyclerView. This adapter must extend the class RecyclerView.Adapter, parameterised with our class that implements the ViewHolder pattern:

public class MyAdapter extends RecyclerView.Adapter<MyHolder> {
    .....
}

We now have to override two methods so that we can implement our logic: onCreateViewHolder is called whenever a new instance of our ViewHolder class is created, and onBindViewHolder is called when the OS binds the view with the data — or, in other words, when the data is shown in the UI. In this case, the adapter helps us combine the RecyclerView and CardView. The layout we defined before for the cards will be the row layout of our contact list in the RecyclerView. Before doing so, we have to define the data model that stands at the base of our UI (i.e. what information we want to show). For this purpose, we can define a simple class:

public class ContactInfo {
    protected String name;
    protected String surname;
    protected String email;
    protected static final String NAME_PREFIX = "Name_";
    protected static final String SURNAME_PREFIX = "Surname_";
    protected static final String EMAIL_PREFIX = "email_";
}

And finally, we are ready to create our adapter.
If you remember what we said before about the ViewHolder pattern, we have to code our class that implements it:

public static class ContactViewHolder extends RecyclerView.ViewHolder {
    protected TextView vName;
    protected TextView vSurname;
    protected TextView vEmail;
    protected TextView vTitle;

    public ContactViewHolder(View v) {
        super(v);
        vName    = (TextView) v.findViewById(R.id.txtName);
        vSurname = (TextView) v.findViewById(R.id.txtSurname);
        vEmail   = (TextView) v.findViewById(R.id.txtEmail);
        vTitle   = (TextView) v.findViewById(R.id.title);
    }
}

Looking at the code, in the class constructor we get the references to the views we defined in our card layout. Now it is time to code our adapter:

public class ContactAdapter extends
        RecyclerView.Adapter<ContactAdapter.ContactViewHolder> {

    private List<ContactInfo> contactList;

    public ContactAdapter(List<ContactInfo> contactList) {
        this.contactList = contactList;
    }

    @Override
    public int getItemCount() {
        return contactList.size();
    }

    @Override
    public void onBindViewHolder(ContactViewHolder contactViewHolder, int i) {
        ContactInfo ci = contactList.get(i);
        contactViewHolder.vName.setText(ci.name);
        contactViewHolder.vSurname.setText(ci.surname);
        contactViewHolder.vEmail.setText(ci.email);
        contactViewHolder.vTitle.setText(ci.name + " " + ci.surname);
    }

    @Override
    public ContactViewHolder onCreateViewHolder(ViewGroup viewGroup, int i) {
        View itemView = LayoutInflater.from(viewGroup.getContext()).
            inflate(R.layout.card_layout, viewGroup, false);
        return new ContactViewHolder(itemView);
    }

    public static class ContactViewHolder extends RecyclerView.ViewHolder {
        ...
    }
}

In our implementation, we override onBindViewHolder, where we bind the data (our contact info) to the views. Notice that we don't look up UI components but simply use the references stored in our ContactViewHolder. In onCreateViewHolder we return our ContactViewHolder, inflating the row layout (the CardView in our case). The only step left is to wire the adapter to the RecyclerView, as shown below.
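The article doesn't show this final wiring explicitly; inside onCreate it might look like the following sketch, where createList is an invented helper producing dummy ContactInfo items:

// Hook the adapter up to the RecyclerView configured earlier.
// createList(int) is hypothetical — any List<ContactInfo> will do.
ContactAdapter adapter = new ContactAdapter(createList(30));
recList.setAdapter(adapter);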
Run the app and you'll see the list of contact cards.

Source code available @ github.

Reference: A Guide to Android RecyclerView and CardView from our JCG partner Francesco Azzola at the Surviving w/ Android blog....

Pair Programming for Team Building

Extreme programming (XP) introduced most people to pair programming. The theory was that the sooner code is reviewed, the more effective the review — so how much more effective can you be if you do that review right away?

Pair programming increases productivity by 3% and quality by 5%.

The reason it isn't a better practice is that two people are being used to produce a single result, and so it is not very efficient. For more information about the marginal productivity, see Capers Jones [1]. However, as a team-building tool, pair programming can be extremely effective when used in specific situations where high productivity is maintained:

- Training new team members in coding conventions
- Sharing individual productivity techniques
- Working through complex sections of code

New Team Members

The first issue is self-explanatory: pair programming allows you to explain your coding conventions while working on actual projects. It also gives you a fairly good glimpse into how that team member will work with the group. The key here is that the new member should pair program with different people every day until they have worked with the entire team. This will speed up the integration of new members and get everyone familiar with each other.

Sharing Productivity Practices

One of the key benefits of pair programming is that it is an ideal time to share productivity practices. Surprisingly, it is not just the less experienced programmers that learn from the more experienced ones. Often, more experienced programmers have picked up habits that they are not even aware of. Working with newer programmers can expose you to information on IDEs and new productivity tools that you are not aware of. As much as we do keep up, there is continually new stuff coming out, and the newer programmers are aware of it. In addition, there are sub-optimal habits that we all pick up and no longer notice because we do them all the time.

Working Through Complex Code

Once you have planned a complex section of code, it can be very helpful to build that section of code as a pair. For information on planning complex code see:

- Not Planning is for Losers
- Productive Programmers are Smart and Lazy

Planning is half the work; making sure that you implement that plan can often require two people to ensure that all loose ends (exceptions, boundary cases, etc.) are taken care of. In particular, these are the sections of code that you want two pairs of eyes on, as you are much more likely to recognize a missed alternative or work through weird conditions.

Summary

Used appropriately, pair programming can be a great tool for integrating new members into a team, sharing productivity techniques, and reducing defects and improving quality in difficult sections of code.

References

1. Jones, Capers and Bonsignour, Olivier. The Economics of Software Quality. Addison Wesley. 2011.
2. Jones, Capers. Scoring and Evaluating Software Methods, Practices, and Results. 2008.

Reference: Pair Programming for Team Building from our JCG partner Dalip Mahal at the Accelerated Development blog....

How Immutability Helps

In a few recent posts, including "Getters/Setters. Evil. Period.", "Objects Should Be Immutable", and "Dependency Injection Containers are Code Polluters", I universally labelled all mutable objects with "setters" (object methods starting with set) as evil. My argument was based mostly on metaphors and abstract examples. Apparently, this wasn't convincing enough for many of you — I received a few requests asking to provide more specific and practical examples. Thus, in order to illustrate my strongly negative attitude to "mutability via setters", I took the existing commons-email Java library from Apache and re-designed it my way, without setters and with "object thinking" in mind. I released my library as part of the jcabi family — jcabi-email. Let's see what benefits we get from a "pure" object-oriented and immutable approach, without getters.

Here is how your code will look if you send an email using commons-email:

Email email = new SimpleEmail();
email.setHostName("smtp.googlemail.com");
email.setSmtpPort(465);
email.setAuthenticator(new DefaultAuthenticator("user", "pwd"));
email.setFrom("yegor@teamed.io", "Yegor Bugayenko");
email.addTo("dude@jcabi.com");
email.setSubject("how are you?");
email.setMsg("Dude, how are you?");
email.send();

Here is how you do the same with jcabi-email:

Postman postman = new Postman.Default(
    new SMTP("smtp.googlemail.com", 465, "user", "pwd")
);
Envelope envelope = new Envelope.MIME(
    new Array<Stamp>(
        new StSender("Yegor Bugayenko <yegor@teamed.io>"),
        new StRecipient("dude@jcabi.com"),
        new StSubject("how are you?")
    ),
    new Array<Enclosure>(
        new EnPlain("Dude, how are you?")
    )
);
postman.send(envelope);

I think the difference is obvious. In the first example, you're dealing with a monster class that can do everything for you, including sending your MIME message via SMTP, creating the message, configuring its parameters, adding MIME parts to it, etc. The Email class from commons-email is really a huge class — 33 private properties, over a hundred methods, about two thousand lines of code. First, you configure the class through a bunch of setters, and then you ask it to send() an email for you.

In the second example, we have seven objects instantiated via seven new calls. Postman is responsible for packaging a MIME message; SMTP is responsible for sending it via SMTP; stamps (StSender, StRecipient, and StSubject) are responsible for configuring the MIME message before delivery; enclosure EnPlain is responsible for creating a MIME part for the message we're going to send. We construct these seven objects, encapsulating one into another, and then we ask the postman to send() the envelope for us.

What's Wrong With a Mutable Email?

From a user perspective, there is almost nothing wrong. Email is a powerful class with multiple controls — just hit the right one and the job gets done. However, from a developer perspective the Email class is a nightmare. Mostly because the class is very big and difficult to maintain.

Because the class is so big, every time you want to extend it by introducing a new method, you're facing the fact that you're making the class even worse — longer, less cohesive, less readable, less maintainable, etc. You have a feeling that you're digging into something dirty and that there is no hope of ever making it cleaner. I'm sure you're familiar with this feeling — most legacy applications look that way.
They have huge multi-line "classes" (in reality, COBOL programs written in Java) that were inherited from a few generations of programmers before you. When you start, you're full of energy, but after a few minutes of scrolling through such a "class" you say — "screw it, it's almost Saturday".

Because the class is so big, there is no data hiding or encapsulation any more — 33 variables are accessible by over 100 methods. What is hidden? This Email.java file in reality is a big, procedural 2000-line script, called a "class" by mistake. Nothing is hidden; once you cross the border of the class by calling one of its methods, you have full access to all the data you may need. Why is this bad? Well, why do we need encapsulation in the first place? In order to protect one programmer from another, aka defensive programming. While I'm busy changing the subject of the MIME message, I want to be sure that I'm not interfered with by some other method's activity that is changing the sender and touching my subject by mistake. Encapsulation helps us narrow down the scope of the problem, while this Email class is doing exactly the opposite.

Because the class is so big, its unit testing is even more complicated than the class itself. Why? Because of the multiple inter-dependencies between its methods and properties. In order to test setCharset() you have to prepare the entire object by calling a few other methods, then you have to call send() to make sure the message being sent actually uses the encoding you specified. Thus, in order to test the one-line method setCharset(), you run the entire integration testing scenario of sending a full MIME message through SMTP. Obviously, if something gets changed in one of the methods, almost every test method will be affected. In other words, the tests are very fragile, unreliable and over-complicated.

I can go on and on with this "because the class is so big", but I think it is obvious that a small, cohesive class is always better than a big one. It is obvious to me, to you, and to any object-oriented programmer. But why is it not so obvious to the developers of Apache Commons Email? I don't think they are stupid or un-educated. What is it then?

How and Why Did It Happen?

This is how it always happens. You start to design a class as something cohesive, solid, and small. Your intentions are very positive. Very soon you realize that there is something else that this class has to do. Then, something else. Then, even more. The best way to make your class more and more powerful is by adding setters that inject configuration parameters into the class so that it can process them inside, isn't it?

This is the root cause of the problem! The root cause is our ability to insert data into mutable objects via configuration methods, also known as "setters". When an object is mutable and allows us to add setters whenever we want, we will do it without limits. Let me put it this way — mutable classes tend to grow in size and lose cohesiveness. If the commons-email authors had made the Email class immutable in the beginning, they wouldn't have been able to add so many methods into it and encapsulate so many properties. They wouldn't have been able to turn it into a monster. Why? Because an immutable object only accepts state through a constructor. Can you imagine a 33-argument constructor? Of course not. When you make your class immutable in the first place, you are forced to keep it cohesive, small, solid and robust. Because you can't encapsulate too much, and you can't modify what's encapsulated.
Just two or three arguments of a constructor and you're done.

How Did I Design An Immutable Email?

When I was designing jcabi-email I started with a small and simple class: Postman. Well, it is an interface, since I never make interface-less classes. So, Postman is… a post man. He is delivering messages to other people. First, I created a default version of it (I omit the ctor, for the sake of brevity):

import javax.mail.Message;

@Immutable
class Postman.Default implements Postman {
    private final String host;
    private final int port;
    private final String user;
    private final String password;

    @Override
    void send(Message msg) {
        // create SMTP session
        // create transport
        // transport.connect(this.host, this.port, etc.)
        // transport.send(msg)
        // transport.close();
    }
}

Good start, it works. What now? Well, the Message is difficult to construct. It is a complex class from the JDK that requires some manipulations before it can become a nice HTML email. So I created an envelope, which will build this complex object for me (pay attention, both Postman and Envelope are immutable and annotated with @Immutable from jcabi-aspects):

@Immutable
interface Envelope {
    Message unwrap();
}

I also refactor the Postman to accept an envelope, not a message:

@Immutable
interface Postman {
    void send(Envelope env);
}

So far, so good. Now let's try to create a simple implementation of Envelope:

@Immutable
class MIME implements Envelope {
    @Override
    public Message unwrap() {
        return new MimeMessage(
            Session.getDefaultInstance(new Properties())
        );
    }
}

It works, but it does nothing useful yet. It only creates an absolutely empty MIME message and returns it. How about adding a subject to it and both To: and From: addresses (pay attention, the MIME class is also immutable):

@Immutable
class Envelope.MIME implements Envelope {
    private final String subject;
    private final String from;
    private final Array<String> to;

    public MIME(String subj, String sender, Iterable<String> rcpts) {
        this.subject = subj;
        this.from = sender;
        this.to = new Array<String>(rcpts);
    }

    @Override
    public Message unwrap() {
        Message msg = new MimeMessage(
            Session.getDefaultInstance(new Properties())
        );
        msg.setSubject(this.subject);
        msg.setFrom(new InternetAddress(this.from));
        for (String email : this.to) {
            msg.setRecipient(
                Message.RecipientType.TO,
                new InternetAddress(email)
            );
        }
        return msg;
    }
}

Looks correct and it works. But it is still too primitive. How about CC: and BCC:? What about email text? How about PDF enclosures? What if I want to specify the encoding of the message? What about Reply-To? Can I add all these parameters to the constructor? Remember, the class is immutable and I can't introduce a setReplyTo() method. I have to pass the replyTo argument into its constructor. That's impossible, because the constructor would have too many arguments, and nobody would be able to use it.

So, what do I do? Well, I started to think: how can we break the concept of an "envelope" into smaller concepts — and this is what I invented. Like a real-life envelope, my MIME object will have stamps.
Stamps will be responsible for configuring an object Message (again, Stamp is immutable, as are all of its implementors):

@Immutable
interface Stamp {
    void attach(Message message);
}

Now, I can simplify my MIME class to the following:

@Immutable
class Envelope.MIME implements Envelope {
    private final Array<Stamp> stamps;

    public MIME(Iterable<Stamp> stmps) {
        this.stamps = new Array<Stamp>(stmps);
    }

    @Override
    public Message unwrap() {
        Message msg = new MimeMessage(
            Session.getDefaultInstance(new Properties())
        );
        for (Stamp stamp : this.stamps) {
            stamp.attach(msg);
        }
        return msg;
    }
}

Now, I will create stamps for the subject, for To:, for From:, for CC:, for BCC:, etc. As many stamps as I like — one of them is sketched below.
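For illustration, here is what one such stamp might look like. This is a sketch, not the actual jcabi-email source — the real StSubject may differ, and exception handling is simplified to fit the attach(Message) signature shown above:

@Immutable
final class StSubject implements Stamp {
    private final String subject;

    StSubject(String subject) {
        this.subject = subject;
    }

    @Override
    public void attach(Message message) {
        try {
            // configure just one aspect of the message
            message.setSubject(this.subject);
        } catch (MessagingException ex) {
            throw new IllegalStateException(ex);
        }
    }
}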
The class MIME will stay the same — small, cohesive, readable, solid, etc. What is important here is why I made the decision to refactor while the class was relatively small. Indeed, I started to worry about these stamp classes when my MIME class was just 25 lines in size. That is exactly the point of this article — immutability forces you to design small and cohesive objects.

Without immutability, I would have gone in the same direction as commons-email. My MIME class would have grown in size and sooner or later would have become as big as Email from commons-email. The only thing that stopped me was the necessity to refactor it, because I wasn't able to pass all arguments through a constructor. Without immutability, I wouldn't have had that motivator and I would have done what the Apache developers did with commons-email — bloat the class and turn it into an unmaintainable monster.

That's jcabi-email. I hope this example was illustrative enough and that you will start writing cleaner code with immutable objects.

Related Posts

You may also find these posts interesting:

- Paired Brackets
- Avoid String Concatenation
- Typical Mistakes in Java Code
- DI Containers are Code Polluters
- Getters/Setters. Evil. Period.

Reference: How Immutability Helps from our JCG partner Yegor Bugayenko at the About Programming blog....

Have You Ever Wondered About the Difference Between NOT NULL and DEFAULT?

When writing DDL in SQL, you can specify a couple of constraints on columns, like NOT NULL or DEFAULT constraints. Some people might wonder if the two constraints are actually redundant, i.e. is it still necessary to specify a NOT NULL constraint if there is already a DEFAULT clause? The answer is: Yes! Yes, you should still specify that NOT NULL constraint. And no, the two constraints are not redundant. The answer I gave here on Stack Overflow wraps it up by example, which I'm going to repeat here on our blog:

DEFAULT is the value that will be inserted in the absence of an explicit value in an insert / update statement. Let's assume your DDL did not have the NOT NULL constraint:

ALTER TABLE tbl ADD COLUMN col VARCHAR(20) DEFAULT "MyDefault"

Then you could issue these statements:

-- 1. This will insert "MyDefault" into tbl.col
INSERT INTO tbl (A, B) VALUES (NULL, NULL);

-- 2. This will insert "MyDefault" into tbl.col
INSERT INTO tbl (A, B, col) VALUES (NULL, NULL, DEFAULT);

-- 3. This will insert "MyDefault" into tbl.col
INSERT INTO tbl (A, B, col) DEFAULT VALUES;

-- 4. This will insert NULL into tbl.col
INSERT INTO tbl (A, B, col) VALUES (NULL, NULL, NULL);

Alternatively, you can also use DEFAULT in UPDATE statements, according to the SQL-1992 standard:

-- 5. This will update "MyDefault" into tbl.col
UPDATE tbl SET col = DEFAULT;

-- 6. This will update NULL into tbl.col
UPDATE tbl SET col = NULL;

Note, not all databases support all of these SQL standard syntaxes. Adding the NOT NULL constraint will cause an error with statements 4 and 6, while 1-3 and 5 are still valid statements. So, to answer your question: no, NOT NULL and DEFAULT are not redundant.

That's already quite interesting: the DEFAULT constraint really only interacts with DML statements and how they specify the various columns that they're updating. The NOT NULL constraint is a much more universal guarantee, one that constrains a column's content also "outside" of the manipulating DML statements. For instance, if you have a set of data and then you add a DEFAULT constraint, this will not affect your existing data, only new data being inserted. If, however, you have a set of data and then you add a NOT NULL constraint, you can actually only do so if the constraint is valid — i.e. when there are no NULL values in your column. Otherwise, an error will be raised.
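To make that last point concrete, here's a sketch of adding the constraint after the fact. The exact syntax varies by database — an Oracle-style and a standard / PostgreSQL-style variant are shown; both fail with an error if existing rows contain NULL in col:

-- Oracle-style
ALTER TABLE tbl MODIFY (col VARCHAR(20) NOT NULL);

-- Standard / PostgreSQL-style
ALTER TABLE tbl ALTER COLUMN col SET NOT NULL;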
Query performance

Another very interesting use case that applies only to NOT NULL constraints is their usefulness for query optimisers and query execution plans. Assume that you have such a constraint on your column and then you're using a NOT IN predicate:

SELECT *
FROM table
WHERE value NOT IN (
  SELECT not_nullable
  FROM other_table
)

In particular, when you're using Oracle, the above query will be much faster when the not_nullable column has an index AND that particular constraint, because unfortunately, NULL values are not included in Oracle indexes. Read more about NULL and NOT IN predicates here.

Reference: Have You Ever Wondered About the Difference Between NOT NULL and DEFAULT? from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....

Guess, don’t measure

I have lost count of how many occasions on which I have recommended starting all performance tuning activities by setting goals and measuring a baseline, instead of tuning random bits in your source code. In the current post I will give you a counterexample, based on what just happened.

We have been revamping our own internal monitoring tooling to make sure our production deployment is fast and error-free. For this we have rebuilt our logging and monitoring tooling, which now among other things includes a bunch of Logstash services aggregating logs into a central log management solution and a Grafana – Kibana combo to make sense of the logs.

Last week we were finally ready to ship the new monitoring solution to production. Immediately we started receiving data about how (badly) some of our public services behaved. For example, acquiring the list of JVMs Plumbr is monitoring was one of the operations whose performance was not satisfactory for us: the data showed that 10% of the queries to the /jvms URL were taking multiple seconds to complete. This was clearly violating the SLA we had set for this particular operation.

Completely in parallel to surfacing this data from our monitoring solution, our product owner had entered a bug in our issue tracker complaining that the list of JVMs for a particular customer with 25 different JVMs was taking "forever" to render. The "forever" was in reality seven seconds, but what's important is that the problem was actually affecting real users and was reported by real users as well.

As a next parallel activity, one of the engineers, while implementing completely unrelated functionality, stumbled upon a poorly designed query. The exact details are not too relevant, but what was once a simple query had over time evolved into yet another example of the N+1 query problem in action. He had introduced the problem to me and had a permit to rewrite the design. By coincidence, the query in question was the very same query fetching the list of JVMs.

Today, in our regular monitoring review, we spotted a sudden drop in the execution times for this operation. What used to look like multi-second operations was now reduced to a couple of hundred milliseconds.

Our first reaction to this data was: "great, the optimization our engineer made is clearly working — look at this 10x drop in response times". Proud of this, we summoned the engineer in question to the meeting to give him feedback and a pat on the back, showing him the impact of his work. When we explained the chart to the engineer, he just sat there, silent and confused. Five more minutes of confusion and embarrassment later, it became apparent that the change he had built the optimization into had not even been merged to the production branch yet, so there was no way this performance improvement was related to his work.

This was now super awkward in many ways. We had a clear sign of a performance improvement in place and no idea why it had happened. The doubt went to the level where we suspected that the functionality had to be broken, but numerous checks to the service demonstrated it was behaving exactly as it was supposed to.

And then it struck me. The day before, I had been dealing with some data integrity issues and discovered a missing foreign key in our storage. I had added the key and an index to it. Checking the continuous integration logs revealed that this change had been propagated to production exactly when this performance improvement appeared.
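For illustration, the kind of change in question might look like the following. The table and column names are invented — the post doesn't show the actual schema — but the pattern (a foreign key plus an index on the referencing column) is the one described:

-- Hypothetical names: a jvm row referencing its owning account
ALTER TABLE jvm
  ADD CONSTRAINT fk_jvm_account
  FOREIGN KEY (account_id) REFERENCES account (id);

-- The index is what speeds up lookups and joins on account_id
CREATE INDEX idx_jvm_account_id ON jvm (account_id);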
So we had found the actual reason for this change in response times. And still, here I stand, feeling a mix of surprise and embarrassment. On one hand, it is awesome that we managed to patch a performance problem before the impact even became fully evident. On the other hand, the whole situation contradicts my core beliefs, since I always say "measure, do not guess". Considering that actions speak louder than words, I now have some food for thought.

Reference: Guess, don't measure from our JCG partner Nikita Salnikov Tarnovski at the Plumbr Blog blog....

Development Horror Story – Mail Bomb

Based on my session idea at JavaOne about things that went terribly wrong in our development careers, I thought about writing a few of these stories. I'll start with one of my favourites: crashing a customer's mail server after generating more than 23 million emails! Yes, that's right, 23 million!

History

A few years ago, I joined a project that had been in development for several months but had no production release yet. Actually, the project was scheduled to replace an existing application in the upcoming weeks. My first task in the project was to figure out what was needed to deploy the application in a production environment and replace the old application. The application had a considerable number of users (around 50 k), but not all of them were active. The new application had a new feature to exclude users that hadn't logged into the application for the last few months. This was implemented as a timer (executed daily), and an email notification was sent to each excluded user, warning him that he was excluded from the application.

The Problem

The release was installed on a Friday (yes, Friday!), and everyone went for a rest. On Monday morning, all hell broke loose! The customer's mail server was down, and nobody had any idea why. The first reports indicated that the mail server was out of disk space, because it had around 2 million emails pending delivery and a lot more incoming. What the hell happened?

The Cause

Even with the server down, support was able to show us a copy of an email stuck in the server. It was consistent with the email sent when a user was excluded. It didn't make any sense, because we had counted the number of users to be excluded and they were around 28 k, so only 28 k emails should have been sent. Even if all users were excluded, the number could not be higher than 50 k (the total number of users).

Invalid Email

Looking into the code, we found a bug that would cause a user to not be excluded if he had an invalid email address. As a consequence, these users were caught every time the timer executed. Of the total 28 k users to be excluded, around 26 k had invalid emails. From Friday to Monday, we count 3 executions * 26 k users, so 78 k emails. OK, so now we have an email increase, but not close enough to the reported numbers.

Timer Bug

Actually, the timer also had a bug. It was not scheduled to be executed daily, but every 8 hours. Let's adjust the numbers: 3 days * 3 executions a day * 26 k users brings the total to 234 k emails. A considerable increase, but still far from a big number.

Additional Node

Operations had installed the application on a second node, and the timer was executed on both. So, a double increase. Let's update: 2 * 234 k emails brings the total to 468 k emails.

No-reply Address

Since the emails were automated, you usually set up a no-reply address as the email sender. Now the problem was that the domain of the no-reply address was invalid. Combining this with the users' invalid emails, the mail server entered a loop state: each undeliverable user email generated an error email sent to the no-reply address, which was invalid as well, and this caused a returned email to the server again. The loop ends when the maximum hop count is exceeded. In this case it was 50.

Now everything starts to make sense! Let's update the numbers: 26 k users * 3 days * 3 executions * 2 servers * 50 hops, for a grand total of 23.4 million emails!

Aftermath

The customer lost all their email from Friday to Monday, but it was possible to recover the mail server.
Reference: Development Horror Story – Mail Bomb from our JCG partner Roberto Cortez at the Roberto Cortez Java Blog blog....

CallSerially The EDT & InvokeAndBlock (Part 2)

The last time we talked about the EDT we covered some of the basic ideas, such as callSerially(). We left out two major concepts that are somewhat more advanced.

Invoke And Block

When we write typical Java code we like it to run in sequence, like this:

doOperationA();
doOperationB();
doOperationC();

This normally works well, but on the EDT it might be a problem: if one of the operations is slow it might stall the whole EDT (painting, event processing, etc.). Normally we can just move operations into a separate thread, e.g.:

doOperationA();
new Thread() {
    public void run() {
        doOperationB();
    }
}.start();
doOperationC();

Unfortunately, this means that operation C will happen in parallel to operation B, which might be a problem. Instead of abstract operation names, let's use a more "real world" example:

updateUIToLoadingStatus();
readAndParseFile();
updateUIWithContentOfFile();

Notice that the first and last operations must be conducted on the EDT, but the middle operation might be really slow! Since updateUIWithContentOfFile needs readAndParseFile to complete before it runs, just spawning a new thread won't be enough. Our instinctive approach is to do something like this:

updateUIToLoadingStatus();
new Thread() {
    public void run() {
        readAndParseFile();
        updateUIWithContentOfFile();
    }
}.start();

But updateUIWithContentOfFile should be executed on the EDT, not on a random thread. So the right way to do this would be something like:

updateUIToLoadingStatus();
new Thread() {
    public void run() {
        readAndParseFile();
        Display.getInstance().callSerially(new Runnable() {
            public void run() {
                updateUIWithContentOfFile();
            }
        });
    }
}.start();

This is perfectly legal and works reasonably well. However, it gets complicated as we add more and more features that need to be chained serially; after all, these are just three methods! invokeAndBlock solves this in a unique way; you can get almost exactly the same behavior with:

updateUIToLoadingStatus();
Display.getInstance().invokeAndBlock(new Runnable() {
    public void run() {
        readAndParseFile();
    }
});
updateUIWithContentOfFile();

invokeAndBlock effectively blocks the current EDT in a legal way. It spawns a separate thread that runs the run() method, and when that run() method completes it goes back to the EDT. All events and EDT behavior still work while invokeAndBlock is running, because invokeAndBlock() keeps calling the main thread loop internally. Notice that this comes at a slight performance penalty, and that nesting invokeAndBlock calls (or overusing them) isn't recommended. However, they are very convenient when working with multiple threads and UI.

Why Would I Invoke callSerially When I'm Already On The EDT?

We discussed callSerially in the previous post, but one of the misunderstood topics is why we would ever want to invoke this method when we are still on the EDT. The original version of LWUIT used to throw an IllegalArgumentException if callSerially was invoked on the EDT, since it seemed to make no sense. However, it does make some sense, and we can explain that with an example. Say we have a button with quite a bit of functionality tied to its events: a user added an action listener to show a dialog; a framework the user installed added some logging to the button; and the button repaints a release animation as it's being released. This might cause a problem if the first event we handle (the dialog) interferes with the events that follow: a dialog will block the EDT (using invokeAndBlock), and while events keep happening, the event we are in has "already happened", so the button repaint and the framework logging won't occur. The same can happen if we show a form that triggers logic relying on the current form still being present. One of the solutions to this problem is to just wrap the action listener's body with a callSerially. In this case the callSerially will postpone the event to the next cycle (loop) of the EDT and let the other events in the chain complete. Notice that you shouldn't do this normally, since it adds overhead and complicates application flow; however, when you run into issues in event processing, I suggest trying it to see if it's the cause.
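To make that last suggestion concrete, here is a minimal sketch of an action listener wrapped this way; the button and the dialog contents are hypothetical, not from the original post:

// Classes here come from the Codename One API (com.codename1.ui.*)
button.addActionListener(new ActionListener() {
    public void actionPerformed(ActionEvent evt) {
        // Postpone the real work to the next EDT cycle so the remaining
        // listeners in the chain (logging, release repaint) run first.
        Display.getInstance().callSerially(new Runnable() {
            public void run() {
                Dialog.show("Info", "Operation completed", "OK", null);
            }
        });
    }
});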
Reference: CallSerially The EDT & InvokeAndBlock (Part 2) from our JCG partner Shai Almog at the Codename One blog....

Use reactive streams API to combine akka-streams with rxJava

Just a quick article this time, since I'm still experimenting with this stuff. There is a lot of talk around reactive programming. In Java 8 we have the Stream API, we have rxJava, we have Ratpack, and Akka has akka-streams. The main issue with these implementations is that they aren't compatible: you can't connect the subscriber of one implementation to the publisher of another. Luckily, an initiative has been started to let these different implementations work together:

"It is the intention of this specification to allow the creation of many conforming implementations, which by virtue of abiding by the rules will be able to interoperate smoothly, preserving the aforementioned benefits and characteristics across the whole processing graph of a stream application."

From – http://www.reactive-streams.org/

How does this work?

Now how do we do this? Let's look at a quick example based on the examples provided with akka-streams (from here), in the following listing:

package sample.stream

import akka.actor.ActorSystem
import akka.stream.FlowMaterializer
import akka.stream.scaladsl.{SubscriberSink, PublisherSource, Source}
import rx.RxReactiveStreams
import rx.Observable
import scala.collection.JavaConverters._

object BasicTransformation {

  def main(args: Array[String]): Unit = {

    // define an implicit actor system and import the implicit dispatcher
    implicit val system = ActorSystem("Sys")
    import system.dispatcher

    // the flow materializer determines how the stream is realized,
    // in this case as a flow between actors
    implicit val materializer = FlowMaterializer()

    // input text for the stream
    val text =
      """|Lorem Ipsum is simply dummy text of the printing and typesetting industry.
         |Lorem Ipsum has been the industry's standard dummy text ever since the 1500s,
         |when an unknown printer took a galley of type and scrambled it to make a type
         |specimen book.""".stripMargin

    // create an observable from a simple list (this is in rxJava style)
    val first = Observable.from(text.split("\\s").toList.asJava)
    // convert the rxJava observable to a publisher
    val publisher = RxReactiveStreams.toPublisher(first)
    // based on the publisher create an akka source
    val source = PublisherSource(publisher)

    // now use the akka style syntax to stream the data from the source
    // to the sink (in this case println)
    source.
      map(_.toUpperCase).   // executed as actors
      filter(_.length > 3).
      foreach { el =>       // the sink/consumer
        println(el)
      }.
      onComplete(_ => system.shutdown()) // lifecycle event
  }
}

The code comments in this example explain pretty much what is happening. What we do here is create an rxJava-based Observable, convert that Observable to a Reactive Streams publisher, and use this publisher to create an akka-streams source. For the rest of the code we can use the akka-streams flow API to model the stream; in this case we just do some mapping and filtering and print out the result.
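As a closing note (not part of the original example): the contract that makes this interop possible is remarkably small. The org.reactivestreams API consists of just four interfaces, sketched here in Java:

public interface Publisher<T> {
    void subscribe(Subscriber<? super T> s);
}

public interface Subscriber<T> {
    void onSubscribe(Subscription s); // handshake, hands over the back-pressure valve
    void onNext(T t);                 // at most as many calls as were requested
    void onError(Throwable t);        // terminal failure signal
    void onComplete();                // terminal success signal
}

public interface Subscription {
    void request(long n); // back-pressure: ask for n more elements
    void cancel();
}

public interface Processor<T, R> extends Subscriber<T>, Publisher<R> {
}

Any library that implements these interfaces (and the rules that accompany them) can be plugged into any other, which is exactly what RxReactiveStreams.toPublisher provides for rxJava in the example above.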
Reference: Use reactive streams API to combine akka-streams with rxJava from our JCG partner Jos Dirksen at the Smart Java blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.