Scala Tutorial – iteration, for expressions, yield, map, filter, count

Preface

This is part 4 of tutorials for first-time programmers getting into Scala. Other posts are on this blog, and you can get links to those and other resources on the links page of the Computational Linguistics course I'm creating these for. Additionally, you can find this and other tutorial series on the JCG Java Tutorials page. This tutorial departs from the very beginner nature of the previous three, so it may be of more interest to readers who already have some programming experience in another language. (Though also see the section on using matching in Scala in Part 3.)

Iteration, the Scala way(s)

Up to now, we have (mostly) accessed individual items on a list by using their indices. But one of the most natural things to do with a list is to repeat some action for each item on it, for example: "For each word in the given list of words: print it". Here is how to say this in Scala.

scala> val animals = List("newt", "armadillo", "cat", "guppy")
animals: List = List(newt, armadillo, cat, guppy)

scala> animals.foreach(println)
newt
armadillo
cat
guppy

This says to take each element of the list (indicated by foreach) and apply a function (in this case, println) to it, in order. There is some underspecification going on in that we aren't providing a variable to name the elements. This works in some cases, such as above, but won't always be possible. Here is how it looks in full, with a variable naming the element.

scala> animals.foreach(animal => println(animal))
newt
armadillo
cat
guppy

This is useful when you need to do a bit more, such as concatenating a String element with another String.

scala> animals.foreach(animal => println("She turned me into a " + animal))
She turned me into a newt
She turned me into a armadillo
She turned me into a cat
She turned me into a guppy

Or, if you are performing a computation with each element, like outputting the length of each element in a list of strings.
scala> animals.foreach(animal => println(animal.length))
4
9
3
5

We can obtain the same result as foreach using a for expression.

scala> for (animal <- animals) println(animal.length)
4
9
3
5

With what we have been doing so far, these two ways of expressing the pattern of iterating over the elements of a List are equivalent. However, they are different: a for expression with yield returns a value, whereas foreach simply performs some function on every element of the list. This latter kind of use is termed a side-effect: by printing out each element, we are not creating new values, we are just performing an action on each element. With for expressions, we can yield values that create transformed Lists. For example, contrast using println with the following.

scala> val lengths = for (animal <- animals) yield animal.length
lengths: List[Int] = List(4, 9, 3, 5)

The result is a new list that contains the lengths (number of characters) of each of the elements of the animals list. (You can of course print its contents now by doing lengths.foreach(println), but typically we want to do other, usually more interesting, things with such values.) What we just did was map the values of animals into a new set of values in a one-to-one manner, using the function length. Lists have another function called map that does this directly.

scala> val lengthsMapped = animals.map(animal => animal.length)
lengthsMapped: List[Int] = List(4, 9, 3, 5)

So the for-yield expression and the map method achieve the same output, and in many cases they are pretty much equivalent. Using map, however, is often more convenient because you can easily chain a series of operations together. For example, let's say you want to add 1 to each element of a List of numbers and then square the result, turning List(1,2,3) into List(2,3,4) and then into List(4,9,16). You can do that quite easily using map.

nums.map(x => x+1).map(x => x*x)

Some readers will be puzzled by what was just done.
Here it is more explicitly, using an intermediate variable nums2 to store the add-one list.

scala> val nums2 = nums.map(x => x+1)
nums2: List[Int] = List(2, 3, 4)

scala> nums2.map(x => x*x)
res9: List[Int] = List(4, 9, 16)

Since nums.map(x => x+1) returns a List, we don't have to assign it to a variable in order to use it; we can just use it immediately, including calling another map on it. (Of course, one could do this computation in a single go, e.g. map(x => (x+1)*(x+1)), but often one is using a series of built-in functions, or functions one has predefined already.) You can keep on mapping to your heart's content, including mapping from Ints to Strings.

scala> nums.map(x => x+1).map(x => x*x).map(x => x-1).map(x => x*(-1)).map(x => "The answer is: " + x)
res12: List = List(The answer is: -3, The answer is: -8, The answer is: -15)

Note: the particular name x used in all these cases is not important. The variables could have been named x, y, z or turlingdromes42: any valid variable name will do.

Iterating through multiple lists

Sometimes you have two lists that are paired up and you need to do something to elements from each list simultaneously. For example, let's say you have a list of word tokens and another list with their parts-of-speech. (See the previous tutorial for discussion of parts-of-speech.)

scala> val tokens = List("the", "program", "halted")
tokens: List = List(the, program, halted)

scala> val tags = List("DT","NN","VB")
tags: List = List(DT, NN, VB)

Now, let's say we want to output these as the following string:

the/DT program/NN halted/VB

Initially, we'll do it a step at a time, and then show how it can be done all in one line. First, we use the zip function to bring the two lists together and get a new list of pairs of elements from each list.

scala> val tokenTagPairs = tokens.zip(tags)
tokenTagPairs: List[(java.lang.String, java.lang.String)] = List((the,DT), (program,NN), (halted,VB))

Zipping two lists together in this way is a common pattern for iterating over two lists.
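One caveat worth knowing about zip that the transcript above doesn't show: zip stops at the length of the shorter list and silently drops the extra elements of the longer one. A minimal sketch (runnable as a Scala script; the variable names here are just for illustration):

```scala
// zip pairs elements positionally and truncates to the shorter list
val tokens = List("the", "program", "halted")
val shortTags = List("DT", "NN") // one tag missing

val pairs = tokens.zip(shortTags)
println(pairs) // List((the,DT), (program,NN)) -- "halted" is dropped
assert(pairs.length == 2)
```

If the two lists are supposed to line up exactly, it can be worth checking that tokens.length == tags.length before zipping, rather than letting a mismatch pass silently.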
Now that we have a list of token-tag pairs, we can use a for expression to turn it into a List of strings.

scala> val tokenTagSlashStrings = for ((token, tag) <- tokenTagPairs) yield token + "/" + tag
tokenTagSlashStrings: List = List(the/DT, program/NN, halted/VB)

Now we just need to turn that list of strings into a single string by concatenating all its elements with a space between each. The function mkString makes this easy.

scala> tokenTagSlashStrings.mkString(" ")
res19: String = the/DT program/NN halted/VB

Finally, here it all is in one step.

scala> (for ((token, tag) <- tokens.zip(tags)) yield token + "/" + tag).mkString(" ")
res23: String = the/DT program/NN halted/VB

Ripping a string into a useful data structure

It is common in computational linguistics to need to convert string inputs into useful data structures. Consider the part-of-speech tagged sentence mentioned in the previous tutorial. Let's begin by assigning it to the variable sentRaw.

val sentRaw = "The/DT index/NN of/IN the/DT 100/CD largest/JJS Nasdaq/NNP financial/JJ stocks/NNS rose/VBD modestly/RB as/IN well/RB ./."

Now, let's turn it into a List of Tuples, where each Tuple has the word as its first element and the postag as its second. We begin with the single line that does this so that you can see what the desired result is, and then we'll examine each step in detail.

scala> val tokenTagPairs = sentRaw.split(" ").toList.map(x => x.split("/")).map(x => Tuple2(x(0), x(1)))
tokenTagPairs: List[(java.lang.String, java.lang.String)] = List((The,DT), (index,NN), (of,IN), (the,DT), (100,CD), (largest,JJS), (Nasdaq,NNP), (financial,JJ), (stocks,NNS), (rose,VBD), (modestly,RB), (as,IN), (well,RB), (.,.))

Let's take each of these operations in turn. The first split cuts sentRaw at each space character and returns an Array of Strings, where each element is the material between the spaces.
scala> sentRaw.split(" ")
res0: Array = Array(The/DT, index/NN, of/IN, the/DT, 100/CD, largest/JJS, Nasdaq/NNP, financial/JJ, stocks/NNS, rose/VBD, modestly/RB, as/IN, well/RB, ./.)

What's an Array? It's a kind of sequence, like List, but it has some different properties that we'll discuss later. For now, let's stick with Lists, which we can do by using the toList method. Additionally, let's assign the result to a variable so that the remaining operations are easier to focus on.

scala> val tokenTagSlashStrings = sentRaw.split(" ").toList
tokenTagSlashStrings: List = List(The/DT, index/NN, of/IN, the/DT, 100/CD, largest/JJS, Nasdaq/NNP, financial/JJ, stocks/NNS, rose/VBD, modestly/RB, as/IN, well/RB, ./.)

Now we need to turn each of the elements in that list into pairs of token and tag. Let's first consider a single element, turning something like "The/DT" into the pair ("The","DT"). The next lines show how to do this one step at a time, using intermediate variables.

scala> val first = "The/DT"
first: java.lang.String = The/DT

scala> val firstSplit = first.split("/")
firstSplit: Array = Array(The, DT)

scala> val firstPair = Tuple2(firstSplit(0), firstSplit(1))
firstPair: (java.lang.String, java.lang.String) = (The,DT)

So firstPair is a tuple representing the information encoded in the string first. This involved two operations: splitting, and then creating a tuple from the Array that resulted from the split. We can do this for all of the elements in tokenTagSlashStrings using map. Let's first convert the Strings into Arrays.

scala> val tokenTagArrays = tokenTagSlashStrings.map(x => x.split("/"))
res0: List[Array] = List(Array(The, DT), Array(index, NN), Array(of, IN), Array(the, DT), Array(100, CD), Array(largest, JJS), Array(Nasdaq, NNP), Array(financial, JJ), Array(stocks, NNS), Array(rose, VBD), Array(modestly, RB), Array(as, IN), Array(well, RB), Array(., .))

And finally, we turn the Arrays into Tuple2s and get the result we obtained with the one-liner earlier.
scala> val tokenTagPairs = tokenTagArrays.map(x => Tuple2(x(0), x(1)))
tokenTagPairs: List[(java.lang.String, java.lang.String)] = List((The,DT), (index,NN), (of,IN), (the,DT), (100,CD), (largest,JJS), (Nasdaq,NNP), (financial,JJ), (stocks,NNS), (rose,VBD), (modestly,RB), (as,IN), (well,RB), (.,.))

Note: if you are comfortable with using one-liners that chain a bunch of operations together, then by all means use them. However, there is no shame in using several lines involving a bunch of intermediate variables if that helps you break the task apart and get the result you need.

One of the very useful things about having a List of pairs (Tuple2s) is that the unzip function gives us back two Lists, one with all of the first elements and another with all of the second elements.

scala> val (tokens, tags) = tokenTagPairs.unzip
tokens: List = List(The, index, of, the, 100, largest, Nasdaq, financial, stocks, rose, modestly, as, well, .)
tags: List = List(DT, NN, IN, DT, CD, JJS, NNP, JJ, NNS, VBD, RB, IN, RB, .)

With this, we've come full circle. Having started with a raw string (such as we are likely to read in from a text file), we now have Lists that allow us to do useful computations, such as converting those tags into another form.

Providing a function you have defined to map

Let's return to the postag simplification exercise we did in the previous tutorial. We'll modify it a bit: rather than shortening the Penn Treebank parts-of-speech, let's convert them to course parts-of-speech using the English words that most people are familiar with, like noun and verb. The following function turns Penn Treebank tags into these course tags, covering more tags than we did in the last tutorial (note: it is still incomplete, but serves to illustrate the point).
def coursePos(tag: String) = tag match {
  case "NN" | "NNS" | "NNP" | "NNPS" => "Noun"
  case "JJ" | "JJR" | "JJS" => "Adjective"
  case "VB" | "VBD" | "VBG" | "VBN" | "VBP" | "VBZ" | "MD" => "Verb"
  case "RB" | "RBR" | "RBS" | "WRB" | "EX" => "Adverb"
  case "PRP" | "PRP$" | "WP" | "WP$" => "Pronoun"
  case "DT" | "PDT" | "WDT" => "Article"
  case "CC" => "Conjunction"
  case "IN" | "TO" => "Preposition"
  case _ => "Other"
}

We can now map this function over the parts of speech in the collection obtained previously.

scala> tags.map(coursePos)
res1: List = List(Article, Noun, Preposition, Article, Other, Adjective, Noun, Adjective, Noun, Verb, Adverb, Preposition, Adverb, Other)

Voila! If we want to convert the tags in this manner and then output them as a string like the one we started with, it's just a few steps. We'll start from the beginning and recap. Try running the following for yourself.

val sentRaw = "The/DT index/NN of/IN the/DT 100/CD largest/JJS Nasdaq/NNP financial/JJ stocks/NNS rose/VBD modestly/RB as/IN well/RB ./."

val (tokens, tags) = sentRaw.split(" ").toList.map(x => x.split("/")).map(x => Tuple2(x(0), x(1))).unzip

tokens.zip(tags.map(coursePos)).map(x => x._1 + "/" + x._2).mkString(" ")

A further point: when you provide expressions like (x => x+1) to map, you are actually defining an anonymous function! Here is the same map operation with different levels of specification.

scala> val numbers = (1 to 5).toList
numbers: List[Int] = List(1, 2, 3, 4, 5)

scala> numbers.map(1+)
res11: List[Int] = List(2, 3, 4, 5, 6)

scala> numbers.map(_+1)
res12: List[Int] = List(2, 3, 4, 5, 6)

scala> numbers.map(x => x+1)
res13: List[Int] = List(2, 3, 4, 5, 6)

scala> numbers.map((x: Int) => x+1)
res14: List[Int] = List(2, 3, 4, 5, 6)

So it's all consistent: whether you pass in a named function or an anonymous function, map will apply it to each element in the list. Finally, note that you can use that final form to define a function.
scala> def addOne = (x: Int) => x + 1
addOne: (Int) => Int

scala> addOne(1)
res15: Int = 2

This is similar to defining functions as we did previously (e.g. def addOne(x: Int) = x+1), but it is more convenient in certain contexts, which we'll get to later. For now, the thing to realize is that whenever you map, you are either using a function that already exists or creating one on the fly.

Filtering and counting

The map method is a convenient way of performing computations on each element of a List, effectively transforming a List with one set of values into a new List with values computed from each corresponding element. There are yet more methods with other behaviors, such as removing elements from a List (filter), counting the number of elements satisfying a given predicate (count), and computing a single aggregate result from all elements in a List (reduce and fold).

Let's consider a simple task: counting how many tokens are not a noun or adjective in a tagged sentence. As a starting point, let's take the list of mapped postags from before.

scala> val courseTags = tags.map(coursePos)
courseTags: List = List(Article, Noun, Preposition, Article, Other, Adjective, Noun, Adjective, Noun, Verb, Adverb, Preposition, Adverb, Other)

One way of doing this is to filter out all of the nouns and adjectives to obtain a list without them and then get its length.

scala> val noNouns = courseTags.filter(x => x != "Noun")
noNouns: List = List(Article, Preposition, Article, Other, Adjective, Adjective, Verb, Adverb, Preposition, Adverb, Other)

scala> val noNounsOrAdjectives = noNouns.filter(x => x != "Adjective")
noNounsOrAdjectives: List = List(Article, Preposition, Article, Other, Verb, Adverb, Preposition, Adverb, Other)

scala> noNounsOrAdjectives.length
res8: Int = 9

However, because the predicate given to filter just returns a Boolean value, we can of course use Boolean conjunction and disjunction to simplify things. And we don't need to save intermediate variables. Here's the one-liner.
scala> courseTags.filter(x => x != "Noun" && x != "Adjective").length
res9: Int = 9

If all we want is the number of elements, we can instead just use count with the same predicate.

scala> courseTags.count(x => x != "Noun" && x != "Adjective")
res10: Int = 9

As an exercise, try writing a one-liner that starts with sentRaw and produces the value "resX: Int = 9" (where X is whatever number you get in your Scala REPL). In the next tutorial, we'll see how to use reduce and fold to compute aggregate results from a List.

Reference: First steps in Scala for beginning programmers, Part 4 from our JCG partner Jason Baldridge at the Bcomposes blog.
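For readers who want to check their answer to the exercise above, here is one possible solution (a sketch written as a self-contained script rather than REPL input; the assert line is just a sanity check, not part of the exercise):

```scala
// One possible solution to the exercise: from the raw tagged string,
// count the tokens whose coarse tag is neither "Noun" nor "Adjective".
def coursePos(tag: String) = tag match {
  case "NN" | "NNS" | "NNP" | "NNPS" => "Noun"
  case "JJ" | "JJR" | "JJS" => "Adjective"
  case "VB" | "VBD" | "VBG" | "VBN" | "VBP" | "VBZ" | "MD" => "Verb"
  case "RB" | "RBR" | "RBS" | "WRB" | "EX" => "Adverb"
  case "PRP" | "PRP$" | "WP" | "WP$" => "Pronoun"
  case "DT" | "PDT" | "WDT" => "Article"
  case "CC" => "Conjunction"
  case "IN" | "TO" => "Preposition"
  case _ => "Other"
}

val sentRaw = "The/DT index/NN of/IN the/DT 100/CD largest/JJS Nasdaq/NNP financial/JJ stocks/NNS rose/VBD modestly/RB as/IN well/RB ./."

// split on spaces, take the tag after the "/", map to the course tag, count
val answer = sentRaw.split(" ").toList
  .map(x => coursePos(x.split("/")(1)))
  .count(x => x != "Noun" && x != "Adjective")

println(answer) // 9
assert(answer == 9)
```

Note that there are many equivalent pipelines; any chain of split, map and count that ends with the predicate above will give the same answer.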

JBoss AS 7.0.2 “Arc” released – Playing with bind options

More good news on the JBoss AS7 front: JBoss AS 7.0.2.Final "Arc" has been released! It's been one month since AS 7.0.1 was released, and within this short period numerous bugs have been fixed and more features and improvements implemented. All of these bug fixes and features are included in the 7.0.2 release, which mainly consists of the following features/improvements:

- JSF 2.1
- Async EJB support
- Resurrected -b option for command line binding
- SSO support
- JNDI memory footprint improvement
- Limited support for Hibernate 3.3

Let's quickly look at one of these improvements. Those of you who have used previous versions of JBoss AS will know that starting with JBoss AS 4.2.x, JBoss by default binds its services to localhost for security reasons. Those versions allowed a command line option "-b" for binding the services to a different IP. AS 7.0.0 and 7.0.1 did not have this feature; users could still bind to an IP of their choice, but that required editing an XML file. Starting with the 7.0.2 release, the -b option is enabled again (and a "-bmanagement" option has also been introduced) to let you bind your server to an IP/host of your choice. So let's quickly see how it's done.

Download the server binary from here and extract it to a folder of your choice. Start the standalone server using the standalone.sh script (standalone.bat for Windows) available in the JBOSS_HOME/bin folder:

jpai@jpai-laptop:bin$ ./standalone.sh
...
18:45:36,893 INFO [org.jboss.as.remoting] (MSC service thread 1-3) Listening on /
18:45:37,030 INFO [org.apache.coyote.http11.Http11Protocol] (MSC service thread 1-4) Starting Coyote HTTP/1.1 on http--

In the logs you'll notice that the server is bound to the localhost IP (by default). What this means is that none of your services, including web access, will be accessible remotely from a different machine via your machine's IP address or hostname.
As a quick check, access the following pages:

http://localhost:8080
http://localhost:9990

The first is the default home page of your server and the second is the admin console. Now try accessing them using your machine's IP or hostname instead of localhost and you'll notice that they aren't accessible. Now let's see how to enable access via your machine's IP or hostname. Stop your running server and start it with the following command, passing the IP or hostname you want to bind to (shown here as the placeholder <IP>):

jpai@jpai-laptop:bin$ ./standalone.sh -b <IP>
...
18:47:24,588 INFO [org.jboss.as.remoting] (MSC service thread 1-1) Listening on /
18:47:24,818 INFO [org.apache.coyote.http11.Http11Protocol] (MSC service thread 1-2) Starting Coyote HTTP/1.1 on http--

Now you'll notice that the http interface (for your web applications) is bound to the IP you passed. However, the management interface (on which the admin console is exposed) still binds to localhost. So you'll be able to access the AS home page (and your applications) via that IP, while the admin console stays at http://localhost:9990. If you want to change the binding address of your management interface too, then you'll additionally have to use the -bmanagement option as follows:

jpai@jpai-laptop:bin$ ./standalone.sh -b <IP> -bmanagement <IP>
...
18:48:56,295 INFO [org.jboss.as.remoting] (MSC service thread 1-2) Listening on /
18:48:56,654 INFO [org.apache.coyote.http11.Http11Protocol] (MSC service thread 1-1) Starting Coyote HTTP/1.1 on http--

As you can see, the http interface and the management interface are now both bound to the IP address that you passed as an option to the startup script, so you'll now be able to access both the home page and the admin console via that IP.

That's it! So go get a fresh copy of 7.0.2 and start using it. If you run into any issues or have any suggestions, feel free to report them in our user forum.

Reference: JBoss AS 7.0.2 "Arc" released! from our JCG partner Jaikiran at the "Jaitech WriteUps" blog.

Android Game Development – Displaying Graphical Elements (Primitives) with OpenGL ES

This is part 2 of the Android OpenGL ES series. In the previous article we looked at how to set up the Android project to use the provided OpenGL view with our renderer. You can use the project from that article as a template for this one. Before we start displaying things, we must know a few basic concepts of 3D programming and familiarise ourselves with the terminology. It's basic geometry, really.

3D graphics happens in the Cartesian coordinate system, which means the coordinate system used has three dimensions: X, Y and Z. Traditionally, X goes from left to right, Y from bottom to top, and Z from you into the screen, so to speak. While we deal with objects to display (a robot, for example, or a car), OpenGL deals with components of these objects. Each object is created from primitives, which in the case of OpenGL are triangles. Every triangle has a face and a backface. A triangle is defined by 3 points in space. A point is called a vertex (plural: vertices). The following diagram shows 2 vertices, A and B.

[Diagram: Vertices]

I drew this diagram to show how we will differentiate between 2D and 3D. A vertex is defined by its X, Y and Z coordinates. If we always use 0 for the Z component, we have 2D. You can see that vertex A is part of the plane defined by X and Y. Vertex B is farther in on the Z coordinate. If you think of Z as a line perpendicular to the screen, we wouldn't even see B.

A triangle is called a primitive. A primitive is the simplest type OpenGL understands and is able to graphically represent. It is very simple: 3 vertices define a triangle. There are other primitives as well, like quads, but we'll stick to the basics. Every shape can be broken down into triangles. We mentioned the face of the triangle before. Why is it important? In 3D you will have objects with parts facing towards you, the player, and parts facing away from you.
In order to make drawing efficient, OpenGL will not draw the triangles facing away from you, as this is not necessary: they will be hidden by the triangles facing towards you anyway. This is called backface culling. How does OpenGL determine this? It is determined by the order of the vertices when drawing the triangle. If the order is counter-clockwise, it is a face (the green triangle). Clockwise order of the vertices means it is a backface (the red triangle). This is the default setting, but it can be changed of course. The following diagram illustrates just that.

[Diagram: Backface Culling]

The red triangle won't be drawn.

Creating and Drawing a Triangle

With all the theory understood, let's create a triangle and draw it. A triangle is defined by 3 vertices. The coordinates of the vertices are not measured in pixels. We will use floats to represent the values, and they will be relative to each other. If one side's length is 1.0f and another side's length is 0.5f, it means that the second side is half the length of the first. How big it will be displayed depends on how the viewport is set up. Imagine the viewport as a camera. When we use 2D, it means that the camera is orthogonal to the screen. If the camera is very close, the triangle will appear big; if it's far, the triangle will be small. Let's create the Triangle class.
package net.obviam.opengl;

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

import javax.microedition.khronos.opengles.GL10;

public class Triangle {

	private FloatBuffer vertexBuffer;	// buffer holding the vertices

	private float vertices[] = {
			-0.5f, -0.5f,  0.0f,	// V1 - first vertex (x,y,z)
			 0.5f, -0.5f,  0.0f,	// V2 - second vertex
			 0.0f,  0.5f,  0.0f	// V3 - third vertex
	};

	public Triangle() {
		// a float has 4 bytes so we allocate 4 bytes for each coordinate
		ByteBuffer vertexByteBuffer = ByteBuffer.allocateDirect(vertices.length * 4);
		vertexByteBuffer.order(ByteOrder.nativeOrder());

		// allocates the memory from the byte buffer
		vertexBuffer = vertexByteBuffer.asFloatBuffer();

		// fill the vertexBuffer with the vertices
		vertexBuffer.put(vertices);

		// set the cursor position to the beginning of the buffer
		vertexBuffer.position(0);
	}
}

The FloatBuffer field holds the vertices for our triangle. We need to use the java.nio package, as this is very intensive input/output. The vertices[] array holds the actual coordinates for the vertices. The triangle we will draw is represented in the following diagram. We calculate everything from the origin.

[Diagram: Triangle]

In the constructor we initialise the triangle from this vertices[] array: we fill the vertexBuffer with the coordinates and set the cursor's position to the beginning of the buffer. We will be using this buffer in the OpenGL call to display triangle strips. We currently have just one.
The GlRenderer

package net.obviam.opengl;

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

import android.opengl.GLU;
import android.opengl.GLSurfaceView.Renderer;

public class GlRenderer implements Renderer {

	private Triangle triangle;	// the triangle to be drawn

	/** Constructor */
	public GlRenderer() {
		this.triangle = new Triangle();
	}

	@Override
	public void onDrawFrame(GL10 gl) {
		// clear Screen and Depth Buffer
		gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);

		// Reset the Modelview Matrix
		gl.glLoadIdentity();

		// Drawing
		gl.glTranslatef(0.0f, 0.0f, -5.0f);	// move 5 units INTO the screen
							// is the same as moving the camera 5 units away
		triangle.draw(gl);			// Draw the triangle
	}

	@Override
	public void onSurfaceChanged(GL10 gl, int width, int height) {
		if (height == 0) {			// Prevent a divide by zero by
			height = 1;			// making height equal to one
		}

		gl.glViewport(0, 0, width, height);	// Reset the current viewport
		gl.glMatrixMode(GL10.GL_PROJECTION);	// Select the projection matrix
		gl.glLoadIdentity();			// Reset the projection matrix

		// Calculate the aspect ratio of the window
		GLU.gluPerspective(gl, 45.0f, (float) width / (float) height, 0.1f, 100.0f);

		gl.glMatrixMode(GL10.GL_MODELVIEW);	// Select the modelview matrix
		gl.glLoadIdentity();			// Reset the modelview matrix
	}

	@Override
	public void onSurfaceCreated(GL10 gl, EGLConfig config) {
	}
}

We create the triangle in the constructor. The onDrawFrame(GL10 gl) method is of the most interest to us. OpenGL works with state variables: every method we call on the OpenGL context changes its internal state. Following the onDrawFrame method, we see that every time a frame is drawn, the buffers are cleared, the ModelView matrix is reloaded (don't worry if you don't understand this at the moment), the camera is moved 5 units away (we're dealing with units here, not pixels) and the triangle's draw() method is called.
The onSurfaceChanged method, on the other hand, transitions the OpenGL context between a few states. First it sets the viewport to the current width and height of the surface (working with the GL_PROJECTION state), then it transitions the state to GL_MODELVIEW so we can work with our models (the triangle in our case). It will make sense later on, don't worry. Let's check out the draw method for the triangle:

public void draw(GL10 gl) {
	gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);

	// set the colour for the triangle
	gl.glColor4f(0.0f, 1.0f, 0.0f, 0.5f);

	// Point to our vertex buffer
	gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);

	// Draw the vertices as triangle strip
	gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3);

	// Disable the client state before leaving
	gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
}

Because we store the triangle's vertex coordinates in a FloatBuffer, we need to enable OpenGL to read from it and understand that it holds vertices; the glEnableClientState call does just that. glColor4f sets the colour for the entity (the triangle in our case) that will be drawn. Note that the RGBA values are floats between 0.0 and 1.0.

gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer); tells OpenGL to extract the vertices from vertexBuffer. The first parameter (value = 3) is the number of coordinates per vertex (x, y and z). The second lets OpenGL know what type of data the buffer holds. The third parameter is the offset (stride) between consecutive vertices; because we don't store extra data, our vertices follow each other and there is no offset. Finally, the last parameter is our buffer containing the vertices.

gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3); tells OpenGL to draw triangle strips from the buffer provided earlier, starting with the first element. It also lets it know how many vertices there are: vertices.length / 3, since each vertex has three coordinates.

That is it. Run the project and you should be able to see your first accelerated triangle.
[Screenshot: the rendered triangle]

Download the source here (obviam.opengl.p02.tgz). I was inspired by code from the nehe Android ports; to learn the guts of OpenGL, I warmly recommend the nehe tutorials. Next we will see how we can create basic 3D objects and rotate them. We will also find out how we can use textures on elements.

Reference: OpenGL ES Android – Displaying Graphical Elements (Primitives) from our JCG partner Tamas Jano from the "Against The Grain" blog.

GlassFish Response GZIP Compression in Production

A lot has been written about this, and it basically should be common knowledge, but talking to different people out there and looking at the efforts Google takes to improve page speed, it seems to me that the topic is worth a second and current look.

The basics

HTTP compression, otherwise known as content encoding, is a publicly defined way to compress textual content transferred from web servers to browsers. HTTP compression uses public domain compression algorithms, like gzip and compress, to compress XHTML, JavaScript, CSS, and other text files at the server. This standards-based method of delivering compressed content is built into HTTP 1.1, and most modern browsers that support HTTP 1.1 support ZLIB inflation of deflated documents. In other words, they can decompress compressed files automatically, which saves time and bandwidth.

But that's simple. What are the problems? In order to get your content compressed, you have to do this somewhere between the responding server and the client. Looking into this a little deeper, you find a couple of requirements. The compression should:

1) be fast
2) be proven in production
3) not slow down your appserver
4) be portable and not bound to an appserver

Let's go and have a more detailed look at what you could do in order to speed up your GlassFish a bit.

Testpage

I am running this with a simple test page: the "Edit Network Listener" page in GlassFish's Admin Console (http://localhost:4848/web/grizzly/networkListenerEdit.jsf?name=admin-listener&configName=server-config). The basic (uncompressed) response times for this page on my little machine, captured with Firebug:

Type   # Requests   Size (kb)   Time (ms)
css    11           120         125
js     12           460.7       130
html   3            324.3       727
all    52           1126.4      1380

GlassFish built-in compression

If you are running a GlassFish 3.x server, the most obvious option is to look at what it has to offer. You can simply "Enable HTTP/1.1 GZIP compression to save server bandwidth" ("Edit Network Listener" => HTTP => middle).
You simply add the compressible mime types you would like (the defaults plus text/css, text/javascript and application/javascript) and set a compression minimum size (in this case 1024 bytes). You do have to restart your instance for the changes to take effect.

Type    | # Requests | Size (kb) | Time (ms) | Change % size | Change % time
css     | 11 | 24.9  | 185  | -79.25 | 48.00
js      | 12 | 122.2 | 55   | -73.48 | -57.69
html    | 3  | 22.6  | 1470 | -93.03 | 102.20
all     | 52 | 272.4 | 2350 | -75.82 | 70.29
average |    |       |      | -80.39 | 40.70

Looking at the results, you see that you save an average of 80% of bandwidth using compression, but you also see that serving compressed content generally takes longer. What I also realized is that you have to play around with the settings for your mime types; it's helpful to check for individual files which mime type they actually have.

Apache mod_deflate

If you are not willing to put additional load on your application server (a quite common requirement), you can dispatch this job to something that knows how to handle HTTP: Apache's httpd. The module you are looking for is called mod_deflate, and you can simply load it along with your configuration. I assume you have something like mod_proxy in place to proxy all requests to GlassFish through your httpd. Comparing starts getting a bit tricky here: having mod_proxy in place means your response times drop a lot, so it would not be valid to compare against a direct request to GlassFish. What I did instead was compare the average response time against a non-deflated response via Apache; the size is compared against GlassFish compression.

Type    | # Requests | Size (kb) | Time (ms) | Change % size | Change % time
css     | 11 | 24.9  | 551  | -79.25 | -5.97
js      | 12 | 122.2 | 55   | -73.48 | 0.76
html    | 3  | 22.6  | 1470 | -93.62 | -1.29
all     | 52 | 272.4 | 2350 | -75.97 | -5.65
average |    |       |      | -80.58 | -3.04

Not a big surprise, right? Both use gzip compression, a quite common and well-known algorithm, so I did not expect any changes in compression effectiveness.
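For orientation, a minimal httpd configuration along the lines described might look like the fragment below. This is a sketch under stated assumptions: module paths, the proxy target, and the exact mime-type list are illustrative, not taken from the article.

```apache
# Illustrative sketch only: load mod_deflate alongside an existing mod_proxy setup
LoadModule deflate_module modules/mod_deflate.so

# Proxy requests through to the GlassFish instance (target URL is an assumption)
ProxyPass        / http://localhost:8080/
ProxyPassReverse / http://localhost:8080/

# Compress the text-based mime types measured above
AddOutputFilterByType DEFLATE text/html text/css text/javascript application/javascript

# Optional: raise the compression level from zlib's default to the maximum
DeflateCompressionLevel 9
```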
But what you do see is that compression runs considerably faster than on GlassFish. With an average overhead of roughly 3%, you can hardly feel any change. That's a plus! Another plus is that you can change the compression level with mod_deflate. Raising it from zlib's default to the highest level (9) gives you an extra bit of compression, but it is unlikely you will see more than 1% overall, which could also be measurement inaccuracy.

Google mod_pagespeed

Yeah, that would have been a good additional test. But I only have a Windows box running, and the binaries are still only supported on some flavors of Linux. So I need to skip it today.

Compression Filter

There are a lot of compression servlet filters out there. Back in the day, even BEA shipped one with their WebLogic. As of today, I would not use anything like this in production, for stability reasons. I strongly believe that there is not a single reason to let your appserver do any compression at all. Compressing content on the fly uses CPU time, and on an application server this time is better spent on other workload, especially because you usually don't have a bandwidth problem between your appserver and your DMZ httpd.

Reference: Response GZIP Compression with GlassFish in Production from our JCG partner Markus Eisele at Enterprise Software Development with Java. Related Articles :Multiple Tomcat Instances on Single Machine Zero-downtime Deployment (and Rollback) in Tomcat; a walkthrough and a checklist Debugging a Production Server – Eclipse and JBoss showcase Getting to know the ‘hosts’ file How to solve production problems...

SQL or NOSQL: That is the question?

So what’s the deal with NoSQL? Is NoSQL just a controversial buzzword? Could you imagine if the term ‘Object Oriented’ didn’t exist and instead architectures based on concepts such as encapsulation, polymorphism and inheritance were referred to as ‘NoProcedural’? Could you imagine if .net was called ‘NoJava’? Leinster was called ‘NoMunster’? Well, controversial name aside, a good way to appreciate the hype about NoSQL is to consider scalability, the classical non-functional architectural concern. In a classical OLTP architecture, when load increases and your JVM is under pressure, you need to scale. You have two choices:

- vertical scaling: adding more CPU power to your JVM
- horizontal scaling: adding more JVMs (usually on more boxes)

It's generally no problem to scale the business tier horizontally. Follow the J2EE / JEE specs, and unless you've done something crazy, your business tier will scale: just add more JVMs and load balance between them. However, while the business tier may be straightforward, the persistence tier ain't so easy. Say you are using a classical relational database (such as MySQL, SQL Server, DB2 or Oracle) for your persistence; you can't just add database machines the way you can add JVMs. Why not? Imagine trying to do SQL joins when the tables are no longer on the same machine but spread across different machines! Imagine trying to maintain ACID characteristics for your transactions when your database is split across several machines. Now imagine trying to do all that on 5 machines, then 50, 500, 5,000 machines. The more machines, the harder it gets. The leading relational databases will scale horizontally, but only so much. To get around this, an architect will usually consider:

- Scaling vertically: putting the database on the best hardware that can be afforded
- Partitioning out legacy data, thus reducing things like the size of index tables.
This will boost performance and put less pressure on the need to scale.
- Removing pressure from the database by caching more in the business tier
- Paying a DBA a lot of money!

But what if you run out of all possible database optimization options and you have to scale horizontally, not just to a few machines but to a few hundred, if not a few thousand? This is where NoSQL architectures become relevant. With a NoSQL database there is no strict schema. Everything is effectively collapsed into one very fat table, a bit like an old-school flat file, but where each row stores a huge amount of data. So, instead of having a table for Users and a table for Activities (representing users' activities), you put all the User information together in one fat row. This means there are no joins across tables. It also means there is a lot of data redundancy, which means more storage space is required. In addition, more computational power will be needed for writes. But because data that is used together is located in the very same place, within the same row, there are no complex joins and hence it is easier to scale. The computational requirement for reads is also lower, so reads can go faster.

Another advantage of NoSQL databases derives from the freedom of not being tied to a strict schema. You know that headache where a change to a data model can cause big problems? Well, since there is no strict schema with NoSQL, this problem does not exist. This makes the architecture more flexible and more extensible.

Right now, it's fair to say NoSQL is only relevant in a minority of architectures. But could this be another case of technical innovation driving business innovation, as we have seen with smart phones? There wasn't a need for smart phones, but the technical innovation provided business opportunities. I think the same could happen with NoSQL architectures. Take a step back from Computer Science and just think Science.
Science used to be hypothesis-centric; now it is becoming more and more data-centric. CERN, genome sequencing, climate change analysis: all involve tonnes and tonnes of data. Surely NoSQL architectures, allied with data processing technologies such as MapReduce / Hadoop, will open up new ways to do Science? So, any disadvantages to NoSQL architectures? Well, it's still an immature technology. Indexing and security models are just not as sophisticated as they are in classical relational databases. And because most of it comes from the open source community, the support is not as good as it is for relational databases. So don't throw out your SQL just yet! Reference: SQL or NOSQL that is the question? from our JCG partner Alex Staveley at Dublin’s Tech Blog. Related Articles :Cassandra vs MongoDB vs CouchDB vs Redis vs Riak vs HBase comparison Revving Up Your Hibernate Engine Quick tips for improving Java apps performance How to get C like performance in Java Java Best Practices Series...
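To make the article's "one fat row" idea concrete, here is a hedged Scala sketch. The class and field names (UserRow, ActivityRow, UserDocument, joinActivities) are invented for illustration and not part of any real NoSQL API: the relational style needs a join across two "tables", while the denormalized document already holds everything a read needs, at the price of redundancy and heavier writes.

```scala
// Hypothetical relational-style model: two "tables" that must be joined.
case class UserRow(id: Int, name: String)
case class ActivityRow(userId: Int, description: String)

// Hypothetical NoSQL-style "fat row": the user's activities are embedded
// directly in the user record, so reads need no join.
case class UserDocument(id: Int, name: String, activities: List[String])

// An in-memory stand-in for a relational join across the two tables.
def joinActivities(users: List[UserRow], acts: List[ActivityRow]): List[(String, String)] =
  for (u <- users; a <- acts; if a.userId == u.id) yield (u.name, a.description)

val users = List(UserRow(1, "Alice"))
val acts = List(ActivityRow(1, "login"), ActivityRow(1, "purchase"))
println(joinActivities(users, acts)) // List((Alice,login), (Alice,purchase))

// The denormalized read: everything is already in one place.
val doc = UserDocument(1, "Alice", List("login", "purchase"))
println(doc.activities) // List(login, purchase)
```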

Android Game Development – Switching from Canvas to OpenGL ES

It is about time we delved into the graphical capabilities of the Android platform. Android supports the OpenGL ES API, and needless to say, offloading graphics handling to a dedicated GPU is much more efficient than doing it on the CPU. Most Android devices have such a dedicated GPU. OpenGL is an API for writing 2D and 3D graphics that are rendered on the GPU. This frees up precious computing resources on the CPU for more complex physics, more entities, or anything else not related to graphics. There are a few notions that need to be understood, but I will introduce them as we bump into them during the course.

If you followed the articles on displaying graphics on an Android device, you already know that in order to display graphical elements, we need a surface and a renderer. We used a basic SurfaceView from which we obtained the Canvas, and we drew everything onto it by calling the supported draw methods from within our game loop. Using OpenGL is not much different. Android comes with a dedicated implementation of the SurfaceView interface for displaying images rendered by OpenGL.

Let's create the Android project the usual way (New Project Wizard). I call the activity simply Run. Check what the wizard has generated; it should look like this:

public class Run extends Activity {
    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
    }
}

Nothing special.

Creating the OpenGL renderer

Let's build the renderer. Create a class GlRenderer which implements the android.opengl.GLSurfaceView.Renderer interface.
It will look like this:

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

import android.opengl.GLSurfaceView.Renderer;

public class GlRenderer implements Renderer {

    @Override
    public void onDrawFrame(GL10 gl) {
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
    }

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    }
}

We need to implement the above 3 methods. Currently they are empty and do nothing.

onSurfaceCreated(GL10 gl, EGLConfig config) is called whenever the surface is created or recreated. It is important to bear the "recreated" bit in mind, as it means that every time the device goes to sleep and awakes, for example, the surface gets recreated. Because the context which holds the resources gets destroyed too, this is the place where we will load our resources (images for textures, etc.).

onSurfaceChanged(GL10 gl, int width, int height) is called whenever the surface size changes. This mainly affects our viewport. The viewport is just a rectangular region through which we see our game world.

onDrawFrame(GL10 gl) is called by the rendering thread to draw each frame. This is where all the drawing happens. We don't need to call it explicitly; Android creates a rendering thread for us, and that thread will call it.

Let's switch to the OpenGL renderer. Check out the new Run activity.

package net.obviam.opengl;

import android.app.Activity;
import android.opengl.GLSurfaceView;
import android.os.Bundle;
import android.view.Window;
import android.view.WindowManager;

public class Run extends Activity {

    /** The OpenGL view */
    private GLSurfaceView glSurfaceView;

    /** Called when the activity is first created.
     */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // requesting to turn the title OFF
        requestWindowFeature(Window.FEATURE_NO_TITLE);
        // making it full screen
        getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN,
                WindowManager.LayoutParams.FLAG_FULLSCREEN);

        // Initiate the OpenGL view and
        // create an instance with this activity
        glSurfaceView = new GLSurfaceView(this);

        // set our renderer to be the main renderer with
        // the current activity context
        glSurfaceView.setRenderer(new GlRenderer());
        setContentView(glSurfaceView);
    }

    /** Remember to resume the glSurface */
    @Override
    protected void onResume() {
        super.onResume();
        glSurfaceView.onResume();
    }

    /** Also pause the glSurface */
    @Override
    protected void onPause() {
        super.onPause();
        glSurfaceView.onPause();
    }
}

First we declare a GLSurfaceView member variable: this is our OpenGL view, provided by Android. When we instantiate it with new GLSurfaceView(this), we have to make it context aware; that is, the view gets access to the application environment. All we need to do is add our renderer to this view, which we do with setRenderer(new GlRenderer()), and finally setContentView(glSurfaceView) tells the activity to use our OpenGL view. The onResume() and onPause() methods are overridden and trigger the respective methods in our view.

You can run the application as an Android app and you should see a blank black screen. That is it. We have switched from the Canvas to the OpenGL renderer. Download the source code and eclipse project here (obviam.opengl.p01.tgz).

Reference: OpenGL ES with Android Tutorial – Switching from Canvas to OpenGL from our JCG partner Tamas Jano from “Against The Grain” blog. Do not forget to check out our new Android Game ArkDroid (screenshots below).
Your feedback will be more than helpful! Related Articles: Android Game Development Tutorials Introduction Android Game Development – The Game Idea Android Game Development – Create The Project Android Game Development – A Basic Game Architecture Android Game Development – A Basic Game Loop Android Game Development – Displaying Images with Android Android Game Development – Moving Images on Screen Android Game Development – The Game Loop Android Game Development – Measuring FPS Android Game Development – Sprite Animation Android Game Development – Particle Explosion Android Game Development – Design In-game Entities – The Strategy Pattern Android Game Development – Using Bitmap Fonts Android Game Development – Displaying Graphical Elements (Primitives) with OpenGL ES Android Game Development – OpenGL Texture Mapping Android Game Development – Design In-game Entities – The State Pattern Android Games Article Series...

Open Source Java Libraries and Frameworks – Benefits and Dangers

Everyone in the Java world seems to use various open source libraries and frameworks… and why not? There are hundreds available, covering virtually every type of programming problem you're likely to come across in today's programming landscape. This blog takes a quick look at the reasons for using open source artifacts and examines what could go wrong…

The first reason for using them is reduced cost: it's cheaper for your project to grab hold of an open source library than it is for you to write the same thing yourself.

The second reason for using open source artifacts is reduced cost: you get free support from a bunch of capable and enthusiastic developers, usually in the form of copious amounts of documentation and forums.

The third reason is reduced cost: you get free updates and enhancements from the open source community, and free bug fixes, although you don't get to choose which enhancements are added to the project. Some projects, such as Tomcat, have a mechanism for voting on what enhancements are made, but at the end of the day it comes down to what really interests the developers.

There are also a couple of unspoken reasons for using popular open source libraries and frameworks. Firstly, they make your CV look good: if open source project X is popular and you put it on your CV, then your chances of getting a pay rise or a better job will improve. Secondly, if you work on one of the open source projects, then you'll earn some kudos, which, again, makes your CV look good and improves the chances of increasing the size of your pay-packet.

There is an obvious downside to using open source artifacts: all projects have a natural life-cycle. New versions of libraries are released and old libraries become deprecated, falling out of use because the technology is too old, the developers have lost interest or moved on, or the rest of the community found something better and jumped on that bandwagon, deserting yours.
So, the problems of finding yourself saddled with retired and deprecated open source libraries are, firstly, extra cost: there's no support, no forum and no bug fixes. You're on your own. You can often manage to download the source code of retired projects and support it yourself, but that's not guaranteed, and it costs money.

The second problem of using deprecated code is extra cost: old code usually embodies obsolete architecture and patterns, which contain known flaws and problems; after all, that's why they're obsolete. Using obsolete patterns and architecture encourages, and in some cases forces, developers to write bad code, not because your developers are bad, but because that's just the way it is… For example, there are some very obsolete JSP tags that blatantly mix database calls with business and presentation logic, which is a well-known way of producing crummy, unmaintainable, spaghetti code.

The third problem is, believe it or not, extra cost: I've recently come across a project where the code was so old that there were JAR file clashes, with different JARs containing different versions of the same API being dragged into the classpath. Certain bits of the code used one version of the API whilst other bits used the other version. Eclipse didn't know what to make of it all.

There are also hidden costs: no one in their right mind wants to work on obsolete spaghetti code. It damages morale and saps the will to live, whilst damaging your ability to find that next, more highly paid, job. Plus, when people do leave, you've got the extra cost of finding and training their replacements. Never forget that the best people will be the first to leave, leaving you with the less experienced developers and again driving up your costs.

So, what can you do when faced with obsolete open source libraries and frameworks?

1) Do nothing: continue using the obsolete library and hope everything will be alright.
2) Scrap the whole project and start again from scratch: the Big Bang Theory.
3) Refactor vigorously to remove the obsolete open source code. This can also be seen as an opportunity to change the architecture of the application, update the programming practices of the team, and improve the code and the whole build process.

From the above, I guess you can figure out that I prefer option 3. Option 1 is very risky, but then again, so is option 2: starting from scratch wastes time simply re-inventing the wheel, and whilst you do that, you don't have a product; plus you may end up with as big a mess as you started with. Option 3 is evolution rather than revolution, and quite the most sensible way to go. Having said all this, I definitely won't stop using open source code…

Reference: The Benefits and Dangers of using Opensource Java Libraries and Frameworks from our JCG partner Roger Hughes at the “Captain’s Debug” blog. Related Articles :Are frameworks making developers dumb? Those evil frameworks and their complexity When Inheriting a Codebase, there are more questions than answers… Java Tools: Source Code Optimization and Analysis Java Tutorials and Android Tutorials list...

Scala Tutorial – conditional execution with if-else blocks and matching

Preface

This is part 3 of tutorials for first-time programmers getting into Scala. Other posts are on this blog, and you can get links to those and other resources on the links page of the Computational Linguistics course I’m creating these for. Additionally you can find this and other tutorial series on the JCG Java Tutorials page.

Conditionals

Variables come and variables go, and they take on different values depending on the input. We typically need to enact different behaviors conditioned on those values. For example, let's simulate a bartender in Austin who must make sure that he doesn't give alcohol to individuals under 21 years of age.

scala> def serveBeer (customerAge: Int) = if (customerAge >= 21) println("beer") else println("water")
serveBeer: (customerAge: Int)Unit

scala> serveBeer(23)
beer

scala> serveBeer(19)
water

What we've done here is a standard use of conditionals to produce one action or another, in this case just printing one message or another. The expression in the if (…) is a Boolean value, either true or false. You can see this by just doing the inequality directly:

scala> 19 >= 21
res7: Boolean = false

And these expressions can be combined according to the standard rules for conjunction and disjunction of Booleans. Conjunction is indicated with && and disjunction with ||.

scala> 19 >= 21 || 5 > 2
res8: Boolean = true

scala> 19 >= 21 && 5 > 2
res9: Boolean = false

To check equality, use ==.

scala> 42 == 42
res10: Boolean = true

scala> "the" == "the"
res11: Boolean = true

scala> 3.14 == 6.28
res12: Boolean = false

scala> 2*3.14 == 6.28
res13: Boolean = true

scala> "there" == "the" + "re"
res14: Boolean = true

The equality operator == is different from the assignment operator =, and you'll get an error if you attempt to use = for equality tests.

scala> 5 = 5
<console>:1: error: ';' expected but '=' found.
       5 = 5
         ^

scala> x = 5
<console>:10: error: not found: value x
       val synthvar$0 = x
                        ^
<console>:7: error: not found: value x
       x = 5
         ^

The first example is completely bad because we cannot hope to assign a value to a constant like 5. With the latter example, the error complains about not finding a value x. That's because it is a valid construct, assuming that a var variable x has been previously defined.

scala> var x = 0
x: Int = 0

scala> x = 5
x: Int = 5

Recall that with var variables, it is possible to assign them a new value. However, it is actually not necessary to use vars much of the time, and there are many advantages to sticking with vals. I'll be helping you think in these terms as we go along. For now, try to ignore the fact that vars exist in the language!

Back to conditionals. First, here are more comparison operators:

x == y (x is equal to y)
x != y (x does not equal y)
x > y (x is larger than y)
x < y (x is less than y)
x >= y (x is equal to y, or larger than y)
x <= y (x is equal to y, or less than y)

These operators work on any type that has a natural ordering, including Strings.

scala> "armadillo" < "bear"
res25: Boolean = true

scala> "armadillo" < "Bear"
res26: Boolean = false

scala> "Armadillo" < "Bear"
res27: Boolean = true

Clearly, this isn't the usual alphabetic ordering you are used to. Instead it is based on ASCII character encodings.

A very beautiful and useful thing about conditionals in Scala is that they return a value. So, the following is a valid way to set the values of the variables x and y.

scala> val x = if (true) 1 else 0
x: Int = 1

scala> val y = if (false) 1 else 0
y: Int = 0

Not so impressive here, but let's return to the bartender. Rather than having the serveBeer function print a String, we can have it return a String representing a beverage: "beer" in the case of a 21+ year old and "water" otherwise.
scala> def serveBeer (customerAge: Int) = if (customerAge >= 21) "beer" else "water"
serveBeer: (customerAge: Int)java.lang.String

scala> serveBeer(42)
res21: java.lang.String = beer

scala> serveBeer(20)
res22: java.lang.String = water

Notice how the first serveBeer function returned Unit but this one returns a String. Unit means that no value is returned; in general this is to be discouraged for reasons we won't get into here. Regardless, the general pattern of conditional assignment shown above is something you'll be using a lot.

Conditionals can also have more than just the single if and else. For example, let's say that the bartender serves age-appropriate drinks to each customer: 21+ get beer, teenagers get soda and little kids get juice.

scala> def serveDrink (customerAge: Int) = {
     |   if (customerAge >= 21) "beer"
     |   else if (customerAge >= 13) "soda"
     |   else "juice"
     | }
serveDrink: (customerAge: Int)java.lang.String

scala> serveDrink(42)
res35: java.lang.String = beer

scala> serveDrink(16)
res36: java.lang.String = soda

scala> serveDrink(6)
res37: java.lang.String = juice

And of course, the Boolean expressions in any of the ifs or else ifs can be complex conjunctions and disjunctions of smaller expressions. Let's now consider a computational linguistics oriented example that can take advantage of that, and which we will continue to build on in later tutorials.

Everybody (hopefully) knows what a part-of-speech is. (If not, go check out Grammar Rock on YouTube.) In computational linguistics, we tend to use very detailed tagsets that go far beyond "noun", "verb", "adjective" and so on. For example, the tagset from the Penn Treebank uses NN for singular nouns (table), NNS for plural nouns (tables), NNP for singular proper nouns (John), and NNPS for plural proper nouns (Vikings). Here's an annotated sentence with postags from the first sentence of the Wall Street Journal portion of the Penn Treebank, in the format word/postag.
The/DT index/NN of/IN the/DT 100/CD largest/JJS Nasdaq/NNP financial/JJ stocks/NNS rose/VBD modestly/RB as/IN well/RB ./.

We'll see how to process these en masse shortly, but for now, let's build a function that turns single tags like "NNP" into "NN" and "JJS" into "JJ", using conditionals. We'll let all the other postags stay as they are. We'll start with a suboptimal solution, and then refine it. The first thing you might try is to create a case for every full-form tag and output its corresponding shortened tag.

scala> def shortenPos (tag: String) = {
     |   if (tag == "NN") "NN"
     |   else if (tag == "NNS") "NN"
     |   else if (tag == "NNP") "NN"
     |   else if (tag == "NNPS") "NN"
     |   else if (tag == "JJ") "JJ"
     |   else if (tag == "JJR") "JJ"
     |   else if (tag == "JJS") "JJ"
     |   else tag
     | }
shortenPos: (tag: String)java.lang.String

scala> shortenPos("NNP")
res47: java.lang.String = NN

scala> shortenPos("JJS")
res48: java.lang.String = JJ

So, it's doing the job, but there is a lot of redundancy; in particular, the return value is the same for many cases. We can use disjunctions to deal with this.

def shortenPos2 (tag: String) = {
  if (tag == "NN" || tag == "NNS" || tag == "NNP" || tag == "NNPS") "NN"
  else if (tag == "JJ" || tag == "JJR" || tag == "JJS") "JJ"
  else tag
}

These are logically equivalent. There is an easier way of doing this, using properties of Strings. Here, the startsWith method is very useful.

scala> "NNP".startsWith("NN")
res51: Boolean = true

scala> "NNP".startsWith("VB")
res52: Boolean = false

We can use this to simplify the postag-shortening function.

def shortenPos3 (tag: String) = {
  if (tag.startsWith("NN")) "NN"
  else if (tag.startsWith("JJ")) "JJ"
  else tag
}

This makes it very easy to add an additional condition that collapses all of the verb tags to "VB". (Left as an exercise.)

A final note on conditional assignments: they can return anything you like, so, for example, the following are all valid.
For example, here is a (very) simple (and very imperfect) English stemmer that returns the stem and the suffix.

scala> def splitWord (word: String) = {
     |   if (word.endsWith("ing")) (word.slice(0,word.length-3), "ing")
     |   else if (word.endsWith("ed")) (word.slice(0,word.length-2), "ed")
     |   else if (word.endsWith("er")) (word.slice(0,word.length-2), "er")
     |   else if (word.endsWith("s")) (word.slice(0,word.length-1), "s")
     |   else (word,"")
     | }
splitWord: (word: String)(String, java.lang.String)

scala> splitWord("walked")
res10: (String, java.lang.String) = (walk,ed)

scala> splitWord("walking")
res11: (String, java.lang.String) = (walk,ing)

scala> splitWord("booking")
res12: (String, java.lang.String) = (book,ing)

scala> splitWord("baking")
res13: (String, java.lang.String) = (bak,ing)

If we want to work with the stem and suffix directly as variables, we can assign them straight away.

scala> val (stem, suffix) = splitWord("walked")
stem: String = walk
suffix: java.lang.String = ed

Matching

Scala provides another very powerful way to encode conditional execution called matching. Match expressions have much in common with if-else blocks, but come with some nice extra features. We'll go back to the postag shortener, starting with a full listing of the tags and what to do in each case, like our first attempt with if-else.

def shortenPosMatch (tag: String) = tag match {
  case "NN" => "NN"
  case "NNS" => "NN"
  case "NNP" => "NN"
  case "NNPS" => "NN"
  case "JJ" => "JJ"
  case "JJR" => "JJ"
  case "JJS" => "JJ"
  case _ => tag
}

scala> shortenPosMatch("JJR")
res14: java.lang.String = JJ

Note that the last case, with the underscore "_", is the default action to take, similar to the "else" at the end of an if-else block. Compare this to the if-else function shortenPos from before, which had lots of repetition of the form "else if (tag == ". Match statements allow you to do the same thing, but much more concisely and, arguably, much more clearly. Of course, we can shorten this up.
def shortenPosMatch2 (tag: String) = tag match {
  case "NN" | "NNS" | "NNP" | "NNPS" => "NN"
  case "JJ" | "JJR" | "JJS" => "JJ"
  case _ => tag
}

This is quite a bit more readable than the if-else function shortenPos2 defined earlier. In addition to readability, match statements provide some logical protection. For example, if you accidentally have two cases that overlap, you'll get an error.

scala> def shortenPosMatchOops (tag: String) = tag match {
     |   case "NN" | "NNS" | "NNP" | "NNPS" => "NN"
     |   case "JJ" | "JJR" | "JJS" => "JJ"
     |   case "NN" => "oops"
     |   case _ => tag
     | }
<console>:10: error: unreachable code
       case "NN" => "oops"

This is an obvious example, but with more complex match options it can save you from bugs! We cannot use the startsWith method the same way we did in the if-else function shortenPos3; however, we can use regular expressions very nicely with match statements, which we'll get to in a later tutorial.

Where match statements really shine is that they can match on much more than just the values of simple variables like Strings and Ints. One use of matches is to check the types of the input to a function that can take a supertype of many types. Recall that Any is the supertype of all types; if we have the following function that takes an argument of any type, we can use matching to inspect what the type of the argument is and behave differently accordingly.
scala> def multitypeMatch (x: Any) = x match {
     |   case i: Int => "an Int: " + i*i
     |   case d: Double => "a Double: " + d/2
     |   case b: Boolean => "a Boolean: " + !b
     |   case s: String => "a String: " + s.length
     |   case (p1: String, p2: Int) => "a Tuple[String, Int]: " + p2*p2 + p1.length
     |   case (p1: Any, p2: Any) => "a Tuple[Any, Any]: (" + p1 + "," + p2 + ")"
     |   case _ => "some other type " + x
     | }
multitypeMatch: (x: Any)java.lang.String

scala> multitypeMatch(true)
res4: java.lang.String = a Boolean: false

scala> multitypeMatch(3)
res5: java.lang.String = an Int: 9

scala> multitypeMatch((1,3))
res6: java.lang.String = a Tuple[Any, Any]: (1,3)

scala> multitypeMatch(("hi",3))
res7: java.lang.String = a Tuple[String, Int]: 92

So, for example, if it is an Int we can do things like multiplication, if it is a Boolean we can negate it (with !), and so on. In each case statement, we provide a new variable that will have the type that is matched, and then after the arrow => we can use that variable in a type-safe manner. Later we'll see how to create classes (and in particular case classes), where this sort of matching-based function is used regularly. In the meantime, here's an example of a simple addition function that allows one to enter a String or Int to specify its arguments. For example, the behavior we desire is this:

scala> add(1,3)
res4: Int = 4

scala> add("one",3)
res5: Int = 4

scala> add(1,"three")
res6: Int = 4

scala> add("one","three")
res7: Int = 4

Let's assume that we only handle the spelled-out versions of 1 through 5, and that any string we cannot handle (e.g. "six" or "aardvark") is considered to be 0. Then the following two functions using matches handle it.
def convertToInt (x: String) = x match {
  case "one" => 1
  case "two" => 2
  case "three" => 3
  case "four" => 4
  case "five" => 5
  case _ => 0
}

def add (x: Any, y: Any) = (x,y) match {
  case (x: Int, y: Int) => x + y
  case (x: String, y: Int) => convertToInt(x) + y
  case (x: Int, y: String) => x + convertToInt(y)
  case (x: String, y: String) => convertToInt(x) + convertToInt(y)
  case _ => 0
}

Like if-else blocks, matches can return whatever type you like, including Tuples, Lists and more. Match blocks are used in many other useful contexts that we'll come to later. In the meantime, it is also worth pointing out that matching is actually used in variable assignment. We've seen it already with Tuples, but it can be done with Lists and other types.

scala> val (x,y) = (1,2)
x: Int = 1
y: Int = 2

scala> val colors = List("blue","red","yellow")
colors: List = List(blue, red, yellow)

scala> val List(color1, color2, color3) = colors
color1: java.lang.String = blue
color2: java.lang.String = red
color3: java.lang.String = yellow

This is especially useful with the args Array that comes from the command line when writing a Scala script. For example, consider a program that is run as follows.

$ scala nextYear.scala John 35
Next year John will be 36 years old.

Here's how we can do it. (Save the next two lines as nextYear.scala and try it out.)

val Array(name, age) = args
println("Next year " + name + " will be " + (age.toInt + 1) + " years old.")

Notice that we had to use age.toInt. That is because age itself is a String, not an Int. Conditional execution with if-else blocks and match blocks is a powerful part of building complex behaviors into your programs, and you'll see and use it frequently!

Reference: First steps in Scala for beginning programmers, Part 3 from our JCG partner Jason Baldridge at the Bcomposes blog.

Use java.util.prefs.Preferences instead of java.util.Properties

A typical installer for an application asks the user for a couple of options, some of which are configuration questions, e.g. the port on which the application should run, how it should run, etc. The application has to remember these options and use them on every run. The standard way of solving this problem is to write the options to a properties file that is loaded at application start-up. But then the problem just shifts to another area: remembering the install path and loading the properties file from that path. Remembering the install path is usually solved by setting an environment variable, e.g. MYAPP_HOME, initialized with the required value during installation so that it is available every time the application runs. This is the typical solution employed in most projects.

The Other Solution

The Preferences API provided by the JDK can be used to solve this problem. Preferences work just like properties, but unlike Properties they are persistent. Behind the scenes, when a preference is written it is stored in a backing store; when you ask for the preference, the value is loaded from that store. On a typical Windows machine the default store is the Windows registry, but the store is configurable and you can change it to whatever you like, e.g. a file. Writing a preference is straightforward. Unlike properties, which are String-based key-value pairs, preferences have String keys but can store values of all basic types, e.g. long, boolean, double, etc.

import java.util.prefs.Preferences;

public class StaticPreferenceFactory {
    public static void main(String args[]) throws Exception {
        Preferences prefsRoot = Preferences.userRoot();
        Preferences myPrefs = prefsRoot.node("com.myapp.preference.staticPreferenceLoader");
        myPrefs.put("fruit", "apple");
        myPrefs.putDouble("price", 40);
        myPrefs.putBoolean("available", false);
    }
}

Preferences also come in two flavors, just like we have system variables and user variables.
There is a system preference node that you get by calling systemRoot(), and a user preference node that you get by calling userRoot(). Once a preference is stored in a user node it is not accessible to other users of the system, just like user variables. You can clear the preferences you have written by calling the clear() API.

import java.util.prefs.Preferences;

public class UsePreference {
    public static void main(String args[]) throws Exception {
        Preferences myfilePrefs = Preferences.userRoot();
        myfilePrefs = myfilePrefs.node("com.myapp.preference.staticPreferenceLoader");
        System.out.println("finding fruit:" + myfilePrefs.get("fruit", "not found")
                + " available :" + myfilePrefs.getBoolean("available", true));
    }
}

Retrieving a preference is also straightforward, just like with properties. The get API takes two arguments: the key to be found, and a default value in case the key is not found. Spring also provides a PreferencesPlaceholderConfigurer that can be used to load preferences.

<bean id="preferencePlaceHolder" class="org.springframework.beans.factory.config.PreferencesPlaceholderConfigurer">
    <property name="userTreePath" value="com.myapp.preference.staticPreferenceLoader" />
</bean>

<bean id="myEntity" class="info.dependencyInjection.spring.factory.MyEntity">
    <property name="value" value="${fruit}" />
</bean>

For our installer problem we can store all the configuration options in Preferences while installing, and the application is then only concerned with reading those values. This way we avoid all the pain of writing environment variables and making sure we load the proper variables every time.

Reference: Use java.util.prefs.Preferences instead of java.util.Properties from our JCG partner Rahul Sharma at the "The road so far…" blog.
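Putting the pieces together, here is a minimal, self-contained sketch of the installer scenario (the node path and key names are invented for illustration): the "installer" writes a few options under a user node, the "application" reads them back with defaults, and the node is removed at the end so nothing lingers in the backing store.

```java
import java.util.prefs.Preferences;

public class InstallerPrefsDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical node path for our application's settings
        Preferences prefs = Preferences.userRoot().node("com.myapp.demo.installer");

        // The installer would write these once, at install time
        prefs.putInt("port", 8080);
        prefs.putBoolean("autostart", true);

        // The application reads them on every run; the second argument
        // is the default returned when the key is absent
        int port = prefs.getInt("port", 9090);
        boolean autostart = prefs.getBoolean("autostart", false);
        String installDir = prefs.get("installDir", "not set");

        System.out.println("port=" + port + " autostart=" + autostart
                + " installDir=" + installDir);

        // Clean up: remove this node (and its keys) from the backing store
        prefs.removeNode();
    }
}
```

Note that removeNode() invalidates the Preferences object, so it must be the last call on that node.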

GPGPU Java Programming

In one of our previous posts we discussed the General-Purpose computing on Graphics Processing Units (GPGPU) concepts and architecture. For C/C++ programmers this is all great, but for Java programmers writing C/C++ instead of Java is, to say the least, an inconvenience. So what tools are out there for Java programmers?

Before we dive into coding, some background. There are two competing GPGPU SDKs: OpenCL and CUDA. OpenCL is an open standard supported by all GPU vendors (namely AMD, NVIDIA and Intel), while CUDA is NVIDIA-specific and will work only on NVIDIA cards. Both SDKs support C/C++ code, which of course leaves us Java developers in the cold. So far there is no pure-Java OpenCL or CUDA support. This is not much help for the Java programmer who needs to take advantage of a GPU's massive parallelism potential, unless she fiddles with the Java Native Interface. Of course, there are some Java tools out there that ease the pain of GPGPU Java programming. The two most popular (IMHO) are jocl and jcuda. With these tools you still have to write C/C++ code, but at least only for the code that will be executed on the GPU, which minimizes the effort considerably. This time I will take a look at jcuda and see how we can write a simple GPGPU program.

Let's start by setting up a CUDA GPGPU Linux development environment (although Windows and Mac environments shouldn't be hard to set up either):

Step 1: Install an NVIDIA CUDA-enabled GPU in your computer. The NVIDIA Developers' site has a list of CUDA-enabled GPUs. New NVIDIA GPUs are almost certainly CUDA-enabled, but just in case check the card's specification to make sure.

Step 2: Install the NVIDIA driver and CUDA SDK. Download them and find installation instructions from here.

Step 3: Go to directory ~/NVIDIA_GPU_Computing_SDK/C/src/deviceQuery and run make.

Step 4: If the compilation was successful, go to directory ~/NVIDIA_GPU_Computing_SDK/C/bin/linux/release and run the file deviceQuery.
You will get lots of technical information about your card. For my GeForce GT 430 card, deviceQuery reported 2 Multiprocessors with 48 CUDA Cores each, totaling 96 cores; not bad for a low-end video card worth around 40 euros!

Step 5: Now that you have a CUDA environment, let's write and compile a CUDA program in C. Write the following code and save it as multiply.cu:

#include <stdio.h>

__global__ void multiply(float a, float b, float *c)
{
    *c = a * b;
}

int main()
{
    float a, b, c;
    float *c_pointer;
    a = 1.35;
    b = 2.5;

    cudaMalloc((void**)&c_pointer, sizeof(float));
    multiply<<<1,1>>>(a, b, c_pointer);
    cudaMemcpy(&c, c_pointer, sizeof(float), cudaMemcpyDeviceToHost);
    /* This is C!!! You manage your garbage on your own! */
    cudaFree(c_pointer);
    printf("Result = %f\n", c);
}

Compile it using the CUDA compiler and run it:

$ nvcc multiply.cu -o multiply
$ ./multiply
Result = 3.375000
$

So what does the above code do? The multiply function with the __global__ qualifier is called the kernel and is the actual code that will be executed on the GPU. The code in the main function is executed on the CPU as normal C code, although there are some semantic differences:

- The multiply function is called with the <<<1,1>>> brackets. The two numbers inside the brackets tell CUDA how many times the code should be executed. CUDA enables us to create what are called one-, two-, or even three-dimensional thread blocks. The numbers in this example indicate a single thread block running in one dimension, thus our code will be executed 1×1=1 time.
- The cudaMalloc, cudaMemcpy, and cudaFree functions are used to handle GPU memory in a similar fashion to how we handle the computer's normal memory in C. The cudaMemcpy function is important since the GPU has its own RAM, and before we can process any data in the kernel we need to load that data into GPU memory.
Of course, we also need to copy the results back to normal memory when done.

Now that we have the basics of how to execute code on the GPU, let's see how we can run GPGPU code from Java. Remember, the kernel code will still be written in C, but at least the main function is now Java code, with the help of jcuda. Download the jcuda binaries, unzip them, and make sure that the directory containing the .so files (or .dll for Windows) is either given in the java.library.path parameter of the JVM or appended to your LD_LIBRARY_PATH environment variable (or your PATH variable on Windows). Similarly, the jcuda-xxxxxxx.jar file must be on your classpath during compilation and execution of your Java program. So now that we have jcuda set up, let's have a look at our jcuda-compatible kernel:

extern "C"
__global__ void multiply(float *a, float *b, float *c)
/*************** Kernel Code **************/
{
    c[0] = a[0] * b[0];
}

You will notice the following differences from the previous kernel method:

- We use the extern "C" qualifier to tell the compiler not to mangle the multiply method name, so we can call it by its original name.
- We use arrays instead of primitives for a, b and c. This is required by jcuda, since Java primitives are not supported: in jcuda, data are passed back and forth to the GPU as arrays of floats, integers, and so on.

Save this file as multiply2.cu. This time we don't want to compile the file as an executable, but rather as a CUDA library that will be called from within our Java program. We can compile our kernel either as a PTX file or a CUBIN file. PTX files are human-readable files containing assembly-like code that will be compiled on the fly. CUBIN files are compiled CUda BINaries and can be called directly without on-the-fly compilation.
Unless you need optimal start-up performance, PTX files are preferable because they are not tied to the specific Compute Capability of the GPU they were compiled with, while CUBIN files will not run on GPUs with a lesser compute capability. In order to compile our kernel, type the following:

$ nvcc -ptx multiply2.cu -o multiply2.ptx

Having successfully created our PTX file, let's have a look at the Java equivalent of the main method we used in our C example:

import static jcuda.driver.JCudaDriver.*;
import jcuda.*;
import jcuda.driver.*;
import jcuda.runtime.JCuda;

public class MultiplyJ {
    public static void main(String[] args) {

        float[] a = new float[] {(float)1.35};
        float[] b = new float[] {(float)2.5};
        float[] c = new float[1];

        cuInit(0);
        CUcontext pctx = new CUcontext();
        CUdevice dev = new CUdevice();
        cuDeviceGet(dev, 0);
        cuCtxCreate(pctx, 0, dev);

        CUmodule module = new CUmodule();
        cuModuleLoad(module, "multiply2.ptx");
        CUfunction function = new CUfunction();
        cuModuleGetFunction(function, module, "multiply");

        CUdeviceptr a_dev = new CUdeviceptr();
        cuMemAlloc(a_dev, Sizeof.FLOAT);
        cuMemcpyHtoD(a_dev, Pointer.to(a), Sizeof.FLOAT);

        CUdeviceptr b_dev = new CUdeviceptr();
        cuMemAlloc(b_dev, Sizeof.FLOAT);
        cuMemcpyHtoD(b_dev, Pointer.to(b), Sizeof.FLOAT);

        CUdeviceptr c_dev = new CUdeviceptr();
        cuMemAlloc(c_dev, Sizeof.FLOAT);

        Pointer kernelParameters = Pointer.to(
            Pointer.to(a_dev),
            Pointer.to(b_dev),
            Pointer.to(c_dev)
        );

        cuLaunchKernel(function, 1, 1, 1, 1, 1, 1, 0, null, kernelParameters, null);
        cuMemcpyDtoH(Pointer.to(c), c_dev, Sizeof.FLOAT);

        JCuda.cudaFree(a_dev);
        JCuda.cudaFree(b_dev);
        JCuda.cudaFree(c_dev);

        System.out.println("Result = " + c[0]);
    }
}

OK, that looks like a lot of code for just multiplying two numbers, but remember there are limitations regarding Java and C pointers. So, starting with lines 9 through 11, we convert our a, b and c parameters into arrays named a, b and c, each containing only one float number.
In lines 13 to 17 we tell jcuda that we will be using the first GPU in our system (it is possible to have more than one GPU in high-end systems). In lines 19 to 22 we tell jcuda where our PTX file is and the name of the kernel method we would like to use (in our case, multiply). Things get interesting in line 24, where we use a special jcuda class, CUdeviceptr, which acts as a pointer placeholder. In line 25 we use the CUdeviceptr pointer we just created to allocate GPU memory. Note that if our array had more than one item, we would need to multiply the Sizeof.FLOAT constant by the number of elements in the array. Finally, in line 26 we copy the contents of our first array to the GPU. Similarly, we create a pointer and copy the contents to GPU RAM for our second array (b). For our output array (c) we only need to allocate GPU memory for now. In line 35 we create a Pointer object that will hold all the parameters we want to pass to our multiply method. We execute our kernel code in line 41, where we call the cuLaunchKernel utility method, passing the function and the kernel parameters. The first six parameters after the function parameter define the grid and block dimensions (a grid is a group of blocks; a block is a group of threads), which in our example are all 1, as we will execute the kernel only once. The next two parameters are 0 and null; they are used for declaring any shared memory (memory that can be shared among threads) we may have defined, in our case none. The next parameter contains the Pointer object we created with our a, b and c device pointers, and the last parameter is for additional options. After our kernel returns, we simply copy the contents of c_dev back to our c array, free all the memory we allocated on the GPU, and print the result stored in c[0], which is of course the same as in our C example.
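The single-element-array convention that jcuda imposes is easy to picture in plain Java: since Java has no out-parameters for primitives, a one-element array stands in for the C pointer. The following CPU-only analogue of the kernel (our own illustration, with no jcuda involved) shows the idea:

```java
public class MultiplyCpu {
    // Mirrors the kernel signature: inputs and output are one-element arrays,
    // with c[0] standing in for *c in the C version
    static void multiply(float[] a, float[] b, float[] c) {
        c[0] = a[0] * b[0];
    }

    public static void main(String[] args) {
        float[] a = { 1.35f };
        float[] b = { 2.5f };
        float[] c = new float[1];
        multiply(a, b, c);
        System.out.println("Result = " + c[0]);
    }
}
```

The caller sees the result through the shared array, exactly as the Java host code above sees the kernel's result after cuMemcpyDtoH.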
Here is how we compile and execute the MultiplyJ.java program (assuming multiply2.ptx is in the same directory):

$ javac -cp ~/GPGPU/jcuda/JCuda-All-0.4.0-beta1-bin-linux-x86_64/jcuda-0.4.0-beta1.jar MultiplyJ.java

$ java -cp ~/GPGPU/jcuda/JCuda-All-0.4.0-beta1-bin-linux-x86_64/jcuda-0.4.0-beta1.jar:. MultiplyJ
Result = 3.375
$

Note that in this example the directory ~/GPGPU/jcuda/JCuda-All-0.4.0-beta1-bin-linux-x86_64 is already in my LD_LIBRARY_PATH, so I don't need to set the java.library.path parameter on the JVM. Hopefully by now the mechanics of jcuda are clear, although we haven't really touched on the GPU's true power, which is massive parallelism. In a future article I will provide an example of how to run parallel threads in CUDA using Java, accompanied by an example of what NOT to run on the GPU. GPU processing makes sense for very specialized tasks; most tasks are better left to our old and trusted CPU.

Reference: GPGPU Java Programming from our W4G partner Spyros Sakellariou.
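When we do get to parallel kernels, the grid and block numbers passed to the launch stop being all 1s: for N data elements and a fixed block size, you launch ceil(N / blockSize) blocks, and each GPU thread guards against running past the end of the data. That sizing arithmetic is plain integer math, sketched here in Java (the names are ours, not part of the CUDA or jcuda API):

```java
public class LaunchConfig {
    // Classic ceiling division: how many blocks of `blockSize` threads
    // are needed to cover `n` elements
    static int blocksFor(int n, int blockSize) {
        return (n + blockSize - 1) / blockSize;
    }

    public static void main(String[] args) {
        int n = 1000;        // number of data elements
        int blockSize = 256; // threads per block, a common choice

        int blocks = blocksFor(n, blockSize);
        int totalThreads = blocks * blockSize;

        // 4 blocks x 256 threads = 1024 threads; the last 24 simply do no work
        System.out.println(blocks + " blocks, " + totalThreads
                + " threads for " + n + " elements");
    }
}
```

The slight overshoot is why parallel kernels conventionally begin with an index bounds check before touching the data.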
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.