Bad Things Happen to Good Code

We need to understand what happens to code over time and why, and what a healthy, long-lived code base looks like. Which architectural decisions have the most lasting impact, and which decisions made early will make the most difference over the life of a system?

Forces of Compromise

Most of the discussion around technical debt assumes that code degrades over time because of sloppiness and lazy coding practices and poor management decisions, by programmers who don’t know or don’t care about what they are doing or who are forced to take short-cuts under pressure. But it’s not that simple. Code is subject to all kinds of pressures and design compromises, big and small, in the real world. Performance optimization trade-offs can force you to bend the design and code in ways that were never expected. Dealing with operational dependencies and platform quirks and run-time edge cases also adds complexity. Then there are regulatory requirements – things that don’t fit the design and don’t necessarily make sense but you have to do anyway. And customization: customer-specific interfaces and options and custom workflow variants and custom rules and custom reporting, all to make someone important happy. Integration with other systems and API lock-in and especially maintaining backwards compatibility with previous versions can all make for ugly code. Michael Feathers, who I think is doing the most interesting and valuable work today in understanding what happens to code and what should happen to code over time, has found that code around APIs and other hard boundaries becomes especially messy – because some interfaces are so hard to change, programmers are forced to do extra work (and workarounds) behind the scenes.
All of these forces contribute to making a system more complex, harder to understand, harder to change and harder to test over time – and harder to love.

Iterative Development is Erosive

In Technical Debt, Process and Culture, Feathers explains that “generalized entropy in software systems” is inevitable, the result of constant and normal wear and tear in an organization. As more people work on the same code, the design will naturally deteriorate as each person interprets the design in their own way and makes their own decisions on how to do something. What’s interesting is that the people working with this code can’t see how much of the design has been lost, because their familiarity with the code makes it appear simpler and clearer than it really is. It’s only when somebody new joins the team that it becomes apparent how bad things have become. Feathers also suggests that highly iterative development accelerates entropy, and that code which is written iteratively is qualitatively different from code in systems where the team spent more time in upfront design. Iterative development and maintenance tend to bias towards the existing structure of the system, meaning that more compromises will end up being made. Iterative design and development involves making a lot of small mistakes, detours and false starts as you work towards the right solution. Testing out new ideas in production through A/B split testing amplifies this effect, creating more options and complexity. As you work this way, some of the mistakes and decisions that you make won’t get unmade – you either don’t notice them, or it’s not worth the cost. So you end up with dead abstractions and dead ends, design ideas that aren’t meaningful any more or are harder to work with than they should be.
Some of this will be caught and corrected later in the course of refactoring, but the rest of it becomes too pervasive and expensive to justify ripping out.

Dealing with Software Sprawl

Software, at least software that gets used, gets bigger and more complicated over time – it has to, as you add more features and interfaces and deal with more exceptions and alternatives and respond to changing laws and regulations. Capers Jones’ analysis shows that the size of the code base for a system under maintenance will increase by 5% to 10% per year. Our own experience bears this out – the code base for our systems has doubled in size in the last 5 years. As the code gets bigger it also gets more complex – code complexity tends to increase by an average of 1% to 3% per year. Some of this is real, essential complexity – not something that you can wish your way out of. But the rest is due to how changes and fixes are done. Feathers has confirmed by mining code check-in history (Discovering Startling Things from your Version Control System) that most systems have a common shape or “power curve”. Most code is changed only infrequently or not at all, but the small percentage of methods and classes in the system that are changed a lot tend to get bigger and more complex over time. This is because it is easier to add code to an existing method than to add a new method, and easier to add another method to an existing class than to add a new class. The key to keeping a code base healthy is disciplined refactoring of this code, taking the time to come up with new and better abstractions, and preventing the code from festering and becoming malignant. There is also one decision upfront that has a critical impact on the future health of a code base.
Capers Jones has found that the most important factor in how well a system ages is, not surprisingly, how complex the design was in the beginning: “The rate of entropy increase, or the increase in cyclomatic complexity, seems to be proportional to the starting value. Applications that are high in complexity when released will experience much faster rates of entropy or structural decay than applications put into production with low complexity levels” (The Economics of Software Quality). Systems that were poorly designed only get worse – but Jones has found that systems that were well-designed can actually get better over time. Reference: Bad Things Happen to Good Code from our JCG partner Jim Bird at the Building Real Software blog....

Android: Multi-touch gestures controller

A few of our projects (ours or clients’) required us to implement multi-touch features for manipulating images with the standard gestures (drag&drop, rotate, zoom). While implementing a multi-touch controller is a challenging and fun thing to do, it consumes a lot of time, and if there is something ready that simply works, why reinvent the wheel? So we decided to do a little research to see which of the open-source projects out there best suits our needs. Without mentioning the controllers and projects that we didn’t like or didn’t find compelling, I will say that our experience with a few demo projects we built back then showed that the best multi-touch controller, with easy-to-implement logic and easy-to-understand syntax, was Luke Hutchison‘s Android Multitouch Controller. The sources are available under the terms of the MIT License.

What is a multitouch controller?

The term is not general. What we refer to as a ‘multitouch controller’ is a piece of code that makes it convenient to implement manipulative operations over entities shown on the screen (views, bitmaps, etc.). That means the multi-touch controller wraps the logic behind the user’s input through the touch-screen. So when the user puts down one finger and starts moving it over the touch-screen, the controller should ‘point’ to the part of the code where the developer is supposed to put the logic for what should happen then. The controller ‘knows’ whether it’s a one-finger touch, a pinch gesture or a rotate gesture, or all of them at once (wherever possible). To be more precise, the multi-touch controller will not make your graphics move over the screen magically just by linking a library or code to your app. Think of it as an interface that you need to implement so that you can make your objects move; you’ll still need to figure out how. The already mentioned multitouch controller of our choice is, well, the Android Multitouch Controller.
What you need to do first as preparation is create an empty Android project and download the MultiTouchController.java file. 90% of what we need for this tutorial is here.

Necessities

So far the examples for the Android Multitouch Controller are given using a batch of resources (Drawables) which can later be modified over a Canvas. Our idea is to simplify this process and make only one movable object over a canvas. So we’ll go with a simple Bitmap drawn over the canvas, which can later be moved/scaled/rotated from its initial position. We need two things at first: a View that implements the MultiTouchObjectCanvas interface, and a custom object/entity/widget which contains the logic for its drawing. This object is the ‘manipulative’ one and it contains the Bitmap that represents it. So when we say the Canvas, we mean a custom View that you need to create, which implements the MultiTouchObjectCanvas interface (where T is the Pinch object that is to be modified). This interface tells us whether we have a draggable object or widget at the point where the user has touched the screen. If so, the implementation will return the relevant object, otherwise null. To use this interface we need to implement a couple of methods without which we cannot achieve results. Anyway, the IDE will tell you what you must implement; these are the methods (among the constructors from the parent View class etc.):

@Override
public T getDraggableObjectAtPoint(PointInfo touchPoint) {
    return null;
}

This method, as you can see, returns the object of interest at the place where the user has made the touch input. Meaning, if the user touched the point x:120;y:100, this method should check whether this point is in the area occupied by the widget. When you see the full implementation you’ll know how this is done.
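As a rough illustration (not code from the controller itself), that hit test can come down to a bounding-box check against the widget's current bounds. The field names mirror the PinchWidget shown later, but this class is a simplified, hypothetical stand-in; real code would also account for scale and rotation:

```java
// Simplified stand-in for the widget's hit test; the bounds fields mimic
// PinchWidget's mMinX/mMinY/mMaxX/mMaxY, but this class is illustrative only.
class WidgetBounds {
    final float mMinX, mMinY, mMaxX, mMaxY;

    WidgetBounds(float minX, float minY, float maxX, float maxY) {
        mMinX = minX;
        mMinY = minY;
        mMaxX = maxX;
        mMaxY = maxY;
    }

    // True when the touch at (x, y) falls inside the widget's bounding box.
    boolean containsPoint(float x, float y) {
        return x >= mMinX && x <= mMaxX && y >= mMinY && y <= mMaxY;
    }
}

public class HitTestDemo {
    public static void main(String[] args) {
        WidgetBounds widget = new WidgetBounds(100, 80, 400, 380);
        System.out.println(widget.containsPoint(120, 100)); // true: x:120;y:100 is inside
        System.out.println(widget.containsPoint(500, 100)); // false: outside the widget
    }
}
```

In getDraggableObjectAtPoint you would run such a check against each widget and return the one that contains the touch point, or null when none does.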
@Override
public void getPositionAndScale(T obj, PositionAndScale objPosAndScaleOut) {
}

This method operates on the PositionAndScale object for the widget that is touched. The widget is passed as the first argument and the PositionAndScale as the second, and that is pretty much self-descriptive: when the ‘obj’ object is touched, apply the ‘objPosAndScaleOut’ attributes for the position, scale and angle. But this is only for the initial position on the screen; what actually makes the motion is the next obligatory method.

@Override
public boolean setPositionAndScale(T obj, PositionAndScale newObjPosAndScale, PointInfo touchPoint) {
    return false;
}

Again, we have the widget as the first argument, the new object’s position/scale/angle properties as the second, and a helper PointInfo object which tells us whether it’s a multi-touch or single (drag) gesture.

@Override
public void selectObject(T obj, PointInfo touchPoint) {
}

This method takes care of informing the controller which object is being selected; the ‘obj’ argument holds the selected object. So when we implement these methods and have set up the multitouch controller object, we can implement the widget logic which holds the data behind the item(s) drawn over the canvas. Yes, in that case we need a MultiTouchController object, which needs to be defined like this:

private MultiTouchController<PinchWidget> mMultiTouchController = new MultiTouchController<PinchWidget>(this);

And what to do with this? Well, when the touch event happens, we need to pass ‘everything’ to this controller. That is done by overriding onTouchEvent for the custom View/Canvas:

@Override
public boolean onTouchEvent(MotionEvent ev) {
    return mMultiTouchController.onTouchEvent(ev);
}

And with this and a few other things, we have the essential Canvas that takes care of the objects it draws. So we need one more thing: the logic for the Pinch widget with its Bitmap, coordinates, etc.
So we create a PinchWidget object, which is a modification of some of the examples that come with the Multitouch Controller. The essence of this object lies in these two methods:

public boolean setPos(PositionAndScale newImgPosAndScale, int uiMode, int uiModeAnisotropic, boolean isMultitouch) {
    boolean ret = false;
    float x = newImgPosAndScale.getXOff();
    float y = newImgPosAndScale.getYOff();
    if (isMultitouch) {
        x = mCenterX;
        y = mCenterY;
    }
    ret = setPos(x, y,
            (uiMode & uiModeAnisotropic) != 0 ? newImgPosAndScale.getScaleX() : newImgPosAndScale.getScale(),
            (uiMode & uiModeAnisotropic) != 0 ? newImgPosAndScale.getScaleY() : newImgPosAndScale.getScale(),
            newImgPosAndScale.getAngle());
    return ret;
}

and

private boolean setPos(float centerX, float centerY, float scaleX, float scaleY, float angle) {
    float ws = (mImage.getWidth() / 2) * scaleX, hs = (mImage.getHeight() / 2) * scaleY;
    float newMinX = centerX - ws, newMinY = centerY - hs, newMaxX = centerX + ws, newMaxY = centerY + hs;
    mCenterX = centerX;
    mCenterY = centerY;
    mScaleFactor = scaleX;
    mAngle = angle;
    mMinX = newMinX;
    mMinY = newMinY;
    mMaxX = newMaxX;
    mMaxY = newMaxY;
    return true;
}

These two methods make the movement possible, since they track the coordinates, scale factor and angle of the PinchWidget. They are coupled: the public method extracts the offsets, scale and angle from the controller and delegates to the private one, which recomputes and stores the widget’s bounds. So now we have the coordinates of the object, its scale and its angle.
We just need to draw it using its draw(Canvas) method:

public void draw(Canvas canvas) {
    Paint itemPaint = new Paint();
    itemPaint.setAntiAlias(true);
    itemPaint.setFilterBitmap(true);
    float dx = (mMaxX + mMinX) / 2;
    float dy = (mMaxY + mMinY) / 2;
    canvas.save();
    canvas.translate(dx, dy);
    canvas.rotate(mAngle * 180.0f / (float) Math.PI);
    canvas.translate(-dx, -dy);
    Rect srcRect = new Rect(0, 0, mImage.getWidth(), mImage.getHeight());
    Rect dstRect = new Rect((int) mMinX, (int) mMinY, (int) mMaxX, (int) mMaxY);
    canvas.drawBitmap(mImage, srcRect, dstRect, itemPaint); // pass the configured Paint so anti-aliasing and filtering take effect
    canvas.restore();
}

mImage is the Bitmap of the item/widget we draw. It must be drawn to see something, and to draw it we need its source and destination rects (in this kind of implementation). The source is the size of the image (in our case Rect[0,0,300,300]) and the destination (where it is to be drawn) is calculated from the init and setPos methods of PinchWidget. Then the image is drawn the same way any other Bitmap is drawn, using the drawBitmap(...) method. Now back to the MultiTouchView implementation. As mentioned before, since we have declared this View in XML, we’ll make use of the public MultiTouchView(Context context, AttributeSet attrs) constructor. There we initialize the Context. We also have this method:

public void setPinchWidget(Bitmap bitmap) {
    mPinchWidget = new PinchWidget(bitmap);
    mPinchWidget.init(mContext.getResources());
}

This method tells the MultiTouchView what our PinchWidget is. It is where the widget is created, and by calling init() (Resources is passed here just to calculate the display’s width and height), we kick off the whole mechanism that will draw the widget. From this View, that happens in its onDraw() method:

@Override
public void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    canvas.drawColor(Color.WHITE);
    mPinchWidget.draw(canvas);
}

Pretty simple, ain’t it?
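One detail in draw() worth calling out: the controller reports the angle in radians, while Canvas.rotate expects degrees, hence the mAngle * 180.0f / Math.PI expression. It is equivalent to the standard library's Math.toDegrees, as this small check shows:

```java
public class AngleConversion {
    public static void main(String[] args) {
        float mAngle = (float) (Math.PI / 2); // a quarter turn, in radians
        // The expression used in draw():
        float degrees = mAngle * 180.0f / (float) Math.PI;
        // The stdlib equivalent:
        double viaStdlib = Math.toDegrees(mAngle);
        System.out.println(degrees + " ~ " + viaStdlib); // both come out to ~90 degrees
    }
}
```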
So if everything goes as explained and you get the idea behind this kind of implementation, you’ll see something like this on the screen.

Conclusion

MultiTouch operations in Android are basically the same math you’ll need to do on other platforms. But when you don’t have the time for this kind of project, the Android Multitouch Controller is a time-saving tool. When you download it, do read the documented methods, do look at the elegant and nice code, and don’t forget to thank the developer for what he has done. It may be worth mentioning that our research into which Multitouch Controller to use in our present and future apps took almost a year (9 months to be precise). The sources of the sample app are available on our GitHub repository. Next we’ll try to cover what needs to be done to show many Bitmaps over the Canvas view. Or if you figure it out yourself, don’t hesitate to write a tutorial and we’ll be happy to publish it here. Happy coding and don’t forget to share! Reference: Android: Multi-touch gestures controller from our JCG partner Aleksandar Balalovski at the 2dwarfs blog....

Mocking with JodaTime’s DateTime and Google Guava’s Supplier

Introduction

If you’re a seasoned unit tester, you’ve learned to take note when you see any code working with time, concurrency, random numbers, persistence and disk I/O. The reason is that tests around such code can be very brittle and sometimes downright impossible to write properly. This post will show how to abstract out ‘time’ by injecting a replacement for it in the consumer. The post uses Spring 3 as the Dependency Injection container, though Guice, other DI containers or constructors/setters on POJOs would work as well. I will also ignore Locales, since the focus is on the injection of the DateTime, not DateTime itself.

Existing code

You’ve been handed a piece of code to unit test (or you are creating one and this is your first stab at it). Our first piece of code is only one class, a Spring 3.1 controller whose purpose is to return the current time as a String:

@Controller
@RequestMapping(value = "/time")
@VisibleForTesting
class TimeController {

    @RequestMapping(value = "/current", method = RequestMethod.GET)
    @ResponseBody
    public String showCurrentTime() {
        // BAD!!! Can't test
        DateTime dateTime = new DateTime();
        return DateTimeFormat.forPattern("hh:mm").print(dateTime);
    }
}

Take note that the class does a ‘new DateTime()’ inside the method. The corresponding test class compares the returned string against a DateTime it constructs itself. What happens when we run the test? Assume we have a very slow machine: you could (and most likely will) end up with a comparison DateTime that is different from the returned one. This is a problem! The first thing to do is to remove the dependency, but how? If we make the DateTime a field on the class, we will still have the same problem. Enter Google Guava’s Supplier interface.

Google Guava Supplier

The Supplier interface has only one method, ‘get()’, which returns an instance of whatever the supplier is set up for.
An example: the supplier will return a user’s first name if they have logged in, and a default one if they have not:

public class FirstNameSupplier implements Supplier<String> {

    private String value;
    private static final String DEFAULT_NAME = "GUEST";

    public FirstNameSupplier() {
        // Just believe that this goes and gets a User from somewhere
        String firstName = UserUtilities.getUser().getFirstName();
        // more Guava
        if (isNullOrEmpty(firstName)) {
            value = DEFAULT_NAME;
        } else {
            value = firstName;
        }
    }

    @Override
    public String get() {
        return value;
    }
}

To the implementing method, you don’t care what the first name is, only that you get one.

Refactoring out DateTime

Let’s move on. For a much more real example of using a Supplier (and the point of this post), let’s implement a DateTime supplier to give us back the current DateTime. While we’re at it, let’s also create an interface so that we can create mock implementations for testing:

public interface DateTimeSupplier extends Supplier<DateTime> {
    DateTime get();
}

and an implementation:

public class DateTimeUTCSupplier implements DateTimeSupplier {
    @Override
    public DateTime get() {
        return new DateTime(DateTimeZone.UTC);
    }
}

Now we can take the DateTimeUTCSupplier and inject it into the code that needs the current DateTime as the DateTimeSupplier interface:

@Controller
@RequestMapping(value = "/time")
@VisibleForTesting
class TimeController {

    @Autowired
    @VisibleForTesting // Injected DateTimeSupplier
    DateTimeSupplier dateTime;

    @RequestMapping(value = "/current", method = RequestMethod.GET)
    @ResponseBody
    public String showCurrentTime() {
        return DateTimeFormat.forPattern("hh:mm").print(dateTime.get());
    }
}

In order to test this, we’ll need to create a MockDateTimeSupplier with a constructor to pass in the specific instance we want to return:

public class MockDateTimeSupplier implements DateTimeSupplier {

    private final DateTime mockedDateTime;

    public MockDateTimeSupplier(DateTime mockedDateTime) {
        this.mockedDateTime = mockedDateTime;
    }

    @Override
    public DateTime get() {
        return mockedDateTime;
    }
}

Notice that the object being returned is passed in via the constructor. This will not get you the current date/time back; it will return the specific instance you want. And finally, our test that exercises (slightly) the TimeController we implemented above:

public class TimeControllerTest {

    private final int HOUR_OF_DAY = 12;
    private final int MINUTE_OF_DAY = 30;

    @Test
    public void testShowCurrentTime() throws Exception {
        TimeController controller = new TimeController();
        // Create the mock DateTimeSupplier with our known DateTime
        controller.dateTime = new MockDateTimeSupplier(new DateTime(2012, 1, 1, HOUR_OF_DAY, MINUTE_OF_DAY, 0, 0));

        // Call our method
        String dateTimeString = controller.showCurrentTime();

        // Using hamcrest for easier to read assertions and condition matchers
        assertThat(dateTimeString, is(String.format("%d:%d", HOUR_OF_DAY, MINUTE_OF_DAY)));
    }
}

Conclusion

This post has shown how to use Google Guava’s Supplier interface to abstract out a DateTime object so you can better design your implementations with unit testing in mind! Suppliers are a great way to solve the ‘just give me something’ problem; mind you, it’s a known type of something. Happy coding and don’t forget to share! Reference: Mocking with JodaTime’s DateTime and Google Guava’s Supplier from our JCG partner Mike at the Mike’s site blog....
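As the introduction notes, plain constructors/setters on POJOs work just as well as Spring. A dependency-free sketch of the same pattern, using java.time and a hand-rolled supplier interface in place of JodaTime and Guava (TimeSupplier and the names below are illustrative, not from the original post):

```java
import java.time.LocalTime;
import java.time.format.DateTimeFormatter;

// Illustrative stand-in for the post's DateTimeSupplier.
interface TimeSupplier {
    LocalTime get();
}

class InjectedTimeController {
    private final TimeSupplier timeSupplier;

    // The clock arrives through the constructor instead of @Autowired.
    InjectedTimeController(TimeSupplier timeSupplier) {
        this.timeSupplier = timeSupplier;
    }

    String showCurrentTime() {
        return timeSupplier.get().format(DateTimeFormatter.ofPattern("hh:mm"));
    }
}

public class ConstructorInjectionDemo {
    public static void main(String[] args) {
        // Production wiring would pass () -> LocalTime.now();
        // a test passes a fixed, known time instead:
        InjectedTimeController controller =
                new InjectedTimeController(() -> LocalTime.of(12, 30));
        System.out.println(controller.showCurrentTime()); // prints 12:30
    }
}
```

Either style keeps ‘now’ out of the class under test; the Spring version above simply lets the container do the wiring.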

Apache Hive on Windows in 6 easy steps

Note: You need to have Cygwin installed to run this tutorial, as Hadoop (needed by Hive) needs Cygwin to run on Windows. At a minimum, the Basic, Net (OpenSSH, tcp_wrapper packages) and Security related Cygwin packages need to be present on the system.

Here are the 6 steps:

1. Download WSO2 BAM 2.0.0. It’s free and open source.
2. Extract it to a preferred location. Let’s call it $BAM_HOME.
3. Start the server by executing the wso2server.bat file present in $BAM_HOME/bin. The server starts up on the default port 9443 on the machine’s IP.
4. Log in to the web console at https://localhost:9443 using the default credentials (username: admin, password: admin) and clicking “Sign-In”. [Screenshot: WSO2 BAM login screen]
5. Navigate to the “Add Analytics” option by clicking the menu item on the left hand menu. [Screenshot: WSO2 BAM left hand menu – add Analytics option]
6. Now execute your Hive script, by entering the script and clicking execute! [Screenshot: Execute Apache Hive script]

Note: Follow this KPI sample document to see a sample working for you in no time, with results appearing on a dashboard. Also, notice that you can schedule the Hive script as well.

I have to thank my colleague Buddhika Chamith, as all this was possible because of some grueling work done by him. Also, I hate the fact that Hadoop and Hive make it so hard to run things on Windows, especially since they are Java applications. Read about those concerns here. Don’t forget to share! Reference: HOWTO: Run Apache Hive on Windows in 6 easy steps from our JCG partner Mackie Mathew at the dev_religion blog....

Android: Finding the SD Card Path

Finding the SD Card path in Android is easy, right? All you have to do is use Environment.getExternalStorageDirectory(), and you’re good to go! Well, not quite. After all, that’s what StackOverflow says. If you actually try the above method on a Samsung device, life won’t be fun for you: Environment.getExternalStorageDirectory() actually returns the incorrect path on most Samsung devices. That’s how I came across the issue. It turns out the above method doesn’t actually guarantee that it will return the SD Card directory. According to Android’s API documentation, “In devices with multiple ‘external’ storage directories (such as both secure app storage and mountable shared storage), this directory represents the ‘primary’ external storage that the user will interact with.” So the call doesn’t guarantee that the path returned truly points to the SD Card. There are a few other ways to get an “external” path on the device where files can be stored, though, like getExternalFilesDir(). There are also a few other tricks to actually get the path of the SD Card directory. The code below works on most Android devices (Samsung included). It’s a pretty hacky solution, though, and who knows how long this trick will actually work (source).
Instead of using the code below, it may be better to ask the question, “do I really need the SD Card directory, or just a path that I can store files to?”

File file = new File("/system/etc/vold.fstab");
FileReader fr = null;
BufferedReader br = null;
String path = null;
try {
    fr = new FileReader(file);
} catch (FileNotFoundException e) {
    e.printStackTrace();
}
try {
    if (fr != null) {
        br = new BufferedReader(fr);
        String s = br.readLine();
        while (s != null) {
            if (s.startsWith("dev_mount")) {
                String[] tokens = s.split("\\s");
                path = tokens[2]; // mount_point
                if (!Environment.getExternalStorageDirectory().getAbsolutePath().equals(path)) {
                    break;
                }
            }
            s = br.readLine();
        }
    }
} catch (IOException e) {
    e.printStackTrace();
} finally {
    try {
        if (fr != null) {
            fr.close();
        }
        if (br != null) {
            br.close();
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}

Happy coding and don’t forget to share! Reference: Android Tutorial: Finding the SD Card Path from our JCG partner Isaac Taylor at the Programming Mobile blog....
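The fragile part of that snippet is the tokenizing of each dev_mount line. Pulled out into a plain method (the class and sample line below are our own illustration, not from the original post), the parsing logic can at least be exercised without a device:

```java
// Extracted sketch of the parsing step above: on a dev_mount line of
// vold.fstab, the mount point is the third whitespace-separated token.
public class VoldFstabParser {

    // Returns the mount point for a dev_mount line, or null for any other line.
    static String parseMountPoint(String line) {
        if (line == null || !line.startsWith("dev_mount")) {
            return null;
        }
        String[] tokens = line.split("\\s");
        return tokens.length > 2 ? tokens[2] : null; // tokens[2] is mount_point
    }

    public static void main(String[] args) {
        // An illustrative line in the vold.fstab format (not from a real device):
        String sample = "dev_mount sdcard /mnt/sdcard auto /devices/platform/mmc_host/mmc0";
        System.out.println(parseMountPoint(sample));        // prints /mnt/sdcard
        System.out.println(parseMountPoint("# comment"));   // prints null
    }
}
```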

JavaOne 2012: Diagnosing Your Application on the JVM

It was worth attending Staffan Larsen‘s (Oracle Java Serviceability Architect) presentation ‘Diagnosing Your Application on the JVM‘ (Hilton Plaza A/B) just to learn about the new jcmd command-line tool provided with Oracle’s JVM 7. The rest of the presentation was ‘bonus’ for me, which was nice for the last session I attended on Wednesday of JavaOne 2012.

The Oracle HotSpot JDK provides jcmd, a command-line tool designed to be both backwards compatible and forward adaptable for future versions of Java. It is designed to support new tools and features that come with new SDKs in a standardized approach. The first screen snapshot shows it used for the most basic jps-like functionality (Larsen mentioned jps almost as briefly as I just did and referred to jcmd as ‘like jps but more powerful’). As that image shows, jcmd can be used like jps. Larsen showed some handy features of the jcmd command. He had some small sample Java applications that helped him demonstrate jcmd. For my purposes, I’m running JConsole in one terminal on my machine and then running jcmd commands against the JVM in which JConsole is running. The next screen snapshot shows how the basic (no arguments) jcmd call provides information on that JConsole process.

jcmd supports execution against JVM processes either by process ID (pid) or by process name. The next screen snapshot shows running jcmd against the JConsole process by that name and passing it help to see which options can be run against that particular process. Note that I tried unsuccessfully to run this against ‘dustin’ (no existing process) to prove that jcmd is really showing options available for running processes. The feature demonstrated in that screen snapshot is one of the most compelling reasons for moving from the existing command-line tools provided with the Oracle JDK to jcmd.
It shows how jcmd can provide a list of the available options on a per-process basis, allowing for ultimate flexibility in supporting past or future versions of Java with different or new commands. Just as jcmd <pid> help (or the process name in place of the pid) lists the available operations that jcmd can run against a particular JVM process, the same help mechanism can be run against any one of those listed commands, with the syntax jcmd <pid> <command_name> help, though I could not get this to work properly on my Windows machine. The next step is actually running such a command against the JVM process rather than simply asking for help on it.

In the two cases just described, I ran jcmd against the pid instead of the process name simply to show that it works against both. One useful command gets the VM flags and command-line options from the JVM process (the pid of this JConsole instance is 3556).

Running jcmd‘s Thread.print command against a supporting JVM process makes easy work of viewing the targeted JVM’s threads. The following output is generated from running jcmd JConsole Thread.print against my running JConsole process.
3556:
2012-10-04 23:39:36
Full thread dump Java HotSpot(TM) Client VM (23.2-b09 mixed mode, sharing):

"TimerQueue" daemon prio=6 tid=0x024bf000 nid=0x1194 waiting on condition [0x069af000]
   java.lang.Thread.State: WAITING (parking)
	at sun.misc.Unsafe.park(Native Method)
	- parking to wait for <0x23cf2db0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
	at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
	at java.util.concurrent.DelayQueue.take(DelayQueue.java:209)
	at javax.swing.TimerQueue.run(TimerQueue.java:171)
	at java.lang.Thread.run(Thread.java:722)

"DestroyJavaVM" prio=6 tid=0x024be400 nid=0x1460 waiting on condition [0x00000000]
   java.lang.Thread.State: RUNNABLE

"AWT-EventQueue-0" prio=6 tid=0x024bdc00 nid=0x169c waiting on condition [0x0525f000]
   java.lang.Thread.State: WAITING (parking)
	at sun.misc.Unsafe.park(Native Method)
	- parking to wait for <0x291a90b0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
	at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
	at java.awt.EventQueue.getNextEvent(EventQueue.java:521)
	at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:213)
	at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:163)
	at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:151)
	at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:147)
	at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:139)
	at java.awt.EventDispatchThread.run(EventDispatchThread.java:97)

"Thread-2" prio=6 tid=0x024bd800 nid=0x4a8 in Object.wait() [0x04bef000]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
	at java.lang.Object.wait(Native Method)
	- waiting on <0x2917ed80> (a java.io.PipedInputStream)
	at java.io.PipedInputStream.read(PipedInputStream.java:327)
	- locked <0x2917ed80> (a java.io.PipedInputStream)
	at java.io.PipedInputStream.read(PipedInputStream.java:378)
	- locked <0x2917ed80> (a java.io.PipedInputStream)
	at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
	at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
	at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
	- locked <0x29184e28> (a java.io.InputStreamReader)
	at java.io.InputStreamReader.read(InputStreamReader.java:184)
	at java.io.BufferedReader.fill(BufferedReader.java:154)
	at java.io.BufferedReader.readLine(BufferedReader.java:317)
	- locked <0x29184e28> (a java.io.InputStreamReader)
	at java.io.BufferedReader.readLine(BufferedReader.java:382)
	at sun.tools.jconsole.OutputViewer$PipeListener.run(OutputViewer.java:109)

"Thread-1" prio=6 tid=0x024bd000 nid=0x17dc in Object.wait() [0x047af000]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
	at java.lang.Object.wait(Native Method)
	- waiting on <0x29184ee8> (a java.io.PipedInputStream)
	at java.io.PipedInputStream.read(PipedInputStream.java:327)
	- locked <0x29184ee8> (a java.io.PipedInputStream)
	at java.io.PipedInputStream.read(PipedInputStream.java:378)
	- locked <0x29184ee8> (a java.io.PipedInputStream)
	at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
	at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
	at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
	- locked <0x2918af80> (a java.io.InputStreamReader)
	at java.io.InputStreamReader.read(InputStreamReader.java:184)
	at java.io.BufferedReader.fill(BufferedReader.java:154)
	at java.io.BufferedReader.readLine(BufferedReader.java:317)
	- locked <0x2918af80> (a java.io.InputStreamReader)
	at java.io.BufferedReader.readLine(BufferedReader.java:382)
	at sun.tools.jconsole.OutputViewer$PipeListener.run(OutputViewer.java:109)

"AWT-Windows" daemon prio=6 tid=0x024bc800 nid=0x16e4 runnable [0x0491f000]
   java.lang.Thread.State: RUNNABLE
	at sun.awt.windows.WToolkit.eventLoop(Native Method)
	at sun.awt.windows.WToolkit.run(WToolkit.java:299)
	at java.lang.Thread.run(Thread.java:722)

"AWT-Shutdown" prio=6 tid=0x024bc400 nid=0x157c in Object.wait() [0x04c6f000]
   java.lang.Thread.State: WAITING (on object monitor)
	at java.lang.Object.wait(Native Method)
	- waiting on <0x2918b098> (a java.lang.Object)
	at java.lang.Object.wait(Object.java:503)
	at sun.awt.AWTAutoShutdown.run(AWTAutoShutdown.java:287)
	- locked <0x2918b098> (a java.lang.Object)
	at java.lang.Thread.run(Thread.java:722)

"Java2D Disposer" daemon prio=10 tid=0x024bbc00 nid=0x3b8 in Object.wait() [0x0482f000]
   java.lang.Thread.State: WAITING (on object monitor)
	at java.lang.Object.wait(Native Method)
	- waiting on <0x2918b128> (a java.lang.ref.ReferenceQueue$Lock)
	at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
	- locked <0x2918b128> (a java.lang.ref.ReferenceQueue$Lock)
	at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
	at sun.java2d.Disposer.run(Disposer.java:145)
	at java.lang.Thread.run(Thread.java:722)

"Service Thread" daemon prio=6 tid=0x024bb800 nid=0x1260 runnable [0x00000000]
   java.lang.Thread.State: RUNNABLE

"C1 CompilerThread0" daemon prio=10 tid=0x024c6400 nid=0x120c waiting on condition [0x00000000]
   java.lang.Thread.State: RUNNABLE

"Attach Listener" daemon prio=10 tid=0x024bb000 nid=0x1278 waiting on condition [0x00000000]
   java.lang.Thread.State: RUNNABLE

"Signal Dispatcher" daemon prio=10 tid=0x024bac00 nid=0xe3c runnable [0x00000000]
   java.lang.Thread.State: RUNNABLE

"Finalizer" daemon prio=8 tid=0x024a9c00 nid=0x15c4 in Object.wait() [0x046df000]
   java.lang.Thread.State: WAITING (on object monitor)
	at java.lang.Object.wait(Native Method)
	- waiting on <0x2918b358> (a java.lang.ref.ReferenceQueue$Lock)
	at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
	- locked <0x2918b358> (a java.lang.ref.ReferenceQueue$Lock)
	at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
	at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:177)

"Reference Handler" daemon prio=10 tid=0x024a4c00 nid=0xe40 in Object.wait() [0x0475f000]
   java.lang.Thread.State: WAITING (on object monitor)
	at java.lang.Object.wait(Native Method)
	- waiting on <0x2917e9c0> (a java.lang.ref.Reference$Lock)
	at java.lang.Object.wait(Object.java:503)
	at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:133)
	- locked <0x2917e9c0> (a java.lang.ref.Reference$Lock)

"VM Thread" prio=10 tid=0x024a3800 nid=0x164c runnable

"VM Periodic Task Thread" prio=10 tid=0x024e7c00 nid=0xcf0 waiting on condition

JNI global references: 563

Larsen showed how to use thread information provided by jcmd to resolve a deadlock. Larsen showed getting a class histogram from the running JVM process with jcmd. This is done using the command jcmd <pid> GC.class_histogram. A very small subset of its output is shown next (the pid of this JConsole process is 4080 this time).

4080:

 num     #instances         #bytes  class name
----------------------------------------------
   1:          1730        3022728  [I
   2:          5579         638168
   3:          5579         447072
   4:           645         340288
   5:          4030         337448  [C
   6:           645         317472
   7:           602         218704
   8:           942         167280  [B
   9:           826          97720  java.lang.Class
  10:          3662          87888  java.lang.String
  11:          2486          79552  javax.swing.text.html.parser.ContentModel
  12:          3220          77280  java.util.Hashtable$Entry
  13:          1180          67168  [S
  14:          2503          60072  java.util.HashMap$Entry
  15:           181          59368
  16:           971          43584  [Ljava.lang.Object;
  17:          1053          41160  [[I
  18:           206          29040  [Ljava.util.HashMap$Entry;
  19:           111          27880  [Ljava.util.Hashtable$Entry;
  20:           781          18744  java.util.concurrent.ConcurrentHashMap$HashEntry
  21:          1069          17104  java.lang.Integer
  22:           213           9816  [Ljava.util.concurrent.ConcurrentHashMap$HashEntry;
  23:           202           9696  java.util.HashMap
  24:           201           9280  [Ljava.lang.String;
  25:            24           8416  [[I

Larsen also demonstrated jstat and several of its useful functions.
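Thread dumps like the one above come from external tools (jstack, or jcmd Thread.print), but a program can capture the same thread information about itself through the standard java.lang.management API. A minimal sketch (my addition, not from Larsen's talk):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class InProcessThreadDump {
    public static void main(String[] args) {
        // Same data jstack/jcmd Thread.print reports, obtained in-process via JMX
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        // false/false: skip monitor and ownable-synchronizer details, which cost extra
        for (ThreadInfo info : threads.dumpAllThreads(false, false)) {
            System.out.printf("\"%s\" id=%d state=%s%n",
                    info.getThreadName(), info.getThreadId(), info.getThreadState());
            for (StackTraceElement frame : info.getStackTrace()) {
                System.out.println("\tat " + frame);
            }
        }
    }
}
```

The same ThreadMXBean can also be reached remotely over a JMX connection, which is how JConsole and VisualVM show their Threads tabs.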
He demonstrated use of jstat -gcnew (new generation behavior), jstat -printcompilation (compilation method statistics), and jstat -options (displays the available options). During the course of his presentation, Larsen needed to convert a decimal number (a pid?) to its hexadecimal representation to compare it against the output of another tool. He used the handy printf '%x\n' <pid> command to get the hexadecimal representation of the pid. Larsen demonstrated use of VisualVM to compare two heap dumps and to browse a heap dump. He also demonstrated the VisualVM Profiler. Larsen moved from the previously covered tools, aimed at running JVMs, to tools that can be used to analyze JVM core files. He returned to jstack to analyze the contents of a core file. Larsen talked about remotely accessing JVM information via JMX and tools like jconsole and jvisualvm. He demonstrated that jcmd can be used to start JMX exposure as well: ManagementServer.start 'with a bunch of parameters.' Larsen feels that VisualVM and JConsole would use ManagementServer.start rather than the Attach API if implemented today. jstat can also connect to a remote daemon through use of jstatd. There is no encryption or authentication with jstatd. jps and jcmd find what's running on a system using a 'well-known file for each JVM': /hsperfdata_<user>/<pid>. This file is created on JVM startup and deleted on JVM shutdown. Unused previous files are deleted on startup, so jps and jcmd, as Java programs themselves, will clean these old ones up. The Attach API 'allows sending "commands" for execution in the JVM,' but only works on the local machine and for the current/same user. This is what jcmd and jstack use. Larsen then went on to explain the different mechanics of using the Attach API on Linux/BSD/Solaris (uses temporary file creation) versus Windows (uses code injection). I employed the Attach API in my post Groovy, JMX, and the Attach API.
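The printf '%x\n' trick is needed because thread dumps report native thread ids in hex (the nid= fields above) while OS tools report them in decimal. The same conversion is a one-liner in Java; a trivial sketch:

```java
public class PidToHex {
    // Converts a decimal pid (or native thread id) to the hex form
    // used by nid= entries in HotSpot thread dumps
    static String toHex(long pid) {
        return "0x" + Long.toHexString(pid);
    }

    public static void main(String[] args) {
        // e.g. the JConsole pid 4080 from the class histogram example
        System.out.println(toHex(4080)); // prints "0xff0"
    }
}
```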
Diagnostic commands are ‘helper routines inside the JVM’ that produce ‘text output.’ They can be executed via the jcmd utility (and soon via JMX). They each have a self-describing facility: jcmd PerfCounter.print to see the raw contents. Larsen showed an informative table comparing ‘communicating with the JVM’ approaches: attach, jvmstat, JMX, jstatd, and Serviceability Agent (SA). The SA ‘should be used as a last resort (‘typically for a JVM that is hung’)’ and uses a ‘debugger to read information.’ Larsen transitioned to talk of future tools. He started this portion of the presentation with coverage of Java Flight Recorder. Java Flight Recorder is a ‘JVM-built-in profiler and tracer’ with ‘low overhead’ and is ‘always on.’ Other coming tools are Java Mission Control (‘graphical tool providing very detailed runtime monitoring details’), more diagnostic commands for jcmd (‘eventually replacing jstack, jmap, jinfo’ for various reasons), JMX 2.0 (‘something we’re picking up again; it was started a very long time ago’), improved logging for JVM (JVM Enhancement Proposal [JEP] 158), and Java Discovery Protocol (anticipating forthcoming JEP for this). One question asked was if one could see MBeans in VisualVM as can be done in JConsole. As I’ve blogged on, there is a VisualVM plug-in for doing just that. Although I felt somewhat comfortable with the Oracle HotSpot JDK command line tools, I was unfamiliar with jcmd and appreciated Larsen’s coverage of it. I learned some other things along the way as well. My only complaint is that Larsen’s presentation (especially the demonstration) was so rapid fire and so content-rich that I wish I could see it again. A related (but older) presentation with some of the same content is available at http://www.oracle.com/javaone/lad-en/session-presentations/corejava/22260-enok-1439100.pdf   Reference: JavaOne 2012: Diagnosing Your Application on the JVM from our JCG partner Dustin Marx at the Inspired by Actual Events blog. ...

Intro to Chef

Chef is an incredible tool, but despite its beginnings in 2008/2009 it still lacks an effective quick start, or even an official "hello world" – so it takes too long to really get started as you desperately search for tutorials, examples and use cases. The existing quick starts and tutorials take too long and fail to explain the scope or what Chef is doing. This is really unfortunate, because the world would be a better place if more people used Chef (or even Puppet) and we didn't have to guess how to configure a server for various applications.

Official Chef explanation

Officially, Chef is a "configuration management tool" in which you write "recipes". It's written in Ruby, and you'll need at least version 1.8.6. Here's the official Chef Quick Start guide, but I think it fails to succinctly present the scope of Chef or even how to use it – hence this document.

Simpler Chef explanation

Put more simply, Chef is a Ruby DSL (domain-specific language) for configuring GNU/Linux (or BSD) machines (Windows is not well supported). It has two flavors, "Chef Server" and "Chef Solo". In this document I'm talking about Chef Solo, because it's easier to get started with – and works well as a complement to Rails apps.

Simplest Chef explanation, and an actual working example

Put simpler yet: Chef is a Ruby script that uses "recipes" (a recipe is a Ruby file that uses the Chef DSL) to install software and run scripts on GNU/Linux servers. You can run Chef over and over again safely, because most recipes know not to, for example, reinstall something that already exists (sometimes you have to code this don't-reinstall behavior yourself, but most of the DSL's built-in resources do it already).
Think of Chef as having 4 components:

A binary/executable (chef-solo), installable via RubyGems:

sudo /usr/bin/gem install chef ohai --no-rdoc --no-ri

This installs the binary at /var/lib/gems/1.8/bin/chef-solo.

One or more Ruby files that Chef calls "recipes", in a structure like this:

~/my_cookbooks/RECIPE_NAME/recipes/default.rb

Vim install recipe example

If we want a recipe for installing Vim, here's one quick and simple way to do it:

~/my_cookbooks/vim/recipes/default.rb

package('vim')

Just duplicate the directory structure I have listed above; in the default.rb file, you only need one line. The package method knows which package management software to use depending on what OS is running, and then leverages it.

Create MySQL DB recipe example

There are a lot of methods available to recipes. Take bash, for example. Pass the bash method a block, and inside the block you can use methods like code (which executes a string of bash commands) and user (which specifies which OS user to run the commands as).
~/my_cookbooks/create_mysql_db/recipes/default.rb

bash "really awesome way to create a mysql database from chef using the bash method" do
  # don't run if the db already exists
  not_if "/usr/bin/mysql -uroot -pmiller_highlife_lol_jk -e 'show databases' | grep #{node[:create_mysql_db][:db_name]}", :user => 'evan'
  # run as the evan user
  user 'evan'
  # a heredoc of the code to execute; note that the node hash is created from the JSON file
  code <<-HEY_BRO_EOM
    mysql -uroot -pmiller_highlife_lol_jk -e 'create database #{node[:create_mysql_db][:db_name]}'
  HEY_BRO_EOM
end

A JSON file with an array of recipes, which you'll point the binary at:

~/my_cookbooks/roles/ottobib.json

{
  "name": "ottobib",
  "run_list": [
    "create_mysql_db",
    "vim"
  ],
  "create_mysql_db": {
    "db_name": "ottobib_production"
  }
}

Finally, a Ruby file with more configuration options:

~/my_cookbooks/chefsoloconfig.rb

file_cache_path '/tmp/chef-solo'
cookbook_path '/home/evan/my_cookbooks'
log_level :info
log_location STDOUT
ssl_verify_mode :verify_none

NOW you can run it over and over again, and your system will end up with Vim and an ottobib_production database. If you want to get CRAZY: add a recipe that checks out the latest copy of your application source code, then set up a cron job to execute your Chef script every minute! Here's what your /home/evan/my_cookbooks dir should look like:

|- chefsoloconfig.rb
|- roles
|    - ottobib.json
|- vim
|    - recipes
|        - default.rb
|- create_mysql_db
|    - recipes
|        - default.rb

THE ACTUAL COMMAND TO RUN CHEF!

sudo /var/lib/gems/1.8/bin/chef-solo -c /home/evan/my_cookbooks/chefsoloconfig.rb -j /home/evan/my_cookbooks/roles/ottobib.json -ldebug

Reference: Intro to Chef from our JCG partner Evan Conkle at the Evan Conkle blog....

Android Tutorial: Enter the DROID World

Well, quite frankly I'm late to the game, but here I am, getting my hands dirty (or wet, or whatever you might call it) in the world of Android. This post will focus on how to set up the Android SDK and the ADT plugin for Eclipse, as well as give an introduction to the structure of a typical Android project using an example. Let's get going (said in a robotic voice, of course)... First of all you need the Android SDK. Download the relevant version for your platform; it currently supports Windows, Linux and Mac. All right, got it done? Awesome, let's see the minimum you need to get started. Note that when you run the installer you will be presented with the following screen; the rows I have marked with an arrow are the minimum elements you need to download in order to get started. Of course, here I have presented my SDK Manager, in which I have installed almost everything. But that takes too much time, and I know all of you do not have much time to spare. So just download the marked elements and let's get this show on the road!!!! Got everything installed? Great, now let's set up our Eclipse platform to start creating awesome Android applications. Note that you require Eclipse 3.6 or higher for the ADT (Android Development Tools) plugin to work. Go to Install New Software and add the location of the ADT plugin, which is http://dl-ssl.google.com/android/eclipse . You only need to download the Developer Tools from the ADT plugin, because you will only need the NDK in a few instances. The NDK is the Native Development Kit, which allows you to program at a lower level using C language specifics. This post will only focus on the Android SDK. Once you get that done, you are ready to roll, my friend. Before that, I would like to mention a few things that are available to you after installing the Android SDK: you will have the SDK Manager and the AVD Manager.
The SDK Manager will show any tools or APIs you need to download, and you can use this tool to upgrade your environment as and when you need. We will get to the AVD Manager when we look at the sample application. In Eclipse, go to New->Other->Android->Android Application Project and follow the steps. Note that in the first screen you will have an option to specify the minimum required SDK. This signifies the minimum Android SDK your application needs to run. Select the option 'Create Activity' and select the blank activity option. Give it a name and finish off the application creation process. Now you will be presented with a structure as follows; let's take a look at what each folder is for.

assets: any property files, databases, text files or the sort which you want to bundle up with your application can be put here. This can have its own folder hierarchy within itself, and you can read those files in the usual way you do file reading in Java.

bin: contains the various files built by the ADT plugin. It will contain the .apk (Android application package file).

gen: this folder mainly contains two files that are compiler generated: R.java and BuildConfig.java. I will explain more about R.java in a bit. It is best not to edit these files, as they are regenerated on each build anyway.

libs: contains the Android jar that exposes the Android APIs required for development. Note that our application uses android-support-v4.jar, which is the support library that allows you to use newer APIs whilst keeping support for older Android operating systems.

res: this folder contains all the resources required by your application, such as images etc. You can categorize them according to various screen resolutions, languages and OS versions. The layout folder will contain the XML files that allow you to define the UI elements specific to your activities.
The values folder allows you to define language entries, in the same way we use .properties files in normal Java applications to support different languages. More information can be found here.

src: contains the source files of your project.

AndroidManifest.xml: the manifest defines the name of the application, the icon to be displayed, the various activities used, the permissions required, etc. The version code is set to '1' initially. This code is used to determine whether your application has an upgrade available or not; best practice is to increment the value on each release. Within the manifest you can see an entry such as android.intent.action.MAIN. This signifies that the activity we just created is the main entry point of the application (like the main method in a Java program).

R.java: this file is automatically generated, and it is recommended that you do not change it manually, because ADT regenerates it whenever you make changes in your project. This file provides access to the resources in your application in a programmatic way, so that you can access your resources in a unified manner.

Let us open the blank activity that we just created. OK, the app here does not do much, but I just wanted to introduce the various elements that make up an Android application and to jump-start development. Within this sample, what I show is how to call another activity from your main activity. Let us first see the XML file pertaining to my main activity.
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent" >

    <Button
        android:id="@+id/button1"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_alignParentLeft="true"
        android:layout_alignParentTop="true"
        android:layout_marginLeft="107dp"
        android:layout_marginTop="134dp"
        android:text="@string/next_activity_btn_name"
        android:onClick="actClick" />

</RelativeLayout>

As you can see, nothing major here: just one button, which I have defined. The name of the button is defined in strings.xml in order to make the application localization friendly. I have also defined onClick functionality. Let us see how the onClick method is implemented in my main activity:

package com.example.droidworld;

import android.os.Bundle;
import android.app.Activity;
import android.content.Intent;
import android.view.Menu;
import android.view.View;

public class DroidMainActivity extends Activity {

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_droid_main);
    }

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        getMenuInflater().inflate(R.menu.activity_droid_main, menu);
        return true;
    }

    public void actClick(View view) {
        startActivity(new Intent("com.example.droidworld.NextActivity"));
    }
}

You can see that the name of the onClick method is the same as what I defined in the XML file, and that the method takes a View as a parameter. Within it I use the startActivity() method, which enables us to call another activity. Any name can be given to the Intent here, as long as it corresponds to the name given in our application's manifest file.
Let us see how we have defined this in our manifest:

<activity
    android:name=".NextActivity"
    android:label="@string/title_activity_next" >
    <intent-filter>
        <action android:name="com.example.droidworld.NextActivity" />
        <category android:name="android.intent.category.DEFAULT" />
    </intent-filter>
</activity>

Within the intent-filter tags, the name given for the android:name attribute of the action element should correspond with the name given to the Intent() constructor in the startActivity method call. android.intent.category.DEFAULT allows another activity to call this activity. You can also get away without defining intent filters if the activity you are going to call is within your own project. In that case you call the activity directly, like this:

startActivity(new Intent(this, NextActivity.class));

One thing to note here is that if you want to expose your activity to other applications, then you need to expose it using intent filters. That about winds up the introduction to the droid world. I myself am pretty new to this, so if you believe some of what I said in this post is invalid or requires changes, please do leave a comment, which is much appreciated. You can download the sample project from here. You just need to do Run->Android Application and you are good to go. Make sure to set up an AVD before running the application. The AVD Manager creates the emulator on which your application is deployed; create an instance by going to Windows->AVD Manager. The rest is intuitive, so I will not go into detail. If you have any issues, please do let me know and I will be glad to help. I will follow up this post with a few other articles to depict various features available. Thank you for reading and have a good day. In the words of the Terminator: 'Hasta la vista.' Happy coding and don't forget to share! Reference: Enter the DROID World!! from our JCG partner Dinuka Arseculeratne at the My Journey Through IT blog....

Warming Up Your JVM – Superfast Production Servers and IDEs

A couple of months ago I was reading up on Complex Event Processing in Java and ways to achieve low latency. At the end of my hour-long research I figured out that even if your application is well written, your methods run mostly in O(log n) time, and you are using some bleeding-edge hardware solutions, there is still some time consumed by the VM during its interpretation of the bytecode. On the good side of things, Java is interpreted and its bytecode is cross-JVM compatible, but we also know that because of this we are bound to lose something somewhere. Our JVM reads the interpreted bytecode and runs it every time. Obviously, this takes time. But it's not that bad, considering our friendly neighbourhood JIT compiler (server or client) watches out for commonly used methods, and when it figures out that a method is called just too many times, it compiles it into native machine code instead of making the JVM rely on bytecode all the time. The number for 'too many times' is configured using the VM argument -XX:CompileThreshold, and the default is 1,500 for the client VM (the server VM defaults to 10,000). One's natural guess would be that reducing the number would mean more methods are converted to native code faster, and that would mean a faster application – but it may not be. Considerably low numbers would mean that the server starts considerably slower because of the time taken by the JIT to compile too many methods (which may not be used that often after all), and since the native machine code resides in memory, your application will be awarded the 'Memory killer' award and die a slow painful death. A little googling shows that numbers around 100 are not that bad. Again, it depends on your application, its usage patterns and traffic. Forgot to mention that the smallest compilation unit to become a JIT native compilation candidate is a method, not a block. So, long-running fat methods – good luck!!! In reality, this JIT compilation does not happen in one go.
It has two neat phases: 1) Every time a method is called, its counter gets increased by 1, and soon after it reaches the threshold, the JIT does its first compilation. 2) After the first compilation, the counter is reset to 0 and incremented again. In this second cycle, when it reaches the threshold, the JIT does a second round of compilation – this time with more aggressive and awesome optimizations (sorry – unable to provide you much detail here). If you are using JDK 7 and your machine runs on multi-core (I don't see why not), then you could use the following flags to speed up your native compilation process:

-server -XX:+TieredCompilation

I can't claim to be an expert in JVM tuning, considering the amount of options available. So, please leave your comments if you find this useful or incorrect. Don't forget to share! Reference: Warming Up Your JVM – Superfast Production Servers and IDEs from our JCG partner Arun Manivannan at the Rerun.me blog....
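To see the warm-up effect for yourself, here is a toy sketch (my addition, not from the article): the same small method is timed once early and once after a burst of calls that pushes its invocation counter past the compile threshold. Run it with -XX:+PrintCompilation to watch the method get compiled; absolute numbers will vary by machine and JVM flags.

```java
public class WarmupDemo {
    // A deliberately small, hot method -- the kind of unit the JIT compiles
    static long sumOfSquares(int n) {
        long total = 0;
        for (int i = 1; i <= n; i++) {
            total += (long) i * i;
        }
        return total;
    }

    // Times a fixed batch of 100 calls in nanoseconds
    static long timeBatch() {
        long start = System.nanoTime();
        for (int i = 0; i < 100; i++) {
            sumOfSquares(1_000);
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        long cold = timeBatch();            // mostly interpreted
        for (int i = 0; i < 20_000; i++) {
            sumOfSquares(1_000);            // drive the counter past CompileThreshold
        }
        long warm = timeBatch();            // usually compiled by now
        System.out.println("cold batch: " + cold + " ns, warm batch: " + warm + " ns");
    }
}
```

On most HotSpot builds the second number comes out noticeably smaller once the method has been compiled, though nothing guarantees it on a given run.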

JavaOne 2012: Up, Up, and Out: Scaling Software with Akka

After the late-ending Community Keynote, I headed to Hilton Golden Gate 3/4/5 to see Viktor Klang's (Typesafe) 'Up, Up and Out: Akka' presentation. Klang is the technical lead on Akka. Akka is a 'beautiful mountain in northern Sweden,' is a Goddess, and is a Scala-based 'toolkit and runtime for building highly concurrent, distributed, and fault tolerant event-driven applications on the JVM.' Akka is not exclusive to Scala and can be used 'from Java today,' although using Akka with Scala does allow you to do some things you cannot do with Akka from Java. Akka is used by many large companies to solve real problems. Akka is built to scale in many directions and to provide extreme flexibility. One of Akka's goals is to 'manage system overload.' Akka uses Actors: 'Akka's unit of code organization.' According to Klang, 'Actors help you create concurrent, scalable, and fault-tolerant applications.' Actors 'keep many "policy decisions" separate from the business logic.' Actors originated in the 1970s ('like all cool stuff in computer science'), and Erlang has been using actors with 'great success' for several years. Klang warned to avoid 'thinking in terms of shared state, a leaky abstraction.' He added that threads and locks are 'a means of execution' rather than structural, and said this mixes execution with business logic. Concurrent collections are good optimizations for local uses. Actors are 'distributable by design', and Klang had a slide listing several bullets explaining this statement. He stated that an actor can be an alternative to 'a thread,' 'an object instance,' 'a callback or listener,' 'a singleton or service,' 'a router, load-balancer, or pool,' 'a Java EE session bean or Message-Driven Bean,' 'an out-of-process service,' and 'finite state machines.' Klang referenced a video of Carl Hewitt on actors.
An actor is a ‘fundamental unit of computation that embodies several key characteristics.’ Klang showed code examples in my preferred format in a presentation: embedded in his slides with color syntax highlighting. He showed step 0 (‘DEFINE’) in which his code defined the Actor’s class and the Actor’s behavior. Once defined, the first operation (I – ‘CREATE’) ‘creates a new instance of an Actor.’ The created Actor is extremely lightweight and its ‘state and behavior are indistinguishable from each other.’ He drove this last point home: ‘The only way to observe state is by sending an actor a message and seeing how the actor reacts.’ The Actor is ‘very strong encapsulation’ of state, behavior, and message queue. Akka provides an ActorSystem for creating Akka Actor instances. An instance of Props is provided to the Actor because actors need props. Step 2 (‘SEND’) involves ‘sending a message to an Actor’ and ‘everything happens reactively’ and ‘everything is asynchronous and lockless.’ Akka supports a ‘fire and forget’ mode with the actor’s tell method. However, Akka provides guaranteed order of delivery. The reply is implemented in Akka with getSender().tell(). Step 3 (‘BECOME’) ‘redefines the Actor’s behavior’ and is ‘triggered reactively by receipt of message.’ The reasons one might want to change the behavior of an actor at runtime include supporting highly contended actor transforming to an actor pool or to implement graceful degradation. Actors can supervise other Actors, leading to Step 4 (‘SUPERVISE’). 
A 'supervisor detects and responds to the failures of the Actor(s) it supervises,' and Klang stated that this translates to 'a clean separation of processing and error handling.' Klang talked about 'failure management in Java, C, and C#,' where you are 'given a single thread of control.' He put it this way in a bullet: 'If this thread blows up, you are screwed.' The implication of this is that all 'explicit error handling' is done 'within the single thread' and 'tangled up' with the business code. Klang said the way to deal with error handling is to push the error handling out, away from the business logic. He then referenced the onion-layer error kernel. Klang talked about callbacks (preRestart and postRestart) provided for Actors to handle failures. A Router is a special case of Actor. Klang showed a slide with code using a RoundRobinRouter. He also showed being able to define the deployment scenario outside of the code in a configuration file and referencing that from the code with a path. He took this example even further to show code for 'remote deployment,' specifying a URL with the 'akka' protocol, a host name, and a port. Everything that Klang presented to this point is available today as Akka 2.0. Klang said that there will be Akka Cluster in the to-be-released-soon Akka 2.1. He asked for feedback to ensure that the correct APIs and correct functionality are available for clustering in Akka 2.2. More information on Akka clustering is available in the specification, the user guide, and the code itself. Akka 2.1 will also feature Akka Camel, based on Apache Camel. The Typesafe Console is also available to monitor an Akka application, and there is a live demo of this available. Reference: JavaOne 2012: Up, Up, and Out: Scaling Software with Akka from our JCG partner Dustin Marx at the Inspired by Actual Events blog....
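Akka itself is an external dependency, so as a stand-in here is a deliberately naive, plain-Java toy that illustrates the core ideas Klang described (strong encapsulation of state, behavior, and mailbox; fire-and-forget tell; in-order message processing). This is my illustration of the concept, not Akka code or the Akka API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ToyActor {
    private static final String STOP = "__stop__";             // poison pill
    private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();
    private final List<String> processed = new ArrayList<>();  // state touched only by the actor's own thread
    private final Thread runner = new Thread(this::run);

    void start() { runner.start(); }

    // "tell": fire-and-forget -- the sender enqueues and never blocks on processing
    void tell(String message) { mailbox.add(message); }

    void stopAndJoin() throws InterruptedException {
        mailbox.add(STOP);
        runner.join();   // join gives a safe happens-before edge for reading state afterwards
    }

    private void run() {
        try {
            while (true) {
                // One message at a time: no locks needed to protect the actor's state
                String message = mailbox.take();
                if (message.equals(STOP)) return;
                processed.add(message);   // the "behavior": just record what we saw
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    List<String> processed() { return processed; }

    public static void main(String[] args) throws InterruptedException {
        ToyActor actor = new ToyActor();
        actor.start();
        actor.tell("a");
        actor.tell("b");
        actor.tell("c");
        actor.stopAndJoin();
        System.out.println(actor.processed()); // prints [a, b, c]
    }
}
```

Real Akka provides what this toy lacks: millions of lightweight actors multiplexed over a few threads, supervision hierarchies, routers, and remoting.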
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.