


Native vs Green threads

Understanding a multi-threaded program has always been a wild goose chase for many programmers; there are many aspects to consider when writing one.

Green threads vs native threads

The difference between green threads and native threads is something that programmers may be unaware of. Both are mechanisms for achieving a ‘multi-threaded program’, and it is the compiler / interpreter variant that usually implements green or native threads. Native threads use the operating system’s threading capability to execute multi-threaded programs; green threads, on the other hand, emulate multi-threading without using the underlying capabilities of the OS. Green threading is also known as ‘cooperative threading’, where the processes co-operate with each other to emulate a multi-threaded environment.

Execution on multi-core machines

Another advantage of native threads is that multiple threads can execute in parallel on a multi-core machine. Green threads, on the other hand, are executed on a single core the entire time, and it is the VM that gives the notion of multi-threading. So with green threads there is actually only a single thread executing at any moment.

Process scheduling

Native threads use the OS scheduling algorithm; modern OSes support pre-emptive scheduling. Green threads can use any kind of scheduling algorithm.

Synchronization and resource sharing

Native threads usually make synchronization and resource sharing more complicated: multiple threads require kernel-level locks for synchronization. Synchronization is much easier with green threads.

Reference: Native vs Green threads from our JCG partner George Janeve at the Janeve.Me blog....
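To make the native-thread side of the comparison concrete, here is a minimal Java sketch (the class name and thread count are my own illustration, not from the article). On modern JVMs each java.lang.Thread is backed by a native OS thread, so the increments below may genuinely run in parallel on a multi-core machine:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Each java.lang.Thread below maps to a native OS thread, so the OS
// scheduler may run the increments in parallel on separate cores;
// join() blocks until each native thread has terminated.
public class NativeThreadDemo {

    static int runCounter(int nThreads) throws InterruptedException {
        AtomicInteger count = new AtomicInteger();
        Thread[] threads = new Thread[nThreads];
        for (int i = 0; i < nThreads; i++) {
            threads[i] = new Thread(count::incrementAndGet);
            threads[i].start(); // hands the thread to the OS scheduler
        }
        for (Thread t : threads) {
            t.join(); // wait for the native thread to finish
        }
        return count.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runCounter(4)); // prints 4
    }
}
```

A green-thread runtime would schedule the same runnables itself, on one OS thread, which is why only one of them can ever be executing at a time.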

5′ on IT-Architecture: root concepts explained by the pioneers of software architecture

For the last couple of weeks I have been working on a new software architecture course specifically for the insurance and financial sector. During the preparations I was reading many of the most cited articles on software architecture. The concepts described in these articles are so fundamental (and still up-to-date) that every architect really should know about them. I have enjoyed reading such ‘old’ stuff. I first read most of the cited articles during my studies at university in the mid 90s. It is surprising to realize that, the longer you’re in this business, the more you agree with the ideas explained – in articles that were written 40 years ago! I’ve decided to quote the original text passages – maybe I thought it would be overbearing to explain it in my own words ;-) I hope you enjoy reading these text passages from the pioneers of software architecture.

On the criteria for system decomposition

‘Many readers will now see what criteria were used in each decomposition. In the first decomposition the criterion used was to make each major step in the processing a module. One might say that to get the first decomposition one makes a flowchart. This is the most common approach to decomposition or modularization. It is an outgrowth of all programmer training which teaches us that we should begin with a rough flowchart and move from there to a detailed implementation. The flowchart was a useful abstraction for systems with on the order of 5,000-10,000 instructions, but as we move beyond that it does not appear to be sufficient; something additional is needed. The second decomposition was made using ‘information hiding’ as a criterion. The modules no longer correspond to steps in the processing. [...] Every module in the second decomposition is characterized by its knowledge of a design decision which it hides from all others. 
Its interface or definition was chosen to reveal as little as possible about its inner workings.’ in: On the Criteria To Be Used in Decomposing Systems into Modules, D.L. Parnas, 1972 On the information hiding design principle ‘Our module structure is based on the decomposition criterion known as information hiding [IH]. According to this principle, system details that are likely to change independently should be the secrets of separate modules; the only assumptions that should appear in the interfaces between modules are those that are considered unlikely to change. Each data structure is used in only one module; it may be directly accessed by one or more programs within the module but not by programs outside the module. Any other program that requires information stored in a module’s data structures must obtain it by calling access programs belonging to that module. Applying this principle is not always easy. It is an attempt to minimize the expected cost of software and requires that the designer estimate the likelihood of changes. Such estimates are based on past experience, and may require knowledge of the application area, as well as an understanding of hardware and software technology.’ in: The Modular Structure of Complex Systems, D.L. Parnas, 1985 On module hierarchies ‘In discussions of system structure it is easy to confuse the benefits of a good decomposition with those of a hierarchical structure. We have a hierarchical structure if a certain relation may be defined between the modules or programs and that relation is a partial ordering. The relation we are concerned with is ‘uses’ or ‘depends upon’. [...] The partial ordering gives us two additional benefits. First, parts of the system are benefited (simplified) because they use the services of lower levels. Second, we are able to cut off the upper levels and still have a usable and useful product. [...] 
The existence of the hierarchical structure assures us that we can ‘prune’ off the upper levels of the tree and start a new tree on the old trunk. If we had designed a system in which the ‘low level’ modules made some use of the ‘high level’ modules, we would not have the hierarchy, we would find it much harder to remove portions of the system, and ‘level’ would not have much meaning in the system.’ in: On the Criteria To Be Used in Decomposing Systems into Modules, D.L. Parnas, 1972 On the separation of concerns ‘Let me try to explain to you, what to my taste is characteristic for all intelligent thinking. It is, that one is willing to study in depth an aspect of one’s subject matter in isolation for the sake of its own consistency, all the time knowing that one is occupying oneself only with one of the aspects. We know that a program must be correct and we can study it from that viewpoint only; we also know that it should be efficient and we can study its efficiency on another day, so to speak. In another mood we may ask ourselves whether, and if so: why, the program is desirable. But nothing is gained —on the contrary!— by tackling these various aspects simultaneously. It is what I sometimes have called ‘the separation of concerns’, which, even if not perfectly possible, is yet the only available technique for effective ordering of one’s thoughts, that I know of. This is what I mean by ‘focussing one’s attention upon some aspect’: it does not mean ignoring the other aspects, it is just doing justice to the fact that from this aspect’s point of view, the other is irrelevant. It is being one- and multiple-track minded simultaneously.’ in: On the role of scientific thought, Edsger W. Dijkstra, 1974 On conceptual integrity ‘Such design coherence in a tool not only delights, it also yields ease of learning and ease of use. The tool does what one expects it to do. I argued [...] that conceptual integrity is the most important consideration in system design. 
Sometimes the virtue is called coherence, sometimes consistency, sometimes uniformity of style [...] The solo designer or artist usually produces works with this integrity subconsciously; he tends to make each microdecision the same way each time he encounters it (barring strong reasons). If he fails to produce such integrity, we consider the work flawed, not great.’ in: The Design of Design, Frederick P. Brooks, 2010 (originally introduced in: The Mythical Man Month, 1975) Reference: 5′ on IT-Architecture: root concepts explained by the pioneers of software architecture from our JCG partner Niklas....
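Parnas’s information-hiding criterion, quoted above, is easy to picture in modern code. A hypothetical sketch (the class and its methods are my illustration, not from the papers): the module’s secret is its internal representation, and other programs may only use its access programs, so the representation can change without touching any caller.

```java
import java.util.Arrays;

// The module's secret is how lines are stored (a plain array that grows
// on demand). Callers use only the access programs add(), get() and
// count(); the hidden design decision can change without affecting them.
public class LineStorage {
    private String[] lines = new String[4]; // hidden representation
    private int size = 0;

    public void add(String line) {
        if (size == lines.length) {
            lines = Arrays.copyOf(lines, size * 2); // invisible to callers
        }
        lines[size++] = line;
    }

    public String get(int index) {
        return lines[index];
    }

    public int count() {
        return size;
    }
}
```

Swapping the array for a linked list or a file-backed store would change only this module – which is exactly the ‘likely to change independently’ test Parnas describes.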

Getting Started with Spring Social – Part 2

A few weeks ago I wrote a post demonstrating what I thought was the simplest application you could write using Spring Social. That application read and displayed a Twitter user’s public data and was written as an introduction to Spring Social and the social coding arena. However, getting your application to display your user’s public data is only half the story, and most of the time you’ll need to display your user’s private data. In this blog, I’m going to cover the scenario where you have a requirement to display a user’s Facebook or other Software as a Service (SaaS) provider data on one or two pages of your application. The idea here is to demonstrate the smallest and simplest thing you can do to add Spring Social to an application that requires your user to log in to Facebook or another SaaS provider. Creating the App To create the application, the first step is to create a basic Spring MVC Project using the template section of the SpringSource Toolkit Dashboard. This provides a webapp that’ll get you started. 
The next step is to set up the pom.xml by adding the following dependencies:

<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-crypto</artifactId>
    <version>${org.springframework.security.crypto-version}</version>
</dependency>
<!-- Spring Social -->
<dependency>
    <groupId>org.springframework.social</groupId>
    <artifactId>spring-social-core</artifactId>
    <version>${spring-social.version}</version>
</dependency>
<dependency>
    <groupId>org.springframework.social</groupId>
    <artifactId>spring-social-web</artifactId>
    <version>${spring-social.version}</version>
</dependency>
<!-- Facebook API -->
<dependency>
    <groupId>org.springframework.social</groupId>
    <artifactId>spring-social-facebook</artifactId>
    <version>${org.springframework.social-facebook-version}</version>
</dependency>
<!-- JdbcUserConfiguration -->
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-jdbc</artifactId>
    <version>${org.springframework-version}</version>
</dependency>
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>1.3.159</version>
</dependency>
<!-- CGLIB, only required and used for @Configuration usage: could be removed in future release of Spring -->
<dependency>
    <groupId>cglib</groupId>
    <artifactId>cglib-nodep</artifactId>
    <version>2.2</version>
</dependency>

…obviously you’ll also need to add the following to the <properties/> section of the file:

<spring-social.version>1.0.2.RELEASE</spring-social.version>
<org.springframework.social-facebook-version>1.0.1.RELEASE</org.springframework.social-facebook-version>
<org.springframework.security.crypto-version>3.1.0.RELEASE</org.springframework.security.crypto-version>

You’ll notice that I’ve added a specific pom entry for spring-security-crypto: this is because I’m using Spring 3.0.6. In Spring 3.1.x, this has become part of the core libraries. The only other point to note is that there are also dependencies on spring-jdbc and h2. 
This is because Spring’s default UserConnectionRepository implementation, JdbcUsersConnectionRepository, uses them, and hence they’re required even though this app doesn’t persist anything to a database (so far as I can tell). The Classes The social coding functionality consists of four classes (one of which I’ve pinched from Keith Donald’s Spring Social Quick Start sample code): FacebookPostsController, SocialContext, FacebookConfig and UserCookieGenerator. FacebookPostsController is the business end of the application, responsible for getting hold of the user’s Facebook data and pushing it into the model ready for display.

@Controller
public class FacebookPostsController {

    private static final Logger logger = LoggerFactory.getLogger(FacebookPostsController.class);

    private final SocialContext socialContext;

    @Autowired
    public FacebookPostsController(SocialContext socialContext) {
        this.socialContext = socialContext;
    }

    @RequestMapping(value = "posts", method = RequestMethod.GET)
    public String showPostsForUser(HttpServletRequest request, HttpServletResponse response, Model model)
            throws Exception {
        String nextView;
        if (socialContext.isSignedIn(request, response)) {
            List<Post> posts = retrievePosts();
            model.addAttribute("posts", posts);
            nextView = "show-posts";
        } else {
            nextView = "signin";
        }
        return nextView;
    }

    private List<Post> retrievePosts() {
        Facebook facebook = socialContext.getFacebook();
        FeedOperations feedOps = facebook.feedOperations();
        List<Post> posts = feedOps.getHomeFeed();
        logger.info("Retrieved " + posts.size() + " posts from the Facebook authenticated user");
        return posts;
    }
}

As you can see, from a high-level viewpoint the logic of what we’re trying to achieve is pretty simple:

IF user is signed in THEN
    read Facebook data
    display Facebook data
ELSE
    ask user to sign in
    when user has signed in, go back to the beginning
END IF

The FacebookPostsController delegates the task of handling the sign-in logic to the SocialContext class. 
You can probably guess that I got the idea for this class from Spring’s really useful ApplicationContext. The idea here is that there is one class that’s responsible for gluing your application to Spring Social.

public class SocialContext implements ConnectionSignUp, SignInAdapter {

    /**
     * Use a random number generator to generate IDs to avoid cookie clashes
     * between server restarts
     */
    private static Random rand;

    /**
     * Manage cookies - use cookies to remember state between calls to the
     * server(s)
     */
    private final UserCookieGenerator userCookieGenerator;

    /** Store the user id between calls to the server */
    private static final ThreadLocal<String> currentUser = new ThreadLocal<String>();

    private final UsersConnectionRepository connectionRepository;

    private final Facebook facebook;

    public SocialContext(UsersConnectionRepository connectionRepository, UserCookieGenerator userCookieGenerator,
            Facebook facebook) {
        this.connectionRepository = connectionRepository;
        this.userCookieGenerator = userCookieGenerator;
        this.facebook = facebook;
        rand = new Random(Calendar.getInstance().getTimeInMillis());
    }

    @Override
    public String signIn(String userId, Connection<?> connection, NativeWebRequest request) {
        userCookieGenerator.addCookie(userId, request.getNativeResponse(HttpServletResponse.class));
        return null;
    }

    @Override
    public String execute(Connection<?> connection) {
        return Long.toString(rand.nextLong());
    }

    public boolean isSignedIn(HttpServletRequest request, HttpServletResponse response) {
        boolean retVal = false;
        String userId = userCookieGenerator.readCookieValue(request);
        if (isValidId(userId)) {
            if (isConnectedFacebookUser(userId)) {
                retVal = true;
            } else {
                userCookieGenerator.removeCookie(response);
            }
        }
        currentUser.set(userId);
        return retVal;
    }

    private boolean isValidId(String id) {
        return isNotNull(id) && (id.length() > 0);
    }

    private boolean isNotNull(Object obj) {
        return obj != null;
    }

    private boolean isConnectedFacebookUser(String userId) {
        ConnectionRepository connectionRepo = connectionRepository.createConnectionRepository(userId);
        Connection<Facebook> facebookConnection = connectionRepo.findPrimaryConnection(Facebook.class);
        return facebookConnection != null;
    }

    public String getUserId() {
        return currentUser.get();
    }

    public Facebook getFacebook() {
        return facebook;
    }
}

SocialContext implements Spring Social’s ConnectionSignUp and SignInAdapter interfaces. It contains three methods: isSignedIn(), signIn() and execute(). isSignedIn() is called by the FacebookPostsController class to implement the logic above, whilst signIn() and execute() are called by Spring Social. From my previous blogs you’ll remember that OAuth requires lots of trips between the browser, your app and the SaaS provider. In making these trips the application needs to save the state of several OAuth arguments such as client_id, redirect_uri and others. Spring Social hides all this complexity from your application by mapping the state of the OAuth conversation to one variable that your webapp controls. This is the userId; however, don’t think of this as a user name, because it’s never seen by the user; it’s just a unique identifier that links a number of HTTP requests to a SaaS provider connection (such as Facebook) in the Spring Social core. Because of its simplicity, I’ve followed Keith Donald’s idea of using cookies to pass the user id between the browser and the server in order to maintain state. I’ve also borrowed his UserCookieGenerator class from the Spring Social Quick Start to help me along. The isSignedIn(...) method uses UserCookieGenerator to figure out if the HttpServletRequest object contains a cookie that contains a valid user id. If it does, then it also figures out whether Spring Social’s UsersConnectionRepository contains a ConnectionRepository linked to the same user id. If both of these tests return true then the application will request and display the user’s Facebook data. If one of the two tests returns false, then the user will be asked to sign in. 
SocialContext has been written specifically for this sample and contains just enough functionality to demonstrate what I’m talking about in this blog. This means that it’s currently a little rough and ready, though it could be improved to cover connections to any / many providers and then reused in different applications. The final class to mention is FacebookConfig, which is loosely based upon the Spring Social sample code. There are two main differences between this code and the sample code, the first being that the FacebookConfig class implements the InitializingBean interface. This is so that the usersConnectionRepositiory variable can be injected into the socialContext and, in turn, the socialContext can be injected into the usersConnectionRepositiory as its ConnectionSignUp implementation. The second difference is that I’m implementing a providerSignInController(...) method to provide a correctly configured ProviderSignInController object to be used by Spring Social to sign in to Facebook. The only change to the default I’ve made here is to set the ProviderSignInController’s postSignInUrl property to “/posts”. This is the url of the page that will contain the user’s Facebook data and will be called once the user sign-in is complete. 
@Configuration
public class FacebookConfig implements InitializingBean {

    private static final Logger logger = LoggerFactory.getLogger(FacebookConfig.class);

    private static final String appId = "439291719425239";
    private static final String appSecret = "65646c3846ab46f0b44d73bb26087f06";

    private SocialContext socialContext;

    private UsersConnectionRepository usersConnectionRepositiory;

    @Inject
    private DataSource dataSource;

    /**
     * Point to note: the name of the bean is either the name of the method
     * 'socialContext' or can be set by an attribute
     *
     * @Bean(name='myBean')
     */
    @Bean
    public SocialContext socialContext() {
        return socialContext;
    }

    @Bean
    public ConnectionFactoryLocator connectionFactoryLocator() {
        logger.info("getting connectionFactoryLocator");
        ConnectionFactoryRegistry registry = new ConnectionFactoryRegistry();
        registry.addConnectionFactory(new FacebookConnectionFactory(appId, appSecret));
        return registry;
    }

    /**
     * Singleton data access object providing access to connections across all
     * users.
     */
    @Bean
    public UsersConnectionRepository usersConnectionRepository() {
        return usersConnectionRepositiory;
    }

    /**
     * Request-scoped data access object providing access to the current user's
     * connections.
     */
    @Bean
    @Scope(value = "request", proxyMode = ScopedProxyMode.INTERFACES)
    public ConnectionRepository connectionRepository() {
        String userId = socialContext.getUserId();
        logger.info("Creating ConnectionRepository for user: " + userId);
        return usersConnectionRepository().createConnectionRepository(userId);
    }

    /**
     * A proxy to a request-scoped object representing the current user's
     * primary Facebook account.
     *
     * @throws NotConnectedException
     *             if the user is not connected to facebook.
     */
    @Bean
    @Scope(value = "request", proxyMode = ScopedProxyMode.INTERFACES)
    public Facebook facebook() {
        return connectionRepository().getPrimaryConnection(Facebook.class).getApi();
    }

    /**
     * Create the ProviderSignInController that handles the OAuth2 stuff and
     * tell it to redirect back to /posts once sign in has completed
     */
    @Bean
    public ProviderSignInController providerSignInController() {
        ProviderSignInController providerSigninController = new ProviderSignInController(connectionFactoryLocator(),
                usersConnectionRepository(), socialContext);
        providerSigninController.setPostSignInUrl("/posts");
        return providerSigninController;
    }

    @Override
    public void afterPropertiesSet() throws Exception {
        JdbcUsersConnectionRepository usersConnectionRepositiory = new JdbcUsersConnectionRepository(dataSource,
                connectionFactoryLocator(), Encryptors.noOpText());
        socialContext = new SocialContext(usersConnectionRepositiory, new UserCookieGenerator(), facebook());
        usersConnectionRepositiory.setConnectionSignUp(socialContext);
        this.usersConnectionRepositiory = usersConnectionRepositiory;
    }
}

Application Flow If you run this application [2] you’re first presented with the home screen containing a simple link inviting you to display your posts. The first time you click on this link, you’re redirected to the /signin page. Pressing the ‘sign in’ button tells the ProviderSignInController to contact Facebook. Once authentication is complete, the ProviderSignInController directs the app back to the /posts page, and this time it displays the Facebook data. Configuration For completeness, I thought that I should mention the XML config, although there’s not much of it because I’m using the Spring annotation @Configuration on the FacebookConfig class. I have imported “data.xml” from the Spring Social samples so that JdbcUsersConnectionRepository works, and added

<context:component-scan base-package='com.captaindebug.social' />

…for autowiring. Summary Although this sample app is based upon connecting your app to your user’s Facebook data, it can be easily modified to use any of the Spring Social client modules. If you like a challenge, try implementing Sina-Weibo, where everything’s in Chinese – it’s a challenge, but Google Translate comes in really useful. 
[1] Spring Social and Other OAuth Blogs: Getting Started with Spring Social; Facebook and Twitter: Behind the Scenes; The OAuth Administration Steps; OAuth 2.0 Webapp Flow Overview.
[2] The code is available on Github at: https://github.com/roghughe/captaindebug.git
Reference: Getting Started with Spring Social – Part 2 from our JCG partner Roger Hughes at the Captain Debug’s Blog blog....

MongoDB in 30 minutes

I have recently been bitten by the NoSQL bug – or, as a colleague of mine, Mark Atwell, coined it, the “Burn the Where!” movement. While I have no intention of shunning the friendly “SELECT … WHERE” anytime soon – or in the foreseeable future, I did manage to get my hands dirty with some code. In this article I share the knowledge of my first attempts in the NoSQL world. We will aim to get a basic java application up and running that is capable of interacting with MongoDB. By itself that is not really a tall task, and perhaps you could get that done in under 10 minutes. Click here or click here or click here, for some excellent material. However, I wanted to push it a bit further. I want to add ORM support; I have chosen Morphia for this article. I also want to add the DAO pattern, unit testing, and logging. In short, I want it to feel “almost like” the kind of code that most of us would have written for enterprise applications using, let’s say, Java, Hibernate and Oracle. And we will try to do all this in under 30 minutes. My intention is to give a reassurance to Java + RDBMS developers that Java + NoSQL is not very alien. It is similar code and easy to try out. It is perhaps pertinent to add at this point that I have no affinity to NoSQL and no issues with RDBMS. I believe they both have their own uses (click here for some excellent material). As a technologist, I just like knowing my tools better so that I can do justice to my profession. This article is solely aimed at helping like-minded people dip their toes in NoSQL. Ok, enough talk. Let’s get started. You will need a few pieces of software / tools before you follow this article. Download and install the MongoDB server, if you have not already done so. I am assuming you have Java, some Java IDE and a build and release tool. I am using JDK 7, Eclipse (STS) and Maven 3 for this article. I start by creating a vanilla standard java application using Maven. I like using a batch file for this. 
File: codeRepo\MavenCommands.bat

ECHO OFF
REM =============================
REM Set the env. variables.
REM =============================
SET PATH=%PATH%;C:\ProgramFiles\apache-maven-3.0.3\bin;
SET JAVA_HOME=C:\ProgramFiles\Java\jdk1.7.0

REM =============================
REM Create a simple java application.
REM =============================
call mvn archetype:create ^
    -DarchetypeGroupId=org.apache.maven.archetypes ^
    -DgroupId=org.academy ^
    -DartifactId=dbLayer002

pause

Import it in Eclipse. Use Maven to compile and run just to be sure: mvn -e clean install. This should compile the code and run the default tests as well. Once you have crossed this hurdle, let’s get down to some coding. Let’s create an Entity object to start with. I think a Person class with an fname field would serve our purpose. I acknowledge that it is trivial, but it does the job for a tutorial.

File: /dbLayer002/src/main/java/org/academy/entity/Person.java

package org.academy.entity;

public class Person {
    private String fname;
    [...]
}

I have not mentioned the getters and setters for brevity. Now let us get the MongoDB java driver and attach it to the application. I like to have Maven do this bit for me. You could obviously do this bit manually as well. Your choice. I am just lazy.

File: /dbLayer002/pom.xml

[...]
<!-- MongoDB java driver to hook up to MongoDB server -->
<dependency>
    <groupId>org.mongodb</groupId>
    <artifactId>mongo-java-driver</artifactId>
    <version>2.7.3</version>
</dependency>
[...]

This will allow us to write a util class for connecting to the MongoDB server instance. I am assuming you have a server up and running on your machine with default settings. 
File: /dbLayer002/src/main/java/org/academy/util/MongoUtil.java

public class MongoUtil {

    private final static Logger logger = LoggerFactory.getLogger(MongoUtil.class);

    private static final int port = 27017;
    private static final String host = "localhost";
    private static Mongo mongo = null;

    public static Mongo getMongo() {
        if (mongo == null) {
            try {
                mongo = new Mongo(host, port);
                logger.debug("New Mongo created with [" + host + "] and [" + port + "]");
            } catch (UnknownHostException | MongoException e) {
                logger.error(e.getMessage());
            }
        }
        return mongo;
    }
}

We will need a logger set up in our application for this class to compile. Click here for my article around logging. All we need to do is to hook up Maven with the correct dependencies.

File: /dbLayer002/pom.xml

[...]
<slf4j.version>1.6.1</slf4j.version>
[...]
<!-- Logging -->
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>${slf4j.version}</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>jcl-over-slf4j</artifactId>
    <version>${slf4j.version}</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>${slf4j.version}</version>
    <scope>runtime</scope>
</dependency>

And we will also need to specify exactly what to log.

File: /dbLayer002/src/main/resources/log4j.properties

# Set root logger level to DEBUG and its only appender to A1.
log4j.rootLogger=DEBUG, A1

# configure A1 to spit out data in console
log4j.appender.A1=org.apache.log4j.ConsoleAppender
log4j.appender.A1.layout=org.apache.log4j.PatternLayout
log4j.appender.A1.layout.ConversionPattern=%d [%t] %-5p %c - %m%n

At this point, you already have an Entity class and a utility class to hook up to the database. Ideally I would go about writing a DAO and then somehow use the ORM to join up the DAO with the database. However, Morphia has some excellent DAO support. 
Also it has some annotations to tie up the Entity with database elements. So, although I would have liked the Entity and DAO to be totally agnostic of the ORM and database, that is not the case here. On the face of it, it sounds like Morphia or MongoDB is forcing me to deviate from good code structure, but let me hasten to add that it is not any worse than other ORMs; e.g. Hibernate with annotations would also have forced me to make the same kind of compromise. So, let’s bring Morphia into the picture. Enter the stage our ever helpful tool, Maven. A bit of an avoidable hitch here: when I was writing this document I could not get the version of Morphia that I wanted in the central Maven repository, so I had to configure Maven to use the Morphia repository as well. Hopefully this is just a temporary situation.

File: /dbLayer002/pom.xml

[...]
<!-- Required for Morphia -->
<repositories>
    <repository>
        <id>Morphia repository</id>
        <url>http://morphia.googlecode.com/svn/mavenrepo/</url>
    </repository>
</repositories>
[...]
<!-- Morphia - ORM for MongoDB -->
<dependency>
    <groupId>com.google.code.morphia</groupId>
    <artifactId>morphia</artifactId>
    <version>0.98</version>
</dependency>

As I mentioned above, Morphia allows us to annotate the Entity class (much like Hibernate annotations). So, here is how the updated Entity class looks.

File: /dbLayer002/src/main/java/org/academy/entity/Person.java

[...]
@Entity
public class Person {
    @Id
    private ObjectId id;
    [...]

And now we can also add a DAO layer and lean on the BasicDAO that Morphia provides.

File: /dbLayer002/src/main/java/org/academy/dao/PersonDAO.java

public class PersonDAO extends BasicDAO<Person, ObjectId> {

    public PersonDAO(Mongo mongo, Morphia morphia, String dbName) {
        super(mongo, morphia, dbName);
    }
    [...]

The beauty of the BasicDAO is that, just by extending it, my own DAO class already has enough functionality to do the basic CRUD operations, although I have only added a constructor. Don’t believe it? 
Ok, let’s write a test for that.

File: /dbLayer002/src/test/java/org/academy/dao/PersonDAOTest.java

public class PersonDAOTest {

    private final static Logger logger = LoggerFactory.getLogger(PersonDAOTest.class);

    private Mongo mongo;
    private Morphia morphia;
    private PersonDAO personDao;
    private final String dbname = "peopledb";

    @Before
    public void initiate() {
        mongo = MongoUtil.getMongo();
        morphia = new Morphia();
        morphia.map(Person.class);
        personDao = new PersonDAO(mongo, morphia, dbname);
    }

    @Test
    public void test() {
        long counter = personDao.count();
        logger.debug("The count is [" + counter + "]");

        Person p = new Person();
        p.setFname("Partha");
        personDao.save(p);

        long newCounter = personDao.count();
        logger.debug("The new count is [" + newCounter + "]");

        assertTrue((counter + 1) == newCounter);
    [...]

As you might have already noticed, I have used JUnit 4. If you have been following this article as-is till now, you would have an earlier version of JUnit in your project. To ensure that you use JUnit 4, you just have to configure Maven to use it by adding the correct dependency in pom.xml.

File: /dbLayer002/pom.xml

<!-- Unit test. -->
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.10</version>
    <scope>test</scope>
</dependency>

You are good to go. If you run the tests they should pass. Of course you could / should get into your database and check that the data is indeed saved. I will refer you to the MongoDB documentation, which I think is quite decent. Last but not least, let me assure you that BasicDAO is not restrictive in any way. I am sure purists would point out that if my DAO class needs to extend BasicDAO, that is a limitation on the source code structure anyway; ideally it should not have been required, and I agree with that. However, I also want to show that for all practical purposes of a DAO, you have sufficient flexibility. Let’s go on and add a custom find method to our DAO that is specific to this Entity, i.e. Person. 
Let's say we want to be able to search on the basis of first name, and that we want to use a regular expression for that. Here is how the code will look.

File: /dbLayer002/src/main/java/org/academy/dao/PersonDAO.java

public class PersonDAO extends BasicDAO<Person, ObjectId> {
  [...]
  public Iterator<Person> findByFname(String fname){
    Pattern regExp = Pattern.compile(fname + ".*", Pattern.CASE_INSENSITIVE);
    return ds.find(entityClazz).filter("fname", regExp).iterator();
  }
  [...]
}

It is important to re-emphasize here that I have just added a custom search function to my DAO by adding precisely three lines of code (four if you count the closing parenthesis). In my book, that is quite flexible. Being true to my unflinching love for automated testing, let's add a quick test to check this functionality before we wrap up.

File: /dbLayer002/src/test/java/org/academy/dao/PersonDAOTest.java

[...]
Iterator<Person> iteratorPerson = personDao.findByFname("Pa");
int personCounter = 0;
while (iteratorPerson.hasNext()) {
  personCounter++;
  logger.debug("[" + personCounter + "]" + iteratorPerson.next().getFname());
}
[...]

Before you point it out: yes, there is no assert() here, so this is not a real test. Let me assure you that my test class is indeed complete; it's just that this snippet does not show the assert function. So, that's it. You have used Java to connect to a NoSQL database, using an ORM, through a DAO layer (where you have used the ORM's functionality and made some additions as well). You have also done proper logging and unit testing. Not a bad use of half an hour, eh? Happy coding.

Further reading

A version of this article – slightly edited – is also available at this link at Javalobby.

Reference: MongoDB in 30 minutes from our JCG partner Partho at the Tech for Enterprise blog....
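As a footnote: the prefix-matching regular expression used in findByFname can be exercised on its own, independent of MongoDB and Morphia. A small sketch (the class name is mine, for illustration only):

```java
import java.util.regex.Pattern;

public class PrefixRegexDemo {
    public static void main(String[] args) {
        // Same pattern construction as in findByFname:
        // a case-insensitive "starts with" match
        Pattern regExp = Pattern.compile("Pa" + ".*", Pattern.CASE_INSENSITIVE);

        System.out.println(regExp.matcher("Partha").matches()); // true
        System.out.println(regExp.matcher("partha").matches()); // true - case-insensitive
        System.out.println(regExp.matcher("Sartha").matches()); // false - no "Pa" prefix
    }
}
```

Morphia's filter("fname", regExp) simply hands this Pattern to MongoDB as a $regex query, so checking the pattern in isolation is a cheap way to debug the search behaviour.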

After I/O 2012

From registration to giveaways, the I/O madness goes further each year. Selling out in 20 minutes this year did not stop Google from giving away even more stuff. At this pace, and with the expected release of Google Glass next year, registration will most probably be even more chaotic! So Google, please either stop giving away free stuff (I know you won't) or move to Moscone North and have a bigger I/O!

The Google I/O keynote is something out of this world; it is more like a rock concert than a tech conference keynote. Extending I/O to three days did not make it less packed. Usually there are at least two parallel sessions you want to attend, and not much time left for the playground and office hours, but the best thing about I/O is that all sessions are made available online instantly.

So here is the wrap-up from I/O 2012!

Android

Google is well aware of the weak points of Android and is doing everything to fix them. Unlike Apple, which is sure of the perfection of the initial design and only makes enhancements to the platform, Google can rapidly change the whole experience. The hardware buttons are gone, the options button is gone, and best practices for the UI have changed a lot. Android started as a highly flexible OS and, since ICS, has been getting a nicer and more responsive UI with each update, whereas Apple started with a sleek UI but not much functionality and is now adding the missing parts. Google addressed three major problems of Android and did its best to provide solutions.

UI experience: With Jelly Bean, the UI and animations are much faster and more responsive. Chrome brings a well-known user experience and great integration with its desktop counterpart. Jelly Bean offers a very smooth and sexy UI, quite comparable to the iOS experience.

UI of the apps: Android apps tend to have problems with UI because of a lack of tools and the many different screen sizes and resolutions. The latest toolkit brings everything an app developer would ever need.
Preview and easy fixes for all possible screen sizes, easy problem detection with Lint, a more capable UI designer, and a much, much better emulator will hopefully help developers build higher-quality apps.

OS versions: Although ICS was released more than 6 months ago, very few devices have had the chance to upgrade. ICS has less than a 10% share, and most of those devices came pre-bundled with ICS, which means nearly zero updates to current devices. This is a huge problem for the Android ecosystem: whatever Google brings to innovate never gets into the devices in our pockets. Google has released the Platform Development Kit for device manufacturers, which will hopefully address this issue.

Glass

The Explorer edition is on pre-sale, and current product demos show Glass is quite near functional, although Google still insists it is at an early development stage, whereas Microsoft claims Surface to be complete although there are no major technical details. Google Glass could be the next revolution after the iPhone. With Google Now, Google Glass will offer location- and context-aware interaction in your daily life. Glass can be controlled in three different ways: head movements, voice, and its touchpad. Although it is not confirmed, there is a rumor of another means of control through a simple ring, which would make sense since most people wear rings and a ring can offer iPod-click-wheel-like control. Glass does not offer a radio and uses WiFi for data. Battery life is still being worked on; with minimal usage it can survive up to a day, but if you are planning to hang out while sky diving it may not survive more than a few hours. It is powered by Android, which should make adoption easy for app developers. The developer-only Explorer edition will ship at the end of this year, so most probably by the time of next I/O we will be experimenting with the finished product.

GWT

The rumors of GWT dying are over. It is clear that Google had difficult times after the GWT team dissolved.
They even had a hard time accepting patches, since GWT is heavily used in internal projects. Google has stepped back from the governance of GWT and formed a steering committee. The steering committee is a great chance to move forward, and it is great to see members like Daniel Kurka, who wrote mGWT. The release candidate of version 2.5 is out and the feature set is unbelievable. Super dev mode and source maps work like magic, and the compiler is much better optimized.

Google+

Google Events is smart! It was a great strategy to release it just before the I/O party, at which every attendee had at least two Android devices.

Compute Cloud

Finally the limitations of App Engine are over. Google offers cloud computing just like Amazon's EC2. With the Compute Cloud, Google can now offer a full stack of services.

Google Now

Finally, Google Now, an integrated location- and context-aware service. Google Now can understand that you are at work and about to leave and tell you the traffic on the way home, or understand that you are at the airport, know your flight from your recent searches, guide you to your gate, and even inform you if there is a delay. It is amazing, and much more seamless and integrated into your life than Siri.

So I/O is over, and it was great. Just wondering what registration will be like next year, since expectations of a Google Glass freebie are quite high.

Reference: After I/O 2012 from our JCG partner Murat Yener at the Developer Chronicles blog....

Reference to dynamic proxy in a proxied class

There was an interesting question on Stack Overflow about how a Spring bean can get a reference to the proxy created by Spring to handle transactions, Spring AOP, caching, async flows etc. A reference to the proxy was required because if the proxied bean makes a call to itself, that call completely bypasses the proxy. Consider an InventoryService interface:

public interface InventoryService {
  public Inventory create(Inventory inventory);
  public List<Inventory> list();
  public Inventory findByVin(String vin);
  public Inventory update(Inventory inventory);
  public boolean delete(Long id);
  public Inventory compositeUpdateService(String vin, String newMake);
}

Consider also that there is a default implementation for this service, and assume that the last method, compositeUpdateService, internally invokes two methods on the bean itself, like this:

public Inventory compositeUpdateService(String vin, String newMake) {
  logger.info("composite Update Service called");
  Inventory inventory = this.findByVin(vin);
  inventory.setMake(newMake);
  this.update(inventory);
  return inventory;
}

If I now create an aspect to advise any calls to InventoryService, with the objective of tracking how long each method call takes, Spring AOP would create a dynamic proxy for the InventoryService bean. However, the calls to compositeUpdateService will record the time only at the level of this method; the calls that compositeUpdateService internally makes to findByVin and update bypass the proxy and hence will not be tracked.

A good fix for this is to use the full power of AspectJ – AspectJ would change the bytecode of all the methods of DefaultInventoryService to include the call to the advice. A workaround that we worked out was to inject a reference to the proxy itself into the bean and, instead of calling this.findByVin and this.update, call proxy.findByVin and proxy.update!
So how do we cleanly inject a reference to the proxy into the bean? The solution I offered was to create an interface to mark beans interested in their own proxies:

public interface ProxyAware<T> {
  void setProxy(T proxy);
}

The interested interface and its implementation would look like this:

public interface InventoryService extends ProxyAware<InventoryService> {
  ...
}

public class DefaultInventoryService implements InventoryService {
  ...
  private InventoryService proxy;

  @Override
  public void setProxy(InventoryService proxy) {
    this.proxy = proxy;
  }
}

and then define a BeanPostProcessor to inject this proxy!

public class ProxyInjectingBeanPostProcessor implements BeanPostProcessor, Ordered {
  @Override
  public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
    return bean;
  }

  @Override
  public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
    if (AopUtils.isAopProxy(bean)) {
      try {
        Object target = ((Advised) bean).getTargetSource().getTarget();

        if (target instanceof ProxyAware) {
          ((ProxyAware) target).setProxy(bean);
        }
      } catch (Exception e) {
        return bean;
      }
    }
    return bean;
  }

  @Override
  public int getOrder() {
    return Integer.MAX_VALUE;
  }
}

Not the cleanest of implementations, but it works!

Reference: http://blog.springsource.org/2012/05/23/understanding-proxy-usage-in-spring/

Reference: Reference to dynamic proxy in a proxied class from our JCG partner Biju Kunjummen at the all and sundry blog....
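As a footnote: the self-invocation bypass described above is independent of Spring and can be reproduced with a plain JDK dynamic proxy. The sketch below is illustrative only (the class and method names are mine, not from the article); the counting handler stands in for the timing advice, and setProxy mirrors what the BeanPostProcessor does:

```java
import java.lang.reflect.Proxy;
import java.util.concurrent.atomic.AtomicInteger;

public class SelfInvocationDemo {

    public interface Service {
        void composite();
        void inner();
    }

    public static class DefaultService implements Service {
        private Service proxy; // as in the ProxyAware approach above

        public void setProxy(Service proxy) { this.proxy = proxy; }

        @Override
        public void composite() {
            // this.inner() would bypass the proxy; going through the injected
            // proxy reference keeps the advice in the call path
            (proxy != null ? proxy : this).inner();
        }

        @Override
        public void inner() { }
    }

    public static void main(String[] args) {
        AtomicInteger advisedCalls = new AtomicInteger();
        DefaultService target = new DefaultService();

        // Stand-in for Spring's AOP proxy: counts every call routed through it
        Service proxy = (Service) Proxy.newProxyInstance(
                Service.class.getClassLoader(),
                new Class<?>[] { Service.class },
                (p, method, methodArgs) -> {
                    advisedCalls.incrementAndGet();
                    return method.invoke(target, methodArgs);
                });

        proxy.composite();                      // inner() is a self-call: not advised
        System.out.println(advisedCalls.get()); // 1

        target.setProxy(proxy);                 // inject the proxy reference
        proxy.composite();                      // inner() now also goes via the proxy
        System.out.println(advisedCalls.get()); // 3
    }
}
```

The jump from 1 to 3 advised calls after injection is exactly the behaviour the ProxyAware workaround buys you.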

ZK in Action: MVVM – Working Together with ZK Client API

In the previous posts we've implemented the following functionalities with ZK's MVVM: load data into a table, save data with form binding, delete entries, and update the View programmatically.

A key distinction between ZK MVVM and ZK MVC, implementation-wise, is that we do not access and manipulate the UI components directly in the controller (ViewModel) class. In this post, we'll see how we can delegate some of the UI manipulation to client-side code, as well as how to pass parameters from the View to the ViewModel.

Objective

Build an update function into our simple inventory CRUD feature. Users can edit entries in place in the table and are given the choice to update or discard the changes made. Modified entries are highlighted in red.

ZK Features in Action

ZK Client-side APIs
ZK Style Class
MVVM: Pass Parameters from View to ViewModel

Implementation in Steps

Enable in-place editing in the Listbox so we can edit entries:

<listcell>
  <textbox inplace="true" value="@load(each.name)" ...</textbox>
</listcell>
....
<listcell>
  <doublebox inplace="true" value="@load(each.price)" ...</textbox>
</listcell>
...

inplace="true" renders input elements such as Textbox without their borders, displaying them as plain labels; the borders appear only if the input element is selected. On lines 2 and 6, "each" refers to each Item object in the data collection.

Once an entry is edited we want to give users the option to Update or Discard the change. The Update and Discard buttons need to be visible only if the user has made modifications to the Listbox entries. First we define JavaScript functions to show and hide the Update and Discard buttons:

<toolbar>
...
<span id="edit_btns" visible="false" ...>
  <toolbarbutton label="Update" .../>
  <toolbarbutton label="Discard" .../>
</span>
</toolbar>

<script type="text/javascript">
  function hideEditBtns(){
    jq('$edit_btns').hide();
  }
  function showEditBtns(){
    jq('$edit_btns').show();
  }
</script>
...

On line 2, we wrap the Update and Discard buttons and set visibility to false. On lines 9 and 13, we define functions which hide and show the Update and Discard buttons respectively. On lines 11 and 15, we make use of the jQuery selector jq('$edit_btns') to retrieve the ZK widget whose ID is "edit_btns"; notice the selector pattern for a ZK widget ID is '$', not '#'.

When entries in the Listbox are modified, we'll make the Update/Discard buttons visible and turn the modified values red. Once either the Update or Discard button is clicked, we'd like to hide the buttons again. Since this is pure UI interaction, we'll make use of ZK's client-side APIs:

<style>
  .inputs { font-weight: 600; }
  .modified { color: red; }
</style>
...
<toolbar xmlns:w="client" >
  ...
  <span id="edit_btns" visible="false" ...>
    <toolbarbutton label="Update" w:onClick="hideEditBtns()" .../>
    <toolbarbutton label="Discard" w:onClick="hideEditBtns()" .../>
  </span>
</toolbar>

<script type="text/javascript">
  //show hide functions

  zk.afterMount(function(){
    jq('.inputs').change(function(){
      showEditBtns();
      $(this).addClass('modified');
    })
  });
</script>
...
<listcell>
  <doublebox inplace="true" sclass="inputs" value="@load(each.price)" ...</textbox>
</listcell>
...

On line 2, we specify a style class for our input elements (Textbox, Intbox, Doublebox, Datebox) and assign it to the input elements' sclass attribute, e.g. line 26; sclass defines the style class for ZK widgets. On lines 18~20, we get all the input elements by matching their sclass name and assign an onChange event handler. Once the value in an input element is changed, the Update/Discard buttons become visible and the modified value is highlighted in red.
On line 17, zk.afterMount runs after the ZK widgets are created. On line 6, we specify the client namespace so we can register client-side onClick event listeners with the syntax "w:onClick". Note that we can still register our usual onClick event listener, handled at the server, simultaneously. On lines 9 and 10, we assign the client-side onClick event listener; the hideEditBtns function is called to make the buttons invisible again.

Define a method to store the modified Item objects in a collection, so the changes can be updated in batch if the user chooses to do so:

public class InventoryVM {

  private HashSet<Item> itemsToUpdate = new HashSet<Item>();
  ...

  @Command
  public void addToUpdate(@BindingParam("entry") Item item){
    itemsToUpdate.add(item);
  }

On line 6, we annotate this method as a command method so it can be invoked from the View. On line 7, @BindingParam("entry") Item item binds an arbitrarily named parameter called "entry"; we anticipate that the parameter will be of type Item.

Create a method to update the changes made in the View to the Data Model:

public class InventoryVM {

  private List<Item> items;
  private HashSet<Item> itemsToUpdate = new HashSet<Item>();
  ...

  @NotifyChange("items")
  @Command
  public void updateItems() throws Exception{
    for (Item i : itemsToUpdate){
      i.setDatemod(new Date());
      DataService.getInstance().updateItem(i);
    }
    itemsToUpdate.clear();
    items = getItems();
  }

When modifications are made to Listbox entries, we invoke the addToUpdate method and pass it the edited Item object, which in turn is saved into the itemsToUpdate collection:

<listitem>
  <listcell>
    <doublebox value="@load(each.price) @save(each.price, before='updateItems')"
        onChange="@command('addToUpdate',entry=each)" />
  </listcell>
  ...
</listitem>

@save(each.price, before='updateItems') ensures that modified values are not saved unless updateItems is called (i.e. when the user clicks the "Update" button).

Finally, when the user clicks Update, we call the updateItems method to update the changes to the Data Model.
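As an aside, stripped of ZK specifics, the addToUpdate/updateItems pair is a generic "dirty set" pattern: collect edited entries in a set, then flush or drop them on demand. A minimal plain-Java sketch (the class and method names here are mine, purely for illustration):

```java
import java.util.HashSet;
import java.util.Set;
import java.util.function.Consumer;

public class DirtyTracker<T> {
    // HashSet dedupes: editing the same entry repeatedly stores it once
    private final Set<T> dirty = new HashSet<>();

    /** Called from onChange: remember the edited entry (like addToUpdate). */
    public void markDirty(T entry) {
        dirty.add(entry);
    }

    /** Called on Update: persist each dirty entry once, then clear (like updateItems). */
    public void flush(Consumer<T> persist) {
        dirty.forEach(persist);
        dirty.clear();
    }

    /** Called on Discard: drop pending edits without persisting. */
    public void discard() {
        dirty.clear();
    }
}
```

Because the backing collection is a set, editing the same row three times still yields a single persist call on Update, which is exactly why the ViewModel uses a HashSet rather than a List.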
If Discard is clicked, we call getItems to refresh the Listbox without applying any changes:

...
<toolbarbutton label="Update" onClick="@command('updateItems')" .../>
<toolbarbutton label="Discard" onClick="@command('getItems')" .../>
...

In a Nutshell

Under the MVVM pattern we strive to keep the ViewModel code independent of any View components. Since we don't have direct references to the UI components in the ViewModel code, we can delegate the UI manipulation code (in our sample, show/hide and style changes) to the client using ZK's client-side APIs. We can make use of jQuery selectors and APIs on the ZK client side. We can easily pass parameters from the View to the ViewModel with @BindingParam.

Next up, we'll go over a bit more of ZK styling before we tackle the MVVM validators and converters.

ViewModel (ZK in Action[0]~[3]):

public class InventoryVM {

  private List<Item> items;
  private Item newItem;
  private Item selected;
  private HashSet<Item> itemsToUpdate = new HashSet<Item>();

  public InventoryVM(){}

  //CREATE
  @NotifyChange("newItem")
  @Command
  public void createNewItem(){
    newItem = new Item("", "", 0, 0, new Date());
  }

  @NotifyChange({"newItem","items"})
  @Command
  public void saveItem() throws Exception{
    DataService.getInstance().saveItem(newItem);
    newItem = null;
    items = getItems();
  }

  @NotifyChange("newItem")
  @Command
  public void cancelSave() throws Exception{
    newItem = null;
  }

  //READ
  @NotifyChange("items")
  @Command
  public List<Item> getItems() throws Exception{
    items = DataService.getInstance().getAllItems();
    for (Item j : items){
      System.out.println(j.getModel());
    }
    Clients.evalJavaScript("zk.afterMount(function(){jq('.inputs').removeClass('modified').change(function(){$(this).addClass('modified');showEditBtns();})});");
    //how does afterMount work in this case?
    return items;
  }

  //UPDATE
  @NotifyChange("items")
  @Command
  public void updateItems() throws Exception{
    for (Item i : itemsToUpdate){
      i.setDatemod(new Date());
      DataService.getInstance().updateItem(i);
    }
    itemsToUpdate.clear();
    items = getItems();
  }

  @Command
  public void addToUpdate(@BindingParam("entry") Item item){
    itemsToUpdate.add(item);
  }

  //DELETE
  @Command
  public void deleteItem() throws Exception{
    if (selected != null){
      String str = "The item with name \"" + selected.getName()
          + "\" and model \"" + selected.getModel() + "\" will be deleted.";
      Messagebox.show(str, "Confirm Deletion",
          Messagebox.OK|Messagebox.CANCEL, Messagebox.QUESTION,
          new EventListener<Event>(){
            @Override
            public void onEvent(Event event) throws Exception {
              if (event.getName().equals("onOK")){
                DataService.getInstance().deleteItem(selected);
                items = getItems();
                BindUtils.postNotifyChange(null, null, InventoryVM.this, "items");
              }
            }
          });
    } else {
      Messagebox.show("No Item was Selected");
    }
  }

  public Item getNewItem() {
    return newItem;
  }

  public void setNewItem(Item newItem) {
    this.newItem = newItem;
  }

  public Item getselected() {
    return selected;
  }

  public void setselected(Item selected) {
    this.selected = selected;
  }
}

View (ZK in Action[0]~[3]):

<zk>
  <style>
    .z-toolbarbutton-cnt { font-size: 17px;}
    .edit-btns {border: 2px solid #7EAAC6; padding: 6px 4px 10px 4px; border-radius: 6px;}
    .inputs { font-weight: 600; }
    .modified { color: red; }
  </style>
  <script type="text/javascript">
    function hideEditBtns(){
      jq('$edit_btns').hide();
    }

    function showEditBtns(){
      jq('$edit_btns').show();
    }

    zk.afterMount(function(){
      jq('.inputs').change(function(){
        $(this).addClass('modified');
        showEditBtns();
      })
    });
  </script>
  <window apply="org.zkoss.bind.BindComposer"
      viewModel="@id('vm') @init('lab.sphota.zk.ctrl.InventoryVM')"
      xmlns:w="client">
    <toolbar width="100%">
      <toolbarbutton label="Add" onClick="@command('createNewItem')" />
      <toolbarbutton label="Delete" onClick="@command('deleteItem')"
          disabled="@load(empty vm.selected)" />
      <span
          id="edit_btns" sclass="edit-btns" visible="false">
        <toolbarbutton label="Update" onClick="@command('updateItems')" w:onClick="hideEditBtns()"/>
        <toolbarbutton label="Discard" onClick="@command('getItems')" w:onClick="hideEditBtns()" />
      </span>
    </toolbar>
    <groupbox mold="3d"
        form="@id('itm') @load(vm.newItem) @save(vm.newItem, before='saveItem')"
        visible="@load(not empty vm.newItem)">
      <caption label="New Item"></caption>
      <grid width="50%">
        <rows>
          <row>
            <label value="Item Name" width="100px"></label>
            <textbox value="@bind(itm.name)" />
          </row>
          <row>
            <label value="Model" width="100px"></label>
            <textbox value="@bind(itm.model)" />
          </row>
          <row>
            <label value="Unit Price" width="100px"></label>
            <decimalbox value="@bind(itm.price)" format="#,###.00"
                constraint="no empty, no negative" />
          </row>
          <row>
            <label value="Quantity" width="100px"></label>
            <spinner value="@bind(itm.qty)"
                constraint="no empty,min 0 max 999: Quantity Must be Greater Than Zero" />
          </row>
          <row>
            <cell colspan="2" align="center">
              <button width="80px" label="Save" mold="trendy" onClick="@command('saveItem')" />
              <button width="80px" label="Cancel" mold="trendy" onClick="@command('cancelSave')" />
            </cell>
          </row>
        </rows>
      </grid>
    </groupbox>
    <listbox selectedItem="@bind(vm.selected)" model="@load(vm.items) ">
      <listhead>
        <listheader label="Name" sort="auto" hflex="2" />
        <listheader label="Model" sort="auto" hflex="1" />
        <listheader label="Quantity" sort="auto" hflex="1" />
        <listheader label="Unit Price" sort="auto" hflex="1" />
        <listheader label="Last Modified" sort="auto" hflex="2" />
      </listhead>
      <template name="model">
        <listitem>
          <listcell>
            <textbox inplace="true" width="110px" sclass="inputs"
                value="@load(each.name) @save(each.name, before='updateItems')"
                onChange="@command('addToUpdate',entry=each)">
            </textbox>
          </listcell>
          <listcell>
            <textbox inplace="true" width="110px" sclass="inputs"
                value="@load(each.model) @save(each.model, before='updateItems')"
                onChange="@command('addToUpdate',entry=each)" />
          </listcell>
          <listcell>
            <intbox inplace="true" sclass="inputs"
                value="@load(each.qty) @save(each.qty, before='updateItems')"
                onChange="@command('addToUpdate',entry=each)" />
          </listcell>
          <listcell>
            <doublebox inplace="true" sclass="inputs" format="###,###.00"
                value="@load(each.price) @save(each.price, before='updateItems')"
                onChange="@command('addToUpdate',entry=each)" />
          </listcell>
          <listcell label="@load(each.datemod)" />
        </listitem>
      </template>
    </listbox>
  </window>
</zk>

ZK Client-side Reference
ZK Developer Reference: @BindingParam

Reference: ZK in Action [3] : MVVM – Working Together with ZK Client API from our JCG partner Lance Lu at the Tech Dojo blog....

Eat your own dog food, but throw in some unknown biscuits for variety

Well, the example app that Jorge Aliss asked me to write to demonstrate a combination of SecureSocial and Deadbolt is working, and working nicely, and as soon as I've prettied it up a bit I'll be releasing it to GitHub and pushing it out to, probably, Heroku.

Deadbolt came about as a result of two projects I worked on that required a specific approach to security that wasn't addressed by any existing modules of the Play! framework. I wrote and released the module, and it's been updated every time a new feature was needed by my own projects or requested by someone else. To use the vernacular, I eat my own dog food.

Let's call me person A, and I eat my own dog food. Other people – let's aggregate them into a single entity, person B – person B also eats my dog food, and delicious it is too. Based on my own experiences, Deadbolt dog food covers all the nutritional needs of my own highly-specialized pedigree dogs, and happily, it also satisfies person B's pet food requirements. Good enough…but what about when you suddenly switch dog – and nutritional requirements – and need something completely different? Then you might suddenly find that the dog food you've lovingly created and molded has a huge gap right where the vitamins should be.

End of dog food analogy. It's getting annoying.

The demo app I've written – SociallySecure – allows you to log in through various social OAuth providers and create an account which can then have other OAuth accounts linked to it. Once they're linked, you can log in through any of them and get into your personal SociallySecure account. The key account is Twitter (at the moment – this is just a demo!) because security is applied to your tweets when someone is viewing your account page. If you're friends, or you've decided your tweets should be public, the user can see all the exceptional and banal things you've tweeted, along with those of the people you follow.
If you've decided your tweets are visible only to friends, then when someone views your user page your tweets are not visible. Fairly simple stuff, but enough to demonstrate dynamic security.

Fairly simple stuff – except for the part where I'm not only considering the privileges of the logged-in user, but that user's privileges in relation to another user.

Deadbolt offers dynamic checks in the form of the #restrictedresource tag at the view level, and the @RestrictedResource annotation at the controller level. If you're at the controller level, you can already calculate what you need. At the view level, you're pretty much screwed unless you're able to pass in additional information. This is what the latest version of Deadbolt gives you; without it you have a much more restricted version of dynamic security, unless you *really* jump through some hoops.

If you're curious about how this works: you can pass a map of string parameters to a tag argument called resourceParameters – this map is then available in your RestrictedResourceHandler. You can use it in two ways: the string key/value pairs can be used by you directly, or the value can be used to do a lookup in the request, or session, or cache, or whatever.

The point of this blog post, however, is to emphasize that if you're going to release something as open source, you'll have a much higher chance of meeting developer needs if, once you've covered all your own use cases, you sit down for a couple of hours with another developer and ask them to use it to develop something reasonably complex. It's pretty much what you'll see when releasing a product – you can test it to hell and back, but within a couple of hours of it going live you can pretty much guarantee that a user has found a bug.

Think of crucially missing gaps in your code as bugs.
The code works perfectly well when it's running according to what you have in mind, but as soon as it hits the real world you'll find that not everyone thinks like you.

Reference: Eat your own dog food, but throw in some unknown biscuits for variety from our JCG partner Steve Chaloner at the Objectify blog....

Applying Back Pressure When Overloaded

How should a system respond when under sustained load? Should it keep accepting requests until its response times follow the deadly hockey stick, followed by a crash? All too often this is what happens unless a system is designed to cope with the case of more requests arriving than it is capable of processing. If we see a sustained arrival rate of requests greater than our system is capable of processing, then something has to give. Having the entire system degrade is not the ideal service we want to give our customers. A better approach would be to process transactions at our system's maximum possible throughput rate, while maintaining a good response time, and to reject requests above this arrival rate.

Let's consider a small art gallery as a metaphor. In this gallery the typical viewer spends on average 20 minutes browsing, and the gallery can hold a maximum of 30 viewers. If more than 30 viewers occupy the gallery at the same time then customers become unhappy because they cannot have a clear view of the paintings. If this happens they are unlikely to purchase or return. To keep our viewers happy it is better to recommend that some viewers visit the café a few doors down and come back when the gallery is less busy. This way the viewers in the gallery get to see all the paintings without other viewers in the way, and in the meantime those we cannot accommodate enjoy a coffee. If we apply Little's Law, we cannot have customers arriving at more than 90 per hour, otherwise the maximum capacity is exceeded. If between 9:00-10:00 they arrive at 100 per hour, then I'm sure the café down the road will appreciate the extra 10 customers.

Within our systems the available capacity is generally a function of the size of our thread pools and the time to process individual transactions. These thread pools are usually fronted by queues to handle bursts of traffic above our maximum arrival rate.
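Returning to the gallery for a moment: its arithmetic is just Little's Law, L = λW, so the sustainable arrival rate follows directly from the occupancy and the visit time. A quick sketch with the numbers from the metaphor (the class name is illustrative):

```java
public class GalleryCapacity {
    public static void main(String[] args) {
        double viewersInGallery = 30.0;       // L: maximum concurrent viewers
        double visitTimeHours = 20.0 / 60.0;  // W: average browsing time, in hours

        // Little's Law: L = lambda * W, so the sustainable arrival rate is L / W
        double maxArrivalRatePerHour = viewersInGallery / visitTimeHours;

        System.out.println(Math.round(maxArrivalRatePerHour)); // 90 viewers per hour
    }
}
```

The same relation applies to a server: concurrent requests in flight = arrival rate × time in system, which is why the article's 90-per-hour figure is not a guess but a constraint.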
If the queues are unbounded, and we have a sustained arrival rate above the maximum capacity, then the queues will grow unchecked. As the queues grow they add ever more latency beyond acceptable response times, and eventually they will consume all memory, causing our systems to fail. Would it not be better to send the overflow of requests to the café while still serving everyone else at the maximum possible rate? We can do this by designing our systems to apply "Back Pressure".

Figure 1

Separation of concerns encourages good systems design at all levels. I like to layer a design so that the gateways to third parties are separated from the main transaction services. This can be achieved by having gateways responsible for protocol translation and border security only. A typical gateway could be a web container running Servlets. Gateways accept customer requests, apply appropriate security, and translate the channel protocols for forwarding to the transaction service hosting the domain model. The transaction service may use a durable store if transactions need to be preserved. For example, the state of a chat server domain model may not require preservation, whereas a model for financial transactions must be kept for many years for compliance and business reasons.

Figure 1 above is a simplified view of the typical request flow in many systems. Pools of threads in a gateway accept user requests and forward them to a transaction service. Let's assume we have asynchronous transaction services fronted by input and output queues, or similar FIFO structures.
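A minimal, hypothetical Java sketch of such a gateway-side queue: bounded, and rejecting rather than growing when full, so a failed offer becomes the back-pressure signal (the class name and capacity are illustrative, not from the article):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BoundedGateway {
    // Bounded input queue: its capacity caps queueing latency. One way to pick
    // the capacity is from a latency budget, e.g. a 200 ms target with 10 ms
    // per transaction across 4 worker threads: 200 / (10 / 4) = 80 slots.
    private final BlockingQueue<Runnable> inputQueue;

    public BoundedGateway(int capacity) {
        this.inputQueue = new ArrayBlockingQueue<>(capacity);
    }

    /** Returns true if accepted; false means "apply back pressure", e.g. HTTP 503. */
    public boolean accept(Runnable request) {
        return inputQueue.offer(request); // non-blocking: reject instead of waiting
    }

    public static void main(String[] args) {
        BoundedGateway gateway = new BoundedGateway(2); // tiny capacity for the demo
        System.out.println(gateway.accept(() -> {}));   // true  - accepted
        System.out.println(gateway.accept(() -> {}));   // true  - accepted
        System.out.println(gateway.accept(() -> {}));   // false - queue full, push back
    }
}
```

With TCP in front of the gateway, the equivalent effect can also be achieved without an explicit rejection, simply by not draining the socket and letting the network buffers fill and push back on the sender.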
If we want the system to meet a response-time quality-of-service (QoS) guarantee, then we need to consider the following three variables:

- The time taken for individual transactions on a thread
- The number of threads in a pool that can execute transactions in parallel
- The length of the input queue, which sets the maximum acceptable latency

max latency = (transaction time / number of threads) * queue length
queue length = max latency / (transaction time / number of threads)

By allowing the queue to be unbounded, the latency will continue to increase. So if we want to set a maximum response time then we need to limit the queue length.

By bounding the input queue we block the thread receiving network packets, which applies back pressure upstream. If the network protocol is TCP, similar back pressure is applied via the filling of network buffers on the sender. This process can repeat all the way back via the gateway to the customer. For each service we need to configure the queues so that they do their part in achieving the required quality-of-service for the end-to-end customer experience.

One of the biggest wins I often find is to improve the time taken to process individual transactions. This helps in the best and worst case scenarios.

Worst Case Scenario

Let's say the queue is unbounded and the system is under sustained heavy load. Things can begin to go wrong very quickly, in subtle ways, before memory is exhausted. What do you think will happen when the queue is larger than the processor cache? The consumer threads will suffer cache misses just at the time when they are struggling to keep up, thus compounding the problem. This can cause a system to get into trouble very quickly and eventually crash. Under Linux this is particularly nasty because malloc, or one of its friends, will succeed because Linux allows "Over Commit" by default; then later, at the point of using that memory, the OOM Killer will start shooting processes.
When the OS starts shooting processes, you just know things are not going to end well!

What About Synchronous Designs?

You may say that with synchronous designs there are no queues. Well, not such obvious ones. If you have a thread pool then it will have a lock, or semaphore, wait queues to assign threads. If you are crazy enough to allocate a new thread on every request, then once you are over the huge cost of thread creation, your thread is in the run queue waiting for a processor to execute it. Also, these queues involve context switches and condition variables, which greatly increase the costs. You just cannot run away from queues: they are everywhere! Best to embrace them and design for the quality-of-service your system needs to deliver to its customers. If we must have queues, then design for them, and maybe choose some nice lock-free ones with great performance.

When we need to support synchronous protocols like REST, we can use back pressure, signalled by our full incoming queue at the gateway, to send a meaningful "server busy" message such as the HTTP 503 status code. The customer can then interpret this as time for a coffee and cake at the café down the road.

Subtleties To Watch Out For…

You need to consider the whole end-to-end service. What if a client is very slow at consuming data from your system? It could tie up a thread in the gateway, taking it out of action. Now you have fewer threads working the queue, so response times will increase. Queues and threads need to be monitored, and appropriate action needs to be taken when thresholds are crossed. For example, when a queue is 70% full, maybe an alert should be raised so an investigation can take place. Also, transaction times need to be sampled to ensure they are in the expected range.

Summary

If we do not consider how our systems will behave under heavy load, then at best they will seriously degrade, and at worst they will crash.
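Turning a full gateway queue into an HTTP 503 can be sketched as follows. This is a hypothetical fragment, not tied to any real gateway framework; the class name and status handling are assumptions, and a non-blocking offer() is used so overload is signalled immediately rather than stalling the gateway thread:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical gateway-side handling for a synchronous protocol such as REST.
public class GatewayBackPressure {
    private final BlockingQueue<String> inputQueue;

    public GatewayBackPressure(int capacity) {
        this.inputQueue = new ArrayBlockingQueue<>(capacity);
    }

    // Returns the HTTP status the gateway should send: 202 (Accepted) if the
    // request was queued, 503 (Service Unavailable, i.e. "server busy") if the
    // bounded queue is full.
    public int accept(String request) {
        // offer() never blocks: a full queue signals overload immediately,
        // letting the client back off and retry later.
        return inputQueue.offer(request) ? 202 : 503;
    }
}
```

A 503 response can also carry a Retry-After header, giving well-behaved clients a hint about when the café trip should end.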
When they crash this way, we get to find out whether there are any really evil data-corruption bugs lurking in those dark places. Applying back pressure is one effective technique for coping with sustained high load, so that maximum throughput can be delivered without degrading system performance for the already accepted requests and transactions.Reference: Applying Back Pressure When Overloaded from our JCG partner Martin Thompson at the Mechanical Sympathy blog....

Project Jigsaw Booted from Java 8?

In his post Project Jigsaw: Late for the train, Mark Reinhold proposes ‘to defer Project Jigsaw to the next release, Java 9.’ He explains the reasoning for this: ‘some significant technical challenges remain’ and there is ‘not enough time left for the broad evaluation, review, and feedback which such a profound change to the Platform demands.’ Reinhold also proposes ‘to aim explicitly for a regular two-year release cycle going forward.’ Based on the comments on that post, it seems that this news is not being particularly well received by the Java developer community. Markus Karg writes, ‘In fact it is a bit ridiculous that Jigsaw is stripped from JDK 8 as it was already stripped from JDK 7. … Just give up the idea and use Maven.’ Jon Fisher writes, ‘I don’t think this is a good idea for the java platform. … Delaying this will only turn java in to a leagacy technology.’ The comment from ninja is, ‘Whatever route you guys decide to go, I think it’s time to prioritize Java the platform ahead of Java the language.’ Although this news is generally receiving unfavorable reviews from the Java developer community, the explanations do differ to some degree. Some of those commenting think the modularization of Project Jigsaw is needed now (and may already be too late), others think OSGi (or Maven or Ivy) should be used instead and Project Jigsaw abandoned, others would rather get the other new features and aren’t worried about the modularization being pushed to Java 9, and others simply want to use Groovy or Scala instead. The question was posed whether other features of Java 8 should be dropped in favor of Jigsaw. Modularity is one of the two ‘flagship’ features of Java 8 (lambda expressions being the other), so I too am disappointed to see that it will likely be delayed until Java 9.
However, Reinhold points out that if the proposal to jettison Jigsaw from Java 8 is accepted, ‘Java 8 will ship on time, around September 2013’ and is planned to ‘include the widely-anticipated Project Lambda (JSR 335), the new Date/Time API (JSR 310), Type Annotations (JSR 308), and a selection of the smaller features already in progress.’ I really want a new Date/Time API, and I think lambda expressions will dramatically improve what we can do in Java. Because of this, I’ll be excited to get my hands on Java 8 even without modularity. Reference: Project Jigsaw Booted from Java 8? from our JCG partner Dustin Marx at the Inspired by Actual Events blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.