
Getting Started with Spring Social – Part 2

A few weeks ago I wrote a post demonstrating what I thought was the simplest application you could write using Spring Social. That application read and displayed a Twitter user's public data and was written as an introduction to Spring Social and the social coding arena. However, getting your application to display your user's public data is only half the story; most of the time you'll need to display your user's private data. In this blog, I'm going to cover the scenario where you have a requirement to display a user's Facebook, or other Software as a Service (SaaS) provider, data on one or two pages of your application. The idea is to demonstrate the smallest and simplest thing you can do to add Spring Social to an application that requires your user to log in to Facebook or another SaaS provider.

Creating the App

To create the application, the first step is to create a basic Spring MVC Project using the template section of the SpringSource Toolkit Dashboard. This provides a webapp that'll get you started.
The next step is to set up the pom.xml by adding the following dependencies:

<dependency>
  <groupId>org.springframework.security</groupId>
  <artifactId>spring-security-crypto</artifactId>
  <version>${org.springframework.security.crypto-version}</version>
</dependency>
<!-- Spring Social -->
<dependency>
  <groupId>org.springframework.social</groupId>
  <artifactId>spring-social-core</artifactId>
  <version>${spring-social.version}</version>
</dependency>
<dependency>
  <groupId>org.springframework.social</groupId>
  <artifactId>spring-social-web</artifactId>
  <version>${spring-social.version}</version>
</dependency>
<!-- Facebook API -->
<dependency>
  <groupId>org.springframework.social</groupId>
  <artifactId>spring-social-facebook</artifactId>
  <version>${org.springframework.social-facebook-version}</version>
</dependency>
<!-- JdbcUserConfiguration -->
<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-jdbc</artifactId>
  <version>${org.springframework-version}</version>
</dependency>
<dependency>
  <groupId>com.h2database</groupId>
  <artifactId>h2</artifactId>
  <version>1.3.159</version>
</dependency>
<!-- CGLIB, only required and used for @Configuration usage: could be removed in future release of Spring -->
<dependency>
  <groupId>cglib</groupId>
  <artifactId>cglib-nodep</artifactId>
  <version>2.2</version>
</dependency>

…obviously you'll also need to add the following to the <properties/> section of the file:

<spring-social.version>1.0.2.RELEASE</spring-social.version>
<org.springframework.social-facebook-version>1.0.1.RELEASE</org.springframework.social-facebook-version>
<org.springframework.security.crypto-version>3.1.0.RELEASE</org.springframework.security.crypto-version>

You'll notice that I've added a specific pom entry for spring-security-crypto: this is because I'm using Spring 3.0.6. In Spring 3.1.x, this has become part of the core libraries. The only other point to note is that there is also a dependency on spring-jdbc and h2.
This is because Spring's default UserConnectionRepository implementation, JdbcUsersConnectionRepository, uses them; hence they're required even though this app doesn't persist anything to a database (so far as I can tell).

The Classes

The social coding functionality consists of four classes (one of which I've pinched from Keith Donald's Spring Social Quick Start sample code):

- FacebookPostsController
- SocialContext
- FacebookConfig
- UserCookieGenerator

FacebookPostsController is the business end of the application, responsible for getting hold of the user's Facebook data and pushing it into the model ready for display.

@Controller
public class FacebookPostsController {

  private static final Logger logger = LoggerFactory.getLogger(FacebookPostsController.class);

  private final SocialContext socialContext;

  @Autowired
  public FacebookPostsController(SocialContext socialContext) {
    this.socialContext = socialContext;
  }

  @RequestMapping(value = "posts", method = RequestMethod.GET)
  public String showPostsForUser(HttpServletRequest request, HttpServletResponse response, Model model)
      throws Exception {
    String nextView;
    if (socialContext.isSignedIn(request, response)) {
      List<Post> posts = retrievePosts();
      model.addAttribute("posts", posts);
      nextView = "show-posts";
    } else {
      nextView = "signin";
    }
    return nextView;
  }

  private List<Post> retrievePosts() {
    Facebook facebook = socialContext.getFacebook();
    FeedOperations feedOps = facebook.feedOperations();
    List<Post> posts = feedOps.getHomeFeed();
    logger.info("Retrieved " + posts.size() + " posts from the Facebook authenticated user");
    return posts;
  }
}

As you can see, from a high-level viewpoint the logic of what we're trying to achieve is pretty simple:

IF user is signed in THEN
  read Facebook data
  display Facebook data
ELSE
  ask user to sign in
  when user has signed in, go back to the beginning
END IF

The FacebookPostsController delegates the task of handling the sign-in logic to the SocialContext class.
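The branching above can be isolated from Spring and exercised on its own. This is a minimal sketch, not part of the sample app; the interface and view names are illustrative stand-ins for SocialContext and the controller's return values:

```java
public class SignInFlowSketch {

    // Stand-in for SocialContext.isSignedIn(request, response)
    interface SignedInCheck {
        boolean isSignedIn();
    }

    // Mirrors the controller's decision: show the posts view only when signed in
    static String showPostsForUser(SignedInCheck ctx) {
        return ctx.isSignedIn() ? "show-posts" : "signin";
    }

    public static void main(String[] args) {
        System.out.println(showPostsForUser(() -> true));   // signed in -> posts page
        System.out.println(showPostsForUser(() -> false));  // not signed in -> sign-in page
    }
}
```

The real controller does the same thing, with the extra step of loading the feed into the model before returning the view name.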
You can probably guess that I got the idea for this class from Spring's really useful ApplicationContext. The idea here is that there is one class that's responsible for gluing your application to Spring Social.

public class SocialContext implements ConnectionSignUp, SignInAdapter {

  /**
   * Use a random number generator to generate IDs to avoid cookie clashes
   * between server restarts
   */
  private static Random rand;

  /**
   * Manage cookies - Use cookies to remember state between calls to the
   * server(s)
   */
  private final UserCookieGenerator userCookieGenerator;

  /** Store the user id between calls to the server */
  private static final ThreadLocal<String> currentUser = new ThreadLocal<String>();

  private final UsersConnectionRepository connectionRepository;

  private final Facebook facebook;

  public SocialContext(UsersConnectionRepository connectionRepository, UserCookieGenerator userCookieGenerator,
      Facebook facebook) {
    this.connectionRepository = connectionRepository;
    this.userCookieGenerator = userCookieGenerator;
    this.facebook = facebook;
    rand = new Random(Calendar.getInstance().getTimeInMillis());
  }

  @Override
  public String signIn(String userId, Connection<?> connection, NativeWebRequest request) {
    userCookieGenerator.addCookie(userId, request.getNativeResponse(HttpServletResponse.class));
    return null;
  }

  @Override
  public String execute(Connection<?> connection) {
    return Long.toString(rand.nextLong());
  }

  public boolean isSignedIn(HttpServletRequest request, HttpServletResponse response) {
    boolean retVal = false;
    String userId = userCookieGenerator.readCookieValue(request);
    if (isValidId(userId)) {
      if (isConnectedFacebookUser(userId)) {
        retVal = true;
      } else {
        userCookieGenerator.removeCookie(response);
      }
    }
    currentUser.set(userId);
    return retVal;
  }

  private boolean isValidId(String id) {
    return isNotNull(id) && (id.length() > 0);
  }

  private boolean isNotNull(Object obj) {
    return obj != null;
  }

  private boolean isConnectedFacebookUser(String userId) {
    ConnectionRepository connectionRepo = connectionRepository.createConnectionRepository(userId);
    Connection<Facebook> facebookConnection = connectionRepo.findPrimaryConnection(Facebook.class);
    return facebookConnection != null;
  }

  public String getUserId() {
    return currentUser.get();
  }

  public Facebook getFacebook() {
    return facebook;
  }
}

SocialContext implements Spring Social's ConnectionSignUp and SignInAdapter interfaces. It contains three methods: isSignedIn(), signIn() and execute(). isSignedIn() is called by the FacebookPostsController class to implement the logic above, whilst signIn() and execute() are called by Spring Social. From my previous blogs you'll remember that OAuth requires lots of trips between the browser, your app and the SaaS provider. In making these trips the application needs to save the state of several OAuth arguments, such as client_id, redirect_uri and others. Spring Social hides all this complexity from your application by mapping the state of the OAuth conversation to one variable that your webapp controls. This is the userId; however, don't think of this as a user name, because it's never seen by the user: it's just a unique identifier that links a number of HTTP requests to a SaaS provider connection (such as Facebook) in the Spring Social core. Because of its simplicity, I've followed Keith Donald's idea of using cookies to pass the user id between the browser and the server in order to maintain state, and I've also borrowed his UserCookieGenerator class from the Spring Social Quick Start to help me along. The isSignedIn(...) method uses UserCookieGenerator to figure out whether the HttpServletRequest object contains a cookie holding a valid user id. If it does, then it also figures out whether Spring Social's UsersConnectionRepository contains a ConnectionRepository linked to the same user id. If both of these tests return true then the application will request and display the user's Facebook data. If either of the two tests returns false, the user will be asked to sign in.
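The id handling described above can be condensed into a standalone sketch. The class and method names here are illustrative; newUserId() mirrors execute() and isValidId() mirrors the private helper in SocialContext:

```java
import java.util.Random;

public class UserIdSketch {

    // Seeded from the clock, as in SocialContext, to avoid cookie clashes across restarts
    private static final Random rand = new Random(System.currentTimeMillis());

    // Mirrors SocialContext.execute(): an opaque id, never a user-visible name
    static String newUserId() {
        return Long.toString(rand.nextLong());
    }

    // Mirrors SocialContext.isValidId(): non-null and non-empty
    static boolean isValidId(String id) {
        return id != null && id.length() > 0;
    }

    public static void main(String[] args) {
        System.out.println(isValidId(newUserId())); // a generated id is always valid
        System.out.println(isValidId(null));        // no cookie read -> invalid
        System.out.println(isValidId(""));          // empty cookie value -> invalid
    }
}
```

Note that a random long is not guaranteed unique; the sample gets away with it because clashes are merely improbable, not impossible.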
SocialContext has been written specifically for this sample and contains just enough functionality to demonstrate what I'm talking about in this blog. This means that it's currently a little rough and ready, though it could be improved to cover connections to any/many providers and then reused in different applications. The final class to mention is FacebookConfig, which is loosely based upon the Spring Social sample code. There are two main differences between this code and the sample code, the first being that the FacebookConfig class implements the InitializingBean interface. This is so that the usersConnectionRepositiory variable can be injected into the socialContext and, in turn, the socialContext can be injected into the usersConnectionRepositiory as its ConnectionSignUp implementation. The second difference is that I'm implementing a providerSignInController(...) method to provide a correctly configured ProviderSignInController object to be used by Spring Social to sign in to Facebook. The only change to the default I've made here is to set the ProviderSignInController's postSignInUrl property to "/posts". This is the URL of the page that will contain the user's Facebook data and will be called once the user sign-in is complete.
@Configuration
public class FacebookConfig implements InitializingBean {

  private static final Logger logger = LoggerFactory.getLogger(FacebookConfig.class);

  private static final String appId = "439291719425239";

  private static final String appSecret = "65646c3846ab46f0b44d73bb26087f06";

  private SocialContext socialContext;

  private UsersConnectionRepository usersConnectionRepositiory;

  @Inject
  private DataSource dataSource;

  /**
   * Point to note: the name of the bean is either the name of the method
   * 'socialContext' or can be set by an attribute
   *
   * @Bean(name='myBean')
   */
  @Bean
  public SocialContext socialContext() {
    return socialContext;
  }

  @Bean
  public ConnectionFactoryLocator connectionFactoryLocator() {
    logger.info("getting connectionFactoryLocator");
    ConnectionFactoryRegistry registry = new ConnectionFactoryRegistry();
    registry.addConnectionFactory(new FacebookConnectionFactory(appId, appSecret));
    return registry;
  }

  /**
   * Singleton data access object providing access to connections across all
   * users.
   */
  @Bean
  public UsersConnectionRepository usersConnectionRepository() {
    return usersConnectionRepositiory;
  }

  /**
   * Request-scoped data access object providing access to the current user's
   * connections.
   */
  @Bean
  @Scope(value = "request", proxyMode = ScopedProxyMode.INTERFACES)
  public ConnectionRepository connectionRepository() {
    String userId = socialContext.getUserId();
    logger.info("Creating ConnectionRepository for user: " + userId);
    return usersConnectionRepository().createConnectionRepository(userId);
  }

  /**
   * A proxy to a request-scoped object representing the current user's
   * primary Facebook account.
   *
   * @throws NotConnectedException
   *           if the user is not connected to facebook.
   */
  @Bean
  @Scope(value = "request", proxyMode = ScopedProxyMode.INTERFACES)
  public Facebook facebook() {
    return connectionRepository().getPrimaryConnection(Facebook.class).getApi();
  }

  /**
   * Create the ProviderSignInController that handles the OAuth2 stuff and
   * tell it to redirect back to /posts once sign in has completed
   */
  @Bean
  public ProviderSignInController providerSignInController() {
    ProviderSignInController providerSigninController = new ProviderSignInController(connectionFactoryLocator(),
        usersConnectionRepository(), socialContext);
    providerSigninController.setPostSignInUrl("/posts");
    return providerSigninController;
  }

  @Override
  public void afterPropertiesSet() throws Exception {
    JdbcUsersConnectionRepository usersConnectionRepositiory = new JdbcUsersConnectionRepository(dataSource,
        connectionFactoryLocator(), Encryptors.noOpText());
    socialContext = new SocialContext(usersConnectionRepositiory, new UserCookieGenerator(), facebook());
    usersConnectionRepositiory.setConnectionSignUp(socialContext);
    this.usersConnectionRepositiory = usersConnectionRepositiory;
  }
}

Application Flow

If you run this application (see note 2 below) you're first presented with the home screen containing a simple link inviting you to display your posts. The first time you click on this link you're redirected to the /signin page. Pressing the 'sign in' button tells the ProviderSignInController to contact Facebook. Once authentication is complete, the ProviderSignInController directs the app back to the /posts page, and this time it displays the Facebook data.

Configuration

For completeness, I thought that I should mention the XML config, although there's not much of it because I'm using the Spring @Configuration annotation on the FacebookConfig class. I have imported "data.xml" from the Spring Social so that JdbcUsersConnectionRepository works, and added:

<context:component-scan base-package='com.captaindebug.social' />

…for autowiring.

Summary

Although this sample app is based upon connecting your app to your user's Facebook data, it can be easily modified to use any of the Spring Social client modules. If you like a challenge, try implementing Sina-Weibo, where everything's in Chinese – it's a challenge, but Google Translate comes in really useful.
Notes:
1. Spring Social and Other OAuth Blogs: Getting Started with Spring Social; Facebook and Twitter: Behind the Scenes; The OAuth Administration Steps; OAuth 2.0 Webapp Flow Overview
2. The code is available on Github at: https://github.com/roghughe/captaindebug.git

Reference: Getting Started with Spring Social – Part 2 from our JCG partner Roger Hughes at the Captain Debug's Blog blog....

MongoDB in 30 minutes

I have recently been bitten by the NoSQL bug – or, as my colleague Mark Atwell coined it, the "Burn the Where!" movement. While I have no intention of shunning the friendly "SELECT … WHERE" any time soon – or in the foreseeable future – I did manage to get my hands dirty with some code, and in this article I share the knowledge of my first attempts in the NoSQL world. We will aim to get a basic Java application up and running that is capable of interacting with MongoDB. By itself that is not really a tall task, and perhaps you could get there in under 10 minutes with the excellent introductory material available elsewhere. However, I wanted to push it a bit further. I want to add ORM support, and I have chosen Morphia for this article. I also want to add the DAO pattern, unit testing and logging. In short, I want it to feel "almost like" the kind of code that most of us would have written for enterprise applications using, let's say, Java, Hibernate and Oracle. And we will try to do all this in under 30 minutes. My intention is to give a reassurance to Java + RDBMS developers that Java + NoSQL is not very alien. It is similar code and easy to try out. It is perhaps pertinent to add at this point that I have no affinity to NoSQL and no issues with RDBMS; I believe they both have their own uses. As a technologist, I just like knowing my tools better so that I can do justice to my profession. This article is solely aimed at helping like-minded people to dip their toes in the NoSQL world. OK, enough talk. Let's get started. You will need a few tools before you follow this article. Download and install the MongoDB server, if you have not already done so. I am assuming you have Java, a Java IDE and a build and release tool. I am using JDK 7, Eclipse (STS) and Maven 3 for this article. I start by creating a vanilla standard Java application using Maven. I like using a batch file for this.
File: codeRepo\MavenCommands.bat

ECHO OFF
REM =============================
REM Set the env. variables.
REM =============================
SET PATH=%PATH%;C:\ProgramFiles\apache-maven-3.0.3\bin;
SET JAVA_HOME=C:\ProgramFiles\Java\jdk1.7.0

REM =============================
REM Create a simple java application.
REM =============================
call mvn archetype:create ^
  -DarchetypeGroupId=org.apache.maven.archetypes ^
  -DgroupId=org.academy ^
  -DartifactId=dbLayer002

pause

Import it in Eclipse. Use Maven to compile and run just to be sure: mvn -e clean install. This should compile the code and run the default tests as well. Once you have crossed this hurdle, let's get down to some coding. Let's create an Entity object to start with. I think a Person class with an fname field would serve our purpose. I acknowledge that it is trivial, but it does the job for a tutorial.

File: /dbLayer002/src/main/java/org/academy/entity/Person.java

package org.academy.entity;

public class Person {
  private String fname;
  [...]
}

I have not included the getters and setters for brevity. Now let us get the MongoDB Java driver and attach it to the application. I like having Maven do this bit for me. You could obviously do this bit manually as well. Your choice. I am just lazy.

File: /dbLayer002/pom.xml

[...]
<!-- MongoDB java driver to hook up to MongoDB server -->
<dependency>
  <groupId>org.mongodb</groupId>
  <artifactId>mongo-java-driver</artifactId>
  <version>2.7.3</version>
</dependency>
[...]

This will allow us to write a util class for connecting to a MongoDB server instance. I am assuming you have a server up and running on your machine with default settings.
File: /dbLayer002/src/main/java/org/academy/util/MongoUtil.java

public class MongoUtil {

  private final static Logger logger = LoggerFactory.getLogger(MongoUtil.class);

  private static final int port = 27017;
  private static final String host = "localhost";
  private static Mongo mongo = null;

  public static Mongo getMongo() {
    if (mongo == null) {
      try {
        mongo = new Mongo(host, port);
        logger.debug("New Mongo created with [" + host + "] and [" + port + "]");
      } catch (UnknownHostException | MongoException e) {
        logger.error(e.getMessage());
      }
    }
    return mongo;
  }
}

We will need a logger set up in our application for this class to compile. All we need to do is to hook up Maven with the correct dependencies.

File: /dbLayer002/pom.xml

[...]
<slf4j.version>1.6.1</slf4j.version>
[...]
<!-- Logging -->
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-api</artifactId>
  <version>${slf4j.version}</version>
</dependency>
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>jcl-over-slf4j</artifactId>
  <version>${slf4j.version}</version>
  <scope>runtime</scope>
</dependency>
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-log4j12</artifactId>
  <version>${slf4j.version}</version>
  <scope>runtime</scope>
</dependency>

And we will also need to specify exactly what to log.

File: /dbLayer002/src/java/resources/log4j.properties

# Set root logger level to DEBUG and its only appender to A1.
log4j.rootLogger=DEBUG, A1

# configure A1 to spit out data in console
log4j.appender.A1=org.apache.log4j.ConsoleAppender
log4j.appender.A1.layout=org.apache.log4j.PatternLayout
log4j.appender.A1.layout.ConversionPattern=%d [%t] %-5p %c - %m%n

At this point, you already have an Entity class and a utility class to hook up to the database. Ideally I would go about writing a DAO and then somehow use the ORM to join up the DAO with the database. However, Morphia has some excellent DAO support.
It also has some annotations to tie up the Entity with database elements. So, although I would have liked the Entity and DAO to be totally agnostic of the ORM and database, that is not the case here. On the face of it, it sounds as if Morphia or MongoDB is forcing me to deviate from good code structure, but let me hasten to add that it is no worse than other ORMs; Hibernate with annotations, for example, would have forced me to make the same kind of compromise. So, let's bring Morphia into the picture – enter our ever-helpful tool, Maven. A bit of an avoidable hitch here: when I was writing this document I could not get the version of Morphia that I wanted in the central Maven repository, so I had to configure Maven to use the Morphia repository as well. Hopefully this is just a temporary situation.

File: /dbLayer002/pom.xml

[...]
<!-- Required for Morphia -->
<repositories>
  <repository>
    <id>Morphia repository</id>
    <url>http://morphia.googlecode.com/svn/mavenrepo/</url>
  </repository>
</repositories>
[...]
<!-- Morphia - ORM for MongoDB -->
<dependency>
  <groupId>com.google.code.morphia</groupId>
  <artifactId>morphia</artifactId>
  <version>0.98</version>
</dependency>

As I mentioned above, Morphia allows us to annotate the Entity class (much like Hibernate annotations). Here is how the updated Entity class looks.

File: /dbLayer002/src/main/java/org/academy/entity/Person.java

[...]
@Entity
public class Person {
  @Id
  private ObjectId id;
  [...]

And now we can also add a DAO layer and lean on the BasicDAO that Morphia provides.

File: /dbLayer002/src/main/java/org/academy/dao/PersonDAO.java

public class PersonDAO extends BasicDAO<Person, ObjectId> {

  public PersonDAO(Mongo mongo, Morphia morphia, String dbName) {
    super(mongo, morphia, dbName);
  }
  [...]

The beauty of BasicDAO is that, just by extending it, my own DAO class already has enough functionality to do the basic CRUD operations, although I have only added a constructor. Don't believe it?
OK, let's write a test for that.

File: /dbLayer002/src/test/java/org/academy/dao/PersonDAOTest.java

public class PersonDAOTest {
  private final static Logger logger = LoggerFactory.getLogger(PersonDAOTest.class);

  private Mongo mongo;
  private Morphia morphia;
  private PersonDAO personDao;
  private final String dbname = "peopledb";

  @Before
  public void initiate() {
    mongo = MongoUtil.getMongo();
    morphia = new Morphia();
    morphia.map(Person.class);
    personDao = new PersonDAO(mongo, morphia, dbname);
  }

  @Test
  public void test() {
    long counter = personDao.count();
    logger.debug("The count is [" + counter + "]");

    Person p = new Person();
    p.setFname("Partha");
    personDao.save(p);

    long newCounter = personDao.count();
    logger.debug("The new count is [" + newCounter + "]");

    assertTrue((counter + 1) == newCounter);
  [...]

As you might have already noticed, I have used JUnit 4. If you have been following this article as-is until now, you will have an earlier version of JUnit in your project. To ensure that you use JUnit 4, you just have to configure Maven with the correct dependency in pom.xml.

File: /dbLayer002/pom.xml

<!-- Unit test. -->
<dependency>
  <groupId>junit</groupId>
  <artifactId>junit</artifactId>
  <version>4.10</version>
  <scope>test</scope>
</dependency>

You are good to go. If you run the tests, they should pass. Of course you could / should get into your database and check that the data is indeed saved; I will refer you to the MongoDB documentation, which I think is quite decent. Last but not least, let me assure you that BasicDAO is not restrictive in any way. I am sure purists would point out that if my DAO class needs to extend BasicDAO, that is a limitation on the source code structure anyway – ideally it should not have been required – and I agree with that. However, I also want to show that for all practical purposes of a DAO, you have sufficient flexibility. Let's go on and add a custom find method to our DAO that is specific to this Entity, i.e. Person.
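As an aside, the case-insensitive prefix matching that such a finder relies on can be checked in isolation with plain java.util.regex. This is a standalone sketch with illustrative values; note that MongoDB applies the pattern server-side with unanchored $regex semantics, so this only shows what the compiled Java Pattern itself accepts:

```java
import java.util.regex.Pattern;

public class FnamePatternSketch {
    public static void main(String[] args) {
        // Same construction as the finder: user input + ".*", case-insensitive
        String fname = "Pa";
        Pattern regExp = Pattern.compile(fname + ".*", Pattern.CASE_INSENSITIVE);

        System.out.println(regExp.matcher("Partha").matches());  // prefix matches
        System.out.println(regExp.matcher("partha").matches());  // case is ignored
        System.out.println(regExp.matcher("Sartha").matches());  // different prefix
    }
}
```

One design caveat: concatenating raw user input into a pattern means regex metacharacters in the input change the query; Pattern.quote(fname) would treat the input literally.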
Let's say we want to be able to search on the basis of first name, and that we want to use a regular expression for that. Here is how the code will look.

File: /dbLayer002/src/main/java/org/academy/dao/PersonDAO.java

public class PersonDAO extends BasicDAO<Person, ObjectId> {
  [...]
  public Iterator<Person> findByFname(String fname) {
    Pattern regExp = Pattern.compile(fname + ".*", Pattern.CASE_INSENSITIVE);
    return ds.find(entityClazz).filter("fname", regExp).iterator();
  }
  [...]
}

It is important to re-emphasize here that I have just added a custom search function to my DAO by adding precisely three lines of code (four if you count the last parenthesis). In my book, that is quite flexible. Being true to my unflinching love for automated testing, let's add a quick test to check this functionality before we wrap up.

File: /dbLayer002/src/test/java/org/academy/dao/PersonDAOTest.java

[...]
Iterator<Person> iteratorPerson = personDao.findByFname("Pa");
int personCounter = 0;
while (iteratorPerson.hasNext()) {
  personCounter++;
  logger.debug("[" + personCounter + "]" + iteratorPerson.next().getFname());
}
[...]

Before you point it out: yes, there are no asserts here, so this is not a real test. Let me assure you that my test class is indeed complete; it's just that this snippet does not include the assert function. So, that's it. You have used Java to connect to a NoSQL database, using an ORM, through a DAO layer (where you have used the ORM's functionality and made some additions as well). You have also done proper logging and unit testing. Not a bad use of half an hour, eh? Happy coding.

Further reading: a version of this article – slightly edited – is also available at Javalobby.

Reference: MongoDB in 30 minutes from our JCG partner Partho at the Tech for Enterprise blog....

After I/O 2012

From registration to giveaways, the I/O madness goes further each year. Selling out in 20 minutes this year did not stop Google from giving away even more stuff. At this pace, and with the expected release of Google Glass next year, registration will most probably be even more chaotic! So Google, please either stop giving away free stuff (I know you won't) or move to Moscone North and have a bigger I/O! The Google I/O keynotes are something out of this world – more like a rock concert than a tech conference keynote. Extending I/O to three days did not make it less compact: usually there are at least two parallel sessions you want to attend and not much time left for the playground and office hours, but the best thing about I/O is that all sessions are made available online instantly. So here is the wrap-up from I/O 2012!

Android

Google is well aware of the weak points of Android and is doing everything to fix them. Unlike Apple, which is sure of the perfection of its initial design and only makes enhancements to the platform, Google can rapidly change the whole experience. The hardware buttons are gone, the options button is gone, and best practices for the UI have changed a lot. Android started out as a highly flexible OS and, since ICS, has been getting a nicer and more responsive UI with each update, whereas Apple started with a sleek UI but not much functionality and is now adding the missing parts. Google addressed three major problems of Android and did its best to provide solutions. UI experience: with Jelly Bean, the UI and animations are much faster and more responsive. Chrome brings a well-known user experience and great integration with its desktop counterpart. Jelly Bean offers a very smooth and sexy UI, very comparable to the iOS experience. UI of the apps: Android apps tend to have problems with UI because of a lack of tools and the variety of screen sizes and resolutions. The latest toolkit brings everything an app developer would ever need.
Previews and easy fixes for all possible screen sizes, easy problem detection with Lint, a more capable UI designer and a much, much better emulator may hopefully help developers to build higher-quality apps. OS versions: although ICS was released more than six months ago, very few devices have had the chance to upgrade. ICS has less than a 10% share, and most of those devices came pre-bundled with ICS, which means nearly zero updates to current devices. This is a huge problem for the Android ecosystem: whatever Google brings to innovate never gets into the devices in our pockets. Google has released the Platform Development Kit for device manufacturers, which will hopefully address this issue.

Glass

The Explorer edition is on pre-order, and current product demos show Glass is quite near functional, although Google still insists it is at an early development stage – whereas Microsoft claims Surface to be complete although there are no major technical details. Google Glass could be the next revolution after the iPhone. With Google Now, Google Glass will offer location- and context-aware interaction in your daily life. Glass can be controlled in three different ways: head movements, voice, and its touchpad. Although it is not confirmed, there is a rumour of another means of control through a simple ring, which would make sense, since most people wear rings and a ring can offer iPod click-wheel-like control. Glass does not offer a radio and uses Wi-Fi for data. Battery life is still being worked on: with minimal usage it can survive up to a day, but if you are planning to hang out while skydiving it may not survive more than a few hours. It is powered by Android, which offers easy adoption for app developers. The developer-only Explorer edition will ship at the end of this year, so most probably by the time of next I/O we will be experimenting with the finished product.

GWT

The dying rumours are over. It is clear that Google had difficult times after the GWT team dissolved.
They even had a hard time accepting patches, since GWT is heavily used in internal projects. Google has stepped back from the governance of GWT and formed a steering committee. The steering committee is a great chance to move forward, and it is great to see members like Daniel Kurka, who wrote mGWT. The release candidate of version 2.5 is out and the feature set is unbelievable: super dev mode and source maps work like magic, and the compiler is much better optimized.

Google+

Google Events is smart! It was a great strategy to release it just before the I/O party, at which every attendee had at least two Android devices.

Compute Cloud

Finally, the limitations of App Engine are over. Google now offers cloud computing just like Amazon's EC2. With the Compute Cloud, Google can offer a full stack of services.

Google Now

Finally, Google Now, an integrated location- and context-aware service. Google Now can understand that you are at work and about to leave, and tell you the traffic on the way home; or it can understand that you are at the airport, know your flight from your recent searches, guide you to your gate and even inform you if there is a delay. It is amazing – much more seamless and integrated into your life than Siri.

So I/O is over and it was great. I'm just wondering what registration will be like next year, since expectations of a Google Glass freebie are quite high.

Reference: After I/O 2012 from our JCG partner Murat Yener at the Developer Chronicles blog....

Reference to dynamic proxy in a proxied class

There was an interesting question on Stack Overflow about how a Spring bean can get a reference to the proxy created by Spring to handle transactions, Spring AOP, caching, async flows etc. A reference to the proxy was required because, if the proxied bean makes a call to itself, that call completely bypasses the proxy. Consider an InventoryService interface:

public interface InventoryService {
  public Inventory create(Inventory inventory);
  public List<Inventory> list();
  public Inventory findByVin(String vin);
  public Inventory update(Inventory inventory);
  public boolean delete(Long id);
  public Inventory compositeUpdateService(String vin, String newMake);
}

Consider also that there is a default implementation for this service, and assume that the last method, compositeUpdateService, internally invokes two methods on the bean itself, like this:

public Inventory compositeUpdateService(String vin, String newMake) {
  logger.info("composite Update Service called");
  Inventory inventory = this.findByVin(vin);
  inventory.setMake(newMake);
  this.update(inventory);
  return inventory;
}

If I now create an aspect to advise any calls to InventoryService, with the objective of tracking how long each method call takes, Spring AOP will create a dynamic proxy for the InventoryService bean. However, the calls to compositeUpdateService will record the time only at the level of that method; the calls that compositeUpdateService internally makes to findByVin and update bypass the proxy and hence will not be tracked. A good fix for this is to use the full power of AspectJ – AspectJ would change the bytecode of all the methods of DefaultInventoryService to include the call to the advice. A workaround that we worked out was to inject a reference to the proxy itself into the bean and, instead of calling this.findByVin and this.update, call proxy.findByVin and proxy.update!
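The bypass itself can be reproduced with a plain JDK dynamic proxy, independently of Spring. This is a minimal sketch with illustrative names: the handler stands in for the timing advice and simply counts how many calls pass through the proxy:

```java
import java.lang.reflect.Proxy;
import java.util.concurrent.atomic.AtomicInteger;

public class ProxyBypassDemo {

    interface Service {
        String find(String id);
        String composite(String id);
    }

    static class DefaultService implements Service {
        public String find(String id) { return "found:" + id; }
        // Internal call goes through 'this', i.e. the raw target, not the proxy
        public String composite(String id) { return find(id).toUpperCase(); }
    }

    public static void main(String[] args) {
        AtomicInteger advised = new AtomicInteger();
        Service target = new DefaultService();
        Service proxy = (Service) Proxy.newProxyInstance(
                Service.class.getClassLoader(),
                new Class<?>[] { Service.class },
                (p, method, methodArgs) -> {
                    advised.incrementAndGet(); // the "advice": count proxied calls
                    return method.invoke(target, methodArgs);
                });

        proxy.composite("42");
        // Only one call was advised: the nested find(...) bypassed the proxy
        System.out.println(advised.get());
    }
}
```

If composite() had called a proxy reference instead of this, the counter would read 2 – which is exactly the workaround described above.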
So now how do we cleanly inject a reference to the proxy into the bean? The solution that I offered was to create an interface to mark beans interested in their own proxies: public interface ProxyAware<T> { void setProxy(T proxy); } The interested bean's interface and implementation would look like this: public interface InventoryService extends ProxyAware<InventoryService>{ ... }public class DefaultInventoryService implements InventoryService{ ...private InventoryService proxy; @Override public void setProxy(InventoryService proxy) { this.proxy = proxy; } } and then define a BeanPostProcessor to inject this proxy! public class ProxyInjectingBeanPostProcessor implements BeanPostProcessor, Ordered { @Override public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException { return bean; }@Override public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException { if (AopUtils.isAopProxy(bean)){ try { Object target = ((Advised)bean).getTargetSource().getTarget(); if (target instanceof ProxyAware){ ((ProxyAware) target).setProxy(bean); } } catch (Exception e) { return bean; } } return bean; }@Override public int getOrder() { return Integer.MAX_VALUE; } } Not the cleanest of implementations, but it works! Reference: http://blog.springsource.org/2012/05/23/understanding-proxy-usage-in-spring/ Reference: Reference to dynamic proxy in a proxied class from our JCG partner Biju Kunjummen at the all and sundry blog....
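The self-invocation pitfall is easy to reproduce without Spring at all. Below is a minimal sketch using a plain JDK dynamic proxy; the GreetingService interface and the call counter are hypothetical stand-ins, not code from the original project. (For completeness: Spring itself can also hand a bean its proxy via AopContext.currentProxy(), provided the proxy is exposed, e.g. expose-proxy="true" in <aop:config/>.)

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical minimal service; names are illustrative only.
interface GreetingService {
    String greet(String name);
    String greetTwice(String name);
}

class DefaultGreetingService implements GreetingService {
    public String greet(String name) { return "hello " + name; }
    // Self-invocation: this.greet(...) goes straight to the target object,
    // bypassing any proxy that wraps this bean.
    public String greetTwice(String name) { return greet(name) + ", " + greet(name); }
}

public class ProxyBypassDemo {
    static final AtomicInteger advised = new AtomicInteger();

    static GreetingService proxy(GreetingService target) {
        InvocationHandler h = (p, method, args) -> {
            advised.incrementAndGet();          // the "advice": count proxied calls
            return method.invoke(target, args);
        };
        return (GreetingService) Proxy.newProxyInstance(
                GreetingService.class.getClassLoader(),
                new Class<?>[]{GreetingService.class}, h);
    }

    public static void main(String[] args) {
        GreetingService s = proxy(new DefaultGreetingService());
        s.greetTwice("world");
        // Only ONE call went through the proxy: the two internal greet()
        // calls bypassed it, which is exactly the problem described above.
        System.out.println("advised calls: " + advised.get()); // prints 1, not 3
    }
}
```

Replacing the internal `greet(name)` calls with calls on the injected proxy (as the ProxyAware workaround does) would bring the count to 3.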

ZK in Action: MVVM – Working Together with ZK Client API

In the previous posts we've implemented the following functionalities with ZK's MVVM: load data into a table, save data with form binding, delete entries and update the View programmatically. A key distinction between ZK MVVM and ZK MVC, implementation-wise, is that we do not access and manipulate the UI components directly in the controller (ViewModel) class. In this post, we'll see how we can delegate some of the UI manipulation to client side code, as well as how to pass parameters from View to ViewModel. Objective: Build an update function for our simple inventory CRUD feature. Users can edit-in-place entries in the table and are given the choice to update or discard the changes made. Modified entries are highlighted in red. ZK Features in Action: ZK Client-side APIs, ZK Style Class, MVVM: Pass Parameters from View to ViewModel. Implementation in Steps: Enable in-place editing in Listbox so we can edit entries: <listcell> <textbox inplace="true" value="@load(each.name)" ...</textbox> </listcell> .... <listcell> <doublebox inplace="true" value="@load(each.price)" ...</doublebox> </listcell> ...inplace="true" renders input elements such as Textbox without their borders, displaying them as plain labels; the borders appear only if the input element is selected. On lines 2 and 6, "each" refers to each Item object in the data collection. Once an entry is edited we want to give users the option to Update or Discard the change. The Update and Discard buttons need to be visible only if the user has made modifications to the Listbox entries. First we define JavaScript functions to show and hide the Update and Discard buttons: <toolbar> ... 
<span id="edit_btns" visible="false" ...> <toolbarbutton label="Update" .../> <toolbarbutton label="Discard" .../> </span> </toolbar><script type="text/javascript"> function hideEditBtns(){ jq('$edit_btns').hide(); } function showEditBtns(){ jq('$edit_btns').show(); }</script> ...On line 2, we wrap the Update and Discard buttons and set their visibility to false. On lines 9 and 13, we define functions which hide and show the Update and Discard buttons respectively. On lines 11 and 15, we make use of the jQuery selector jq('$edit_btns') to retrieve the ZK widget whose ID is "edit_btns"; notice the selector pattern for a ZK widget ID is '$', not '#'. When entries in the Listbox are modified, we'll make the Update/Discard buttons visible and make the modified values red. Once either the Update or Discard button is clicked, we'd like to hide the buttons again. Since this is pure UI interaction, we'll make use of ZK's client side APIs: <style> .inputs { font-weight: 600; } .modified { color: red; } </style> ... <toolbar xmlns:w="client" > ... <span id="edit_btns" visible="false" ...> <toolbarbutton label="Update" w:onClick="hideEditBtns()" .../> <toolbarbutton label="Discard" w:onClick="hideEditBtns()" .../> </span> </toolbar><script type="text/javascript"> /* show/hide functions */ zk.afterMount(function(){ jq('.inputs').change(function(){ showEditBtns(); $(this).addClass('modified'); }) }); </script> ... <listcell> <doublebox inplace="true" sclass="inputs" value="@load(each.price)" ...</doublebox> </listcell> ...On line 2, we specify a style class for our input elements (Textbox, Intbox, Doublebox, Datebox) and assign it to the input elements' sclass attribute, e.g. line 26; sclass defines the style class for ZK widgets. On lines 18~20, we get all the input elements by matching their sclass name and assign an onChange event handler. Once the value in an input element is changed, the Update/Discard buttons will become visible and the modified value will be highlighted in red. 
On line 17, zk.afterMount is run when ZK widgets are created. On line 6, we specify the client namespace so we can register client side onClick event listeners with the syntax "w:onClick". Note we can still register our usual onClick event listener that's handled at the server simultaneously. On lines 9 and 10, we assign client side onClick event listeners; the hideEditBtns function would be called to make the buttons invisible again. Define a method to store the modified Item objects in a collection so the changes can be updated in batch if the user chooses to do so: public class InventoryVM {private HashSet<Item> itemsToUpdate = new HashSet<Item>(); ...@Command public void addToUpdate(@BindingParam("entry") Item item){ itemsToUpdate.add(item); } On line 6, we annotate this method as a command method so it can be invoked from the View. On line 7, @BindingParam("entry") Item item binds an arbitrarily named parameter called "entry"; we anticipate the parameter will be of type Item. Create a method to update the changes made in the View to the Data Model: public class InventoryVM {private List<Item> items; private HashSet<Item> itemsToUpdate = new HashSet<Item>(); ...@NotifyChange("items") @Command public void updateItems() throws Exception{ for (Item i : itemsToUpdate){ i.setDatemod(new Date()); DataService.getInstance().updateItem(i); } itemsToUpdate.clear(); items = getItems(); } When modifications are made to Listbox entries, invoke the addToUpdate method and pass to it the edited Item object, which in turn is saved to the itemsToUpdate collection: <listitem> <listcell> <doublebox value="@load(each.price) @save(each.price, before='updateItems')" onChange="@command('addToUpdate',entry=each)" /> </listcell> ... </listitem>@save(each.price, before='updateItems') ensures that modified values are not saved unless updateItems is called (i.e. when the user clicks the "Update" button). Finally, when the user clicks Update, we call the updateItems method to update the changes to the Data Model. 
If Discard is clicked, we call getItems to refresh the Listbox without applying any changes: ... <toolbarbutton label="Update" onClick="@command('updateItems')" .../> <toolbarbutton label="Discard" onClick="@command('getItems')" .../> ...In a Nutshell: Under the MVVM pattern we strive to keep the ViewModel code independent of any View components. Since we don't have direct references to the UI components in the ViewModel code, we can delegate the UI manipulation code (in our sample code, show/hide and style changes) to the client using ZK's client side APIs. We can make use of jQuery selectors and APIs at the ZK client side. We can easily pass parameters from View to ViewModel with @BindingParam. Next up, we'll go over a bit more on ZK Styling before we tackle the MVVM validators and converters. ViewModel (ZK in Action[0]~[3]): public class InventoryVM {private List<Item> items; private Item newItem; private Item selected; private HashSet<Item> itemsToUpdate = new HashSet<Item>(); public InventoryVM(){} //CREATE @NotifyChange("newItem") @Command public void createNewItem(){ newItem = new Item("", "",0, 0,new Date()); } @NotifyChange({"newItem","items"}) @Command public void saveItem() throws Exception{ DataService.getInstance().saveItem(newItem); newItem = null; items = getItems(); } @NotifyChange("newItem") @Command public void cancelSave() throws Exception{ newItem = null; } //READ @NotifyChange("items") @Command public List<Item> getItems() throws Exception{ items = DataService.getInstance().getAllItems(); for (Item j : items){ System.out.println(j.getModel()); } Clients.evalJavaScript("zk.afterMount(function(){jq('.inputs').removeClass('modified').change(function(){$(this).addClass('modified');showEditBtns();})});"); //how does afterMount work in this case? 
return items; } //UPDATE @NotifyChange("items") @Command public void updateItems() throws Exception{ for (Item i : itemsToUpdate){ i.setDatemod(new Date()); DataService.getInstance().updateItem(i); } itemsToUpdate.clear(); items = getItems(); } @Command public void addToUpdate(@BindingParam("entry") Item item){ itemsToUpdate.add(item); } //DELETE @Command public void deleteItem() throws Exception{ if (selected != null){ String str = "The item with name \""+selected.getName()+"\" and model \""+selected.getModel()+"\" will be deleted."; Messagebox.show(str,"Confirm Deletion", Messagebox.OK|Messagebox.CANCEL, Messagebox.QUESTION, new EventListener<Event>(){ @Override public void onEvent(Event event) throws Exception { if (event.getName().equals("onOK")){ DataService.getInstance().deleteItem(selected); items = getItems(); BindUtils.postNotifyChange(null, null, InventoryVM.this, "items"); } } }); } else { Messagebox.show("No Item was Selected"); } }public Item getNewItem() { return newItem; }public void setNewItem(Item newItem) { this.newItem = newItem; }public Item getSelected() { return selected; }public void setSelected(Item selected) { this.selected = selected; } } View (ZK in Action[0]~[3]): <zk> <style> .z-toolbarbutton-cnt { font-size: 17px;} .edit-btns {border: 2px solid #7EAAC6; padding: 6px 4px 10px 4px; border-radius: 6px;} .inputs { font-weight: 600; } .modified { color: red; } </style> <script type="text/javascript"> function hideEditBtns(){ jq('$edit_btns').hide(); }function showEditBtns(){ jq('$edit_btns').show(); }zk.afterMount(function(){ jq('.inputs').change(function(){ $(this).addClass('modified'); showEditBtns(); }) }); </script> <window apply="org.zkoss.bind.BindComposer" viewModel="@id('vm') @init('lab.sphota.zk.ctrl.InventoryVM')" xmlns:w="client"> <toolbar width="100%"> <toolbarbutton label="Add" onClick="@command('createNewItem')" /> <toolbarbutton label="Delete" onClick="@command('deleteItem')" disabled="@load(empty vm.selected)" /> <span 
id="edit_btns" sclass="edit-btns" visible="false"> <toolbarbutton label="Update" onClick="@command('updateItems')" w:onClick="hideEditBtns()"/> <toolbarbutton label="Discard" onClick="@command('getItems')" w:onClick="hideEditBtns()" /> </span> </toolbar> <groupbox mold="3d" form="@id('itm') @load(vm.newItem) @save(vm.newItem, before='saveItem')" visible="@load(not empty vm.newItem)"> <caption label="New Item"></caption> <grid width="50%"> <rows> <row> <label value="Item Name" width="100px"></label> <textbox value="@bind(itm.name)" /> </row> <row> <label value="Model" width="100px"></label> <textbox value="@bind(itm.model)" /> </row> <row> <label value="Unit Price" width="100px"></label> <decimalbox value="@bind(itm.price)" format="#,###.00" constraint="no empty, no negative" /> </row> <row> <label value="Quantity" width="100px"></label> <spinner value="@bind(itm.qty)" constraint="no empty,min 0 max 999: Quantity Must be Greater Than Zero" /> </row> <row> <cell colspan="2" align="center"> <button width="80px" label="Save" mold="trendy" onClick="@command('saveItem')" /> <button width="80px" label="Cancel" mold="trendy" onClick="@command('cancelSave')" /> </cell> </row> </rows> </grid> </groupbox> <listbox selectedItem="@bind(vm.selected)" model="@load(vm.items) "> <listhead> <listheader label="Name" sort="auto" hflex="2" /> <listheader label="Model" sort="auto" hflex="1" /> <listheader label="Quantity" sort="auto" hflex="1" /> <listheader label="Unit Price" sort="auto" hflex="1" /> <listheader label="Last Modified" sort="auto" hflex="2" /> </listhead> <template name="model"> <listitem> <listcell> <textbox inplace="true" width="110px" sclass="inputs" value="@load(each.name) @save(each.name, before='updateItems')" onChange="@command('addToUpdate',entry=each)"> </textbox> </listcell> <listcell> <textbox inplace="true" width="110px" sclass="inputs" value="@load(each.model) @save(each.model, before='updateItems')" onChange="@command('addToUpdate',entry=each)" /> 
</listcell> <listcell> <intbox inplace="true" sclass="inputs" value="@load(each.qty) @save(each.qty, before='updateItems')" onChange="@command('addToUpdate',entry=each)" /> </listcell> <listcell> <doublebox inplace="true" sclass="inputs" format="###,###.00" value="@load(each.price) @save(each.price, before='updateItems')" onChange="@command('addToUpdate',entry=each)" /> </listcell> <listcell label="@load(each.datemod)" /> </listitem> </template> </listbox> </window> </zk> Further reading: ZK Client-side Reference; ZK Developer Reference: @BindingParam. Reference: ZK in Action [3] : MVVM – Working Together with ZK Client API from our JCG partner Lance Lu at the Tech Dojo blog....
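The batch-update bookkeeping in the ViewModel can be understood in isolation from ZK. The sketch below is a framework-free, hypothetical reduction of that pattern: a Set collects dirty entries (so editing the same row several times still produces one update), and the batch method writes and then clears. The Item record and the updated list are stand-ins for the real entity and DataService.

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class DirtyTracking {
    record Item(String name) {}

    private final Set<Item> itemsToUpdate = new LinkedHashSet<>();
    final List<Item> updated = new ArrayList<>();   // stands in for DataService

    // corresponds to the @Command addToUpdate(@BindingParam("entry") ...) method
    void addToUpdate(Item item) { itemsToUpdate.add(item); }

    // corresponds to the @Command updateItems() method: batch-write, then clear
    void updateItems() {
        updated.addAll(itemsToUpdate);
        itemsToUpdate.clear();
    }

    public static void main(String[] args) {
        DirtyTracking vm = new DirtyTracking();
        Item a = new Item("widget");
        vm.addToUpdate(a);
        vm.addToUpdate(a);                       // edited twice, tracked once
        vm.addToUpdate(new Item("gadget"));
        vm.updateItems();
        System.out.println(vm.updated.size());   // prints 2
    }
}
```

Using a Set rather than a List is the design point: repeated onChange events for the same entry must not cause duplicate writes in the batch.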

Eat your own dog food, but throw in some unknown biscuits for variety

Well, the example app that Jorge Aliss asked me to write to demonstrate a combination of SecureSocial and Deadbolt is working, and working nicely, and as soon as I've prettied it up a bit I'll be releasing it to GitHub and pushing it out to, probably, Heroku. Deadbolt came about as a result of two projects I worked on that required a specific approach to security that wasn't addressed by any existing modules of the Play! framework. I wrote and released the module, and it's been updated every time a new feature was needed by my own projects or through a request by someone else. To use the vernacular, I eat my own dog food. Let's call me person A, and I eat my own dog food. Other people – let's aggregate them into a single entity, person B…person B also eats my dog food, and delicious it is too. Based on my own experiences, Deadbolt dog food covers all the nutritional needs of my own highly-specialized pedigree dogs, and happily, also satisfies person B's pet food requirements. Good enough…but what about when you suddenly switch dog – and nutritional requirements – and need something completely different? Then you might suddenly find that the dog food you've lovingly created and molded has a huge gap right where the vitamins should be. End of dog food analogy. It's getting annoying. The demo app I've written – SociallySecure – allows you to log in through various social OAuth providers and create an account which can then have other OAuth accounts linked to it. Once they're linked, you can log in through any of them and get into your personal SociallySecure account. The key account is Twitter (at the moment – this is just a demo!) because security is applied to your tweets when someone is viewing your account page. If you're friends, or you've decided your tweets should be public, the user can see all the exceptional and banal things you've tweeted along with those of the people you follow. 
If you've decided your tweets are visible only to friends, when someone views your user page then your tweets are not visible. Fairly simple stuff, but enough to demonstrate dynamic security. Fairly simple stuff – except for the part where I'm not only considering the privileges of the logged-in user, but that user's privileges in relation to another user. Deadbolt offers dynamic checks in the form of the #restrictedresource tag at the view level, and the @RestrictedResource annotation at the controller level. If you're at the controller level, you can already calculate what you need. At the view level, you're pretty much screwed unless you're able to pass in additional information. This is what the latest version of Deadbolt gives you, and without it you have a much more restricted version of dynamic security unless you *really* jump through some hoops. If you're curious about how this works, you can pass a map of string parameters to a tag argument called resourceParameters – this map is then available in your RestrictedResourceHandler. You can use it in two ways: the string key/value pairs can be used by you directly, or the value can be used to do a lookup in the request, or session, or cache, or whatever. The point of this blog post, however, is to emphasize that if you're going to release something as open source, you'll have a much higher chance of meeting developer needs if, once you've covered all your own use cases, you sit down for a couple of hours with another developer and ask them to use it to develop something reasonably complex. It's pretty much what you'll see when releasing a product – you can test it to hell and back, but within a couple of hours of it going live you can pretty much guarantee that a user has found a bug. Think of crucially missing gaps in your code as bugs. 
The code works perfectly well when it's running according to what you have in mind, but as soon as it hits the real world then you'll find that not everyone thinks like you. Reference: Eat your own dog food, but throw in some unknown biscuits for variety from our JCG partner Steve Chaloner at the Objectify blog....
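As a rough illustration of the resourceParameters idea, here is a hypothetical, framework-free handler; it is not Deadbolt's actual RestrictedResourceHandler API, just a sketch of resolving a dynamic, relationship-based check (viewer vs. owner of the tweets) from a passed-in map of parameters.

```java
import java.util.Map;
import java.util.Set;

// Hypothetical stand-in for a dynamic-security handler: the decision depends
// not just on the viewer's own privileges but on their relationship to the
// owner named in the parameters, mirroring the SociallySecure example above.
public class TweetVisibilityCheck {
    private final Set<String> friendPairs;   // "viewer->owner" relationships
    private final Set<String> publicOwners;  // owners whose tweets are public

    public TweetVisibilityCheck(Set<String> friendPairs, Set<String> publicOwners) {
        this.friendPairs = friendPairs;
        this.publicOwners = publicOwners;
    }

    // resourceParameters plays the role of the map passed from the view tag;
    // here the (assumed) key "owner" names the account being viewed.
    public boolean canViewTweets(String viewer, Map<String, String> resourceParameters) {
        String owner = resourceParameters.get("owner");
        if (owner == null) return false;
        if (publicOwners.contains(owner)) return true;       // public tweets
        return friendPairs.contains(viewer + "->" + owner);  // friends only
    }

    public static void main(String[] args) {
        TweetVisibilityCheck check = new TweetVisibilityCheck(
                Set.of("alice->bob"), Set.of("carol"));
        System.out.println(check.canViewTweets("alice", Map.of("owner", "bob")));  // true: friends
        System.out.println(check.canViewTweets("bob", Map.of("owner", "alice"))); // false: not symmetric
    }
}
```

The second lookup style the post mentions (using the value as a key into the request, session or cache) would simply replace the direct `resourceParameters.get("owner")` with a lookup keyed by that value.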

Applying Back Pressure When Overloaded

How should a system respond when under sustained load? Should it keep accepting requests until its response times follow the deadly hockey stick, followed by a crash? All too often this is what happens unless a system is designed to cope with the case of more requests arriving than it is capable of processing. If we are seeing a sustained arrival rate of requests greater than our system is capable of processing, then something has to give. Having the entire system degrade is not the ideal service we want to give our customers. A better approach would be to process transactions at our system's maximum possible throughput rate, while maintaining a good response time, and to reject requests above this arrival rate. Let's consider a small art gallery as a metaphor. In this gallery the typical viewer spends on average 20 minutes browsing, and the gallery can hold a maximum of 30 viewers. If more than 30 viewers occupy the gallery at the same time then customers become unhappy because they cannot have a clear view of the paintings. If this happens they are unlikely to purchase or return. To keep our viewers happy it is better to recommend that some viewers visit the café a few doors down and come back when the gallery is less busy. This way the viewers in the gallery get to see all the paintings without other viewers in the way, and in the meantime those we cannot accommodate enjoy a coffee. If we apply Little's Law (occupancy = arrival rate × time in system, so 30 viewers = arrival rate × 1/3 hour) we cannot have customers arriving at more than 90 per hour, otherwise the maximum capacity is exceeded. If between 9:00-10:00 they are arriving at 100 per hour, then I'm sure the café down the road will appreciate the extra 10 customers. Within our systems the available capacity is generally a function of the size of our thread pools and the time to process individual transactions. These thread pools are usually fronted by queues to handle bursts of traffic above our maximum arrival rate. 
If the queues are unbounded, and we have a sustained arrival rate above the maximum capacity, then the queues will grow unchecked. As the queues grow they increasingly add latency beyond acceptable response times, and eventually they will consume all memory, causing our systems to fail. Would it not be better to send the overflow of requests to the café while still serving everyone else at the maximum possible rate? We can do this by designing our systems to apply “Back Pressure” (Figure 1). Separation of concerns encourages good systems design at all levels. I like to layer a design so that the gateways to third parties are separated from the main transaction services. This can be achieved by having gateways responsible for protocol translation and border security only. A typical gateway could be a web container running Servlets. Gateways accept customer requests, apply appropriate security, and translate the channel protocols for forwarding to the transaction service hosting the domain model. The transaction service may use a durable store if transactions need to be preserved. For example, the state of a chat server domain model may not require preservation, whereas a model for financial transactions must be kept for many years for compliance and business reasons. Figure 1 above is a simplified view of the typical request flow in many systems. Pools of threads in a gateway accept user requests and forward them to a transaction service. Let's assume we have asynchronous transaction services fronted by input and output queues, or similar FIFO structures. 
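A minimal sketch of such a bounded input queue in Java (the sizes are illustrative, and a real gateway would also have worker threads draining the queue): when the queue is full, offer() returns false and the caller can push back, e.g. with an HTTP 503, instead of accumulating unbounded work.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BoundedGateway {
    // Illustrative QoS budget: 4 worker threads, 20 ms per transaction,
    // 200 ms maximum acceptable latency. Rearranging
    //   max latency = (transaction time / threads) * queue length
    // gives queue length = max latency * threads / transaction time = 40.
    static final int THREADS = 4, TXN_MILLIS = 20, MAX_LATENCY_MILLIS = 200;
    static final int QUEUE_LENGTH = MAX_LATENCY_MILLIS * THREADS / TXN_MILLIS;

    final BlockingQueue<Runnable> input = new ArrayBlockingQueue<>(QUEUE_LENGTH);

    // Non-blocking offer: returns false when the queue is full so the caller
    // can reject the request (e.g. answer "server busy" / HTTP 503) rather
    // than letting latency grow without bound.
    boolean submit(Runnable request) {
        return input.offer(request);
    }

    public static void main(String[] args) {
        BoundedGateway gw = new BoundedGateway();
        int accepted = 0, rejected = 0;
        for (int i = 0; i < 50; i++) {        // a burst above capacity
            if (gw.submit(() -> {})) accepted++; else rejected++;
        }
        // No workers are draining in this sketch, so the burst fills the
        // queue and the overflow is pushed back on the caller.
        System.out.println(accepted + " accepted, " + rejected + " rejected");
        // -> 40 accepted, 10 rejected
    }
}
```

A blocking `put()` instead of `offer()` would propagate the back pressure by stalling the receiving thread, which is the TCP-buffer-style behaviour described below.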
If we want the system to meet a response time quality-of-service (QoS) guarantee, then we need to consider the following three variables: the time taken for individual transactions on a thread; the number of threads in a pool that can execute transactions in parallel; and the length of the input queue, which sets the maximum acceptable latency: max latency = (transaction time / number of threads) * queue length, or equivalently, queue length = max latency / (transaction time / number of threads). By allowing the queue to be unbounded the latency will continue to increase, so if we want to set a maximum response time then we need to limit the queue length. By bounding the input queue we block the thread receiving network packets, which applies back pressure upstream. If the network protocol is TCP, similar back pressure is applied to the sender via the filling of network buffers. This process can repeat all the way back via the gateway to the customer. For each service we need to configure the queues so that they do their part in achieving the required quality-of-service for the end-to-end customer experience. One of the biggest wins I often find is to improve the time taken to process individual transactions. This helps in both the best and worst case scenarios. Worst Case Scenario: Let's say the queue is unbounded and the system is under sustained heavy load. Things can begin to go wrong very quickly in subtle ways before memory is exhausted. What do you think will happen when the queue is larger than the processor cache? The consumer threads will be suffering cache misses just at the time when they are struggling to keep up, thus compounding the problem. This can cause a system to get into trouble very quickly and eventually crash. Under Linux this is particularly nasty because malloc, or one of its friends, will succeed because Linux allows “Over Commit” by default; then later, at the point of using that memory, the OOM Killer will start shooting processes. 
When the OS starts shooting processes, you just know things are not going to end well! What About Synchronous Designs? You may say that with synchronous designs there are no queues. Well, not such obvious ones. If you have a thread pool then it will have lock, or semaphore, wait queues to assign threads. If you are crazy enough to allocate a new thread on every request, then once you are over the huge cost of thread creation, your thread is in the run queue waiting for a processor to execute it. Also, these queues involve context switches and condition variables which greatly increase the costs. You just cannot run away from queues, they are everywhere! Best to embrace them and design for the quality-of-service your system needs to deliver to its customers. If we must have queues, then design for them, and maybe choose some nice lock-free ones with great performance. When we need to support synchronous protocols like REST then use back pressure, signalled by our full incoming queue at the gateway, to send a meaningful “server busy” message such as the HTTP 503 status code. The customer can then interpret this as time for a coffee and cake at the café down the road. Subtleties To Watch Out For: You need to consider the whole end-to-end service. What if a client is very slow at consuming data from your system? It could tie up a thread in the gateway, taking it out of action. Now you have fewer threads working the queue so the response time will be increasing. Queues and threads need to be monitored, and appropriate action needs to be taken when thresholds are crossed. For example, when a queue is 70% full, maybe an alert should be raised so an investigation can take place? Also, transaction times need to be sampled to ensure they are in the expected range. Summary: If we do not consider how our systems will behave when under heavy load then they will most likely seriously degrade at best, and at worst crash. 
When they crash this way, we get to find out if there are any really evil data corruption bugs lurking in those dark places. Applying back pressure is one effective technique for coping with sustained high load, such that maximum throughput can be delivered without degrading system performance for the already accepted requests and transactions. Reference: Applying Back Pressure When Overloaded from our JCG partner Martin Thompson at the Mechanical Sympathy blog....

Project Jigsaw Booted from Java 8?

In his post Project Jigsaw: Late for the train, Mark Reinhold proposes 'to defer Project Jigsaw to the next release, Java 9.' He explains the reasoning for this: 'some significant technical challenges remain' and there is 'not enough time left for the broad evaluation, review, and feedback which such a profound change to the Platform demands.' Reinhold also proposes 'to aim explicitly for a regular two-year release cycle going forward.' Based on the comments on that post, it seems that this news is not being particularly well received by the Java developer community. Markus Karg writes, 'In fact it is a bit ridiculous that Jigsaw is stripped from JDK 8 as it was already stripped from JDK 7. … Just give up the idea and use Maven.' Jon Fisher writes, 'I don't think this is a good idea for the java platform. … Delaying this will only turn java in to a leagacy technology.' The comment from ninja is, 'Whatever route you guys decide to go, I think it's time to prioritize Java the platform ahead of Java the language.' Although this news is generally receiving unfavorable reviews from the Java developer community, the explanations do differ to some degree. Some of those commenting think the modularization of Project Jigsaw is needed now (it may already be too late), others think OSGi (or Maven or Ivy) should be used instead and Project Jigsaw abandoned, others would rather get other new features and aren't worried about the modularization being pushed to Java 9, and others simply want to use Groovy or Scala instead. The question was posed whether other features of Java 8 should be dropped in favor of Jigsaw. As one of the two 'flagship' features of Java 8 (lambda expressions being the other one), I too am disappointed to see that it is likely that modularity will be delayed until Java 9. 
However, Reinhold points out that if the proposal to jettison Jigsaw from Java 8 is accepted, 'Java 8 will ship on time, around September 2013' and is planned to 'include the widely-anticipated Project Lambda (JSR 335), the new Date/Time API (JSR 310), Type Annotations (JSR 308), and a selection of the smaller features already in progress.' I really want a new Date/Time API and I think the lambda expressions will dramatically improve what we can do in Java. Because of this, I'll be excited to get my hands on Java 8 even without modularity. Reference: Project Jigsaw Booted from Java 8? from our JCG partner Dustin Marx at the Inspired by Actual Events blog....

5 tips for proper Java Heap size

Determination of the proper Java Heap size for a production system is not a straightforward exercise. In my Java EE enterprise experience, I have seen multiple performance problem cases due to inadequate Java Heap capacity and tuning. This article will provide you with 5 tips that can help you determine the optimal Java Heap size, as a starting point, for your current or new production environment. Some of these tips are also very useful regarding the prevention and resolution of java.lang.OutOfMemoryError problems, including memory leaks. Please note that these tips are intended to “help you” determine proper Java Heap size. Since each IT environment is unique, you are actually in the best position to determine precisely the required Java Heap specifications of your client's environment. Some of these tips may also not be applicable in the context of a very small Java standalone application, but I still recommend you read the entire article. Future articles will include tips on how to choose the proper Java VM garbage collector type for your environment and applications. #1 – JVM: you always fear what you don't understand How can you expect to configure, tune and troubleshoot something that you don't understand? You may never have the chance to write and improve Java VM specifications but you are still free to learn its foundations in order to improve your knowledge and troubleshooting skills. Some may disagree, but from my perspective, the thinking that Java programmers are not required to know the internal JVM memory management is an illusion. Java Heap tuning and troubleshooting can especially be a challenge for Java & Java EE beginners. Find below a typical scenario: – Your client production environment is facing OutOfMemoryError on a regular basis, causing a lot of business impact. 
Your support team is under pressure to resolve this problem – A quick Google search allows you to find examples of similar problems and you now believe (and assume) that you are facing the same problem – You then grab the JVM -Xms and -Xmx values from another person's OutOfMemoryError problem case, hoping to quickly resolve your client's problem – You then proceed and implement the same tuning in your environment. 2 days later you realize the problem is still happening (even worse or a little better)…the struggle continues… What went wrong? – You failed to first acquire a proper understanding of the root cause of your problem – You may also have failed to properly understand your production environment at a deeper level (specifications, load situation etc.). Web searches are a great way to learn and share knowledge but you have to perform your own due diligence and root cause analysis – You may also be lacking some basic knowledge of the JVM and its internal memory management, preventing you from connecting all the dots together My #1 tip and recommendation to you is to learn and understand the basic JVM principles along with its different memory spaces. Such knowledge is critical as it will allow you to make valid recommendations to your clients and properly understand the possible impact and risk associated with future tuning considerations. Now find below a quick high-level reference guide for the Java VM: The Java VM memory is split into 3 memory spaces: the Java Heap, applicable for all JVM vendors, usually split between YoungGen (nursery) & OldGen (tenured) spaces; the PermGen (permanent generation), applicable to the Sun HotSpot VM only (the PermGen space will be removed in future Java 7 or Java 8 updates); and the Native Heap (C-Heap), applicable for all JVM vendors. I recommend that you review each article below, including the Sun white paper on HotSpot Java memory management. I also encourage you to download and look at the OpenJDK implementation. 
## Sun HotSpot VM http://javaeesupportpatterns.blogspot.com/2011/08/java-heap-space-hotspot-vm.html ## IBM VM http://javaeesupportpatterns.blogspot.com/2012/02/java-heap-space-ibm-vm.html ## Oracle JRockit VM http://javaeesupportpatterns.blogspot.com/2012/02/java-heap-space-jrockit-vm.html ## Sun (Oracle) – Java memory management white paper http://java.sun.com/j2se/reference/whitepapers/memorymanagement_whitepaper.pdf ## OpenJDK – Open-source Java implementation http://openjdk.java.net/ As you can see, Java VM memory management is more complex than just setting up the biggest value possible via -Xmx. You have to look at all angles, including your native and PermGen space requirements along with physical memory availability (and # of CPU cores) from your physical host(s). It can get especially tricky for a 32-bit JVM since the Java Heap and native Heap are in a race. The bigger your Java Heap, the smaller the native Heap. Attempting to set up a large Heap for a 32-bit VM, e.g. 2.5 GB+, increases the risk of native OutOfMemoryError depending on your application(s) footprint, number of Threads etc. A 64-bit JVM resolves this problem but you are still limited by physical resource availability and garbage collection overhead (the cost of major GC collections goes up with size). The bottom line is that bigger is not always better, so please do not assume that you can run all your 20 Java EE applications on a single 16 GB 64-bit JVM process. #2 – Data and application is king: review your static footprint requirement Your application(s) along with its associated data will dictate the Java Heap footprint requirement. By static memory, I mean “predictable” memory requirements as per below. – Determine how many different applications you are planning to deploy to a single JVM process e.g. number of EAR files, WAR files, jar files etc. 
The more applications you deploy to a single JVM, the higher the demand on the native Heap.
– Determine how many Java classes will potentially be loaded at runtime, including third-party APIs. The more class loaders and classes you load at runtime, the higher the demand on the HotSpot VM PermGen space and internal JIT-related optimization objects.
– Determine the data cache footprint, e.g. internal cache data structures loaded by your application (and third-party APIs) such as cached data from a database, data read from a file etc. The more data caching you use, the higher the demand on the Java Heap OldGen space.
– Determine the number of Threads that your middleware is allowed to create. This is very important since Java threads require enough native memory or an OutOfMemoryError will be thrown.
For example, you will need much more native memory and PermGen space if you are planning to deploy 10 separate EAR applications on a single JVM process vs. only 2 or 3. Data caching not serialized to a disk or database will require extra memory from the OldGen space. Try to come up with reasonable estimates of the static memory footprint requirement. This will be very useful for setting up some starting-point JVM capacity figures before your true measurement exercise (e.g. tip #4). For a 32-bit JVM, I usually do not recommend a Java Heap size higher than 2 GB (-Xms2048m, -Xmx2048m), since you need enough memory for the PermGen and native Heap for your Java EE applications and threads. This assessment is especially important since too many applications deployed in a single 32-bit JVM process can easily lead to native Heap depletion, especially in a multi-threaded environment. For a 64-bit JVM, a Java Heap size of 3 GB or 4 GB per JVM process is usually my recommended starting point.
#3 – Business traffic sets the rules: review your dynamic footprint requirement
Your business traffic will typically dictate your dynamic memory footprint.
Concurrent users & requests generate the JVM GC “heartbeat” that you can observe from various monitoring tools, due to the very frequent creation and garbage collection of short- & long-lived objects. As you saw from the above JVM diagram, a typical ratio of YoungGen vs. OldGen is 1:3, or 33%. For a typical 32-bit JVM, a Java Heap set up at 2 GB (using the generational & concurrent collector) will typically allocate 500 MB for the YoungGen space and 1.5 GB for the OldGen space. Minimizing the frequency of major GC collections is a key aspect of optimal performance, so it is very important that you understand and estimate how much memory you need during your peak volume. Again, your type of application and data will dictate how much memory you need. Shopping-cart type applications (long-lived objects) involving large and non-serialized session data typically need a large Java Heap and a lot of OldGen space. Stateless and XML-processing-heavy applications (lots of short-lived objects) require proper YoungGen space in order to minimize the frequency of major collections. Example:
– You have 5 EAR applications (~2,000 Java classes) to deploy (which include middleware code as well…)
– Your native heap requirement is estimated at 1 GB (it has to be large enough to handle Thread creation etc.)
– Your PermGen space is estimated at 512 MB
– Your internal static data caching is estimated at 500 MB
– Your total forecast traffic is 5000 concurrent users at peak hours
– Each user session data footprint is estimated at 500 KB
– The total footprint requirement for session data alone is 2.5 GB under peak volume
As you can see, with such requirements there is no way you can have all this traffic sent to a single 32-bit JVM process. A typical solution involves splitting (tip #5) traffic across a few JVM processes and / or physical hosts (assuming you have enough hardware and CPU cores available).
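The arithmetic behind the example can be sketched in a few lines; the constants are simply the estimates listed above:

```java
public class FootprintEstimate {
    public static void main(String[] args) {
        // Estimates from the example above
        long nativeHeapMb  = 1024; // native heap (Threads etc.)
        long permGenMb     = 512;  // PermGen space
        long staticCacheMb = 500;  // internal static data caching
        int  peakUsers     = 5000; // concurrent users at peak hours
        long sessionKb     = 500;  // session data footprint per user

        long sessionTotalMb = (long) peakUsers * sessionKb / 1024; // ~2.5 GB
        long totalMb = nativeHeapMb + permGenMb + staticCacheMb + sessionTotalMb;
        System.out.println("Session data alone:   " + sessionTotalMb + " MB");
        System.out.println("Total rough estimate: " + totalMb + " MB");
        // Well beyond what a single 32-bit process can address
    }
}
```

Even before load testing, summing up the estimates like this makes it obvious that a single 32-bit process cannot carry the example workload.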
However, for this example, given the high demand on static memory and to ensure a scalable environment in the long run, I would also recommend a 64-bit VM, but with a smaller Java Heap as a starting point, such as 3 GB, to minimize the GC cost. You definitely want to have an extra buffer for the OldGen space, so I typically recommend at most a 50% memory footprint post major collection, in order to keep the frequency of Full GCs low and leave enough buffer for fail-over scenarios. Most of the time, your business traffic will drive most of your memory footprint, unless you need a significant amount of data caching to achieve proper performance, which is typical for portal (media) heavy applications. Too much data caching should raise a yellow flag that you may need to revisit some design elements sooner rather than later.
#4 – Don’t guess it, measure it!
At this point you should:
– Understand the basic JVM principles and memory spaces
– Have a deep view and understanding of all applications along with their characteristics (size, type, dynamic traffic, stateless vs. stateful objects, internal memory caches etc.)
– Have a very good view or forecast of the business traffic (# of concurrent users etc.) for each application
– Have some idea whether you need a 64-bit VM or not and which JVM settings to start with
– Have some idea whether you need more than one JVM (middleware) process
But wait, your work is not done yet. While the above information is crucial for coming up with “best guess” Java Heap settings, it is always best and recommended to simulate your application(s) behaviour and validate the Java Heap memory requirement via proper profiling, load & performance testing. You can learn and take advantage of tools such as JProfiler (future articles will include tutorials on JProfiler). From my perspective, learning how to use a profiler is the best way to properly understand your application memory footprint.
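As a first “measure it” step before reaching for a full profiler, the JDK’s own MXBeans already report the live counts behind the static footprint drivers discussed earlier: loaded classes (PermGen demand) and threads (native Heap demand). A minimal sketch, standard JDK only:

```java
import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class StaticFootprintProbe {
    public static void main(String[] args) {
        ClassLoadingMXBean classes = ManagementFactory.getClassLoadingMXBean();
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        // Loaded classes drive PermGen demand on HotSpot
        System.out.println("Loaded classes: " + classes.getLoadedClassCount());
        // Each live thread reserves native stack memory (-Xss per thread)
        System.out.println("Live threads:   " + threads.getThreadCount());
        System.out.println("Peak threads:   " + threads.getPeakThreadCount());
    }
}
```

Sampling these counters under load gives you hard numbers to compare against the estimates from tips #2 and #3.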
Another approach I use for existing production environments is heap dump analysis using the Eclipse MAT tool. Heap dump analysis is very powerful: it allows you to view and understand the entire memory footprint of the Java Heap, including class-loader-related data, and is a must-do exercise in any memory footprint analysis, especially for memory leaks. Java profilers and heap dump analysis tools allow you to understand and validate your application memory footprint, including the detection and resolution of memory leaks. Load and performance testing is also a must, since this will allow you to validate your earlier estimates by simulating your forecast concurrent users. It will also expose your application bottlenecks and allow you to further fine-tune your JVM settings. You can use tools such as Apache JMeter, which is very easy to learn and use, or explore other commercial products. Finally, I have quite often seen Java EE environments running perfectly fine until the day one piece of the infrastructure starts to fail, e.g. a hardware failure. Suddenly the environment is running at reduced capacity (a reduced # of JVM processes) and the whole environment goes down. What happened? There are many scenarios that can lead to domino effects, but a lack of JVM tuning and capacity to handle fail-over (short-term extra load) is very common. If your JVM processes are running at 80%+ OldGen space capacity with frequent garbage collections, how can you expect to handle any fail-over scenario? Your load and performance testing exercise performed earlier should simulate such a scenario, and you should adjust your tuning settings properly so your Java Heap has enough of a buffer to handle extra load (extra objects) in the short term. This is mainly applicable to the dynamic memory footprint, since fail-over means redirecting a certain % of your concurrent users to the available JVM processes (middleware instances).
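To feed Eclipse MAT you first need a heap dump. Besides the usual jmap command, a dump can be triggered programmatically on the HotSpot VM through its diagnostic MBean; note that this MBean is HotSpot-specific and is not available on the IBM or JRockit VMs. A minimal sketch:

```java
import java.io.File;
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    public static void main(String[] args) throws Exception {
        HotSpotDiagnosticMXBean diag = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        File dump = new File("heap.hprof");
        if (dump.exists()) dump.delete();            // dumpHeap refuses to overwrite
        diag.dumpHeap(dump.getAbsolutePath(), true); // true = only live (reachable) objects
        System.out.println("Heap dump written: " + dump.length() + " bytes");
    }
}
```

The resulting .hprof file can be opened directly in MAT; dumping only live objects keeps the file smaller and is usually what you want for footprint analysis.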
#5 – Divide and conquer At this point you have performed dozens of load testing iterations. You know that your JVM is not leaking memory. Your application memory footprint cannot be reduced any further. You have tried several tuning strategies, such as a large 64-bit Java Heap space of 10 GB+ and multiple GC policies, but you still do not find the performance level acceptable? In my experience I have found that, with current JVM specifications, proper vertical and horizontal scaling, which involves creating a few JVM processes per physical host and across several hosts, will give you the throughput and capacity that you are looking for. Your IT environment will also be more fault-tolerant if you break your application list into a few logical silos, each with its own JVM process, Threads and tuning values. This “divide and conquer” strategy involves splitting your application(s) traffic across multiple JVM processes and will provide you with:
– A reduced Java Heap size per JVM process (both static & dynamic footprint)
– Reduced complexity of JVM tuning
– Reduced GC elapsed and pause time per JVM process
– Increased redundancy and fail-over capabilities
– Alignment with the latest Cloud and IT virtualization strategies
The bottom line is that when you find yourself spending too much time tuning that single elephant 64-bit JVM process, it is time to revisit your middleware and JVM deployment strategy and take advantage of vertical & horizontal scaling. This implementation strategy is more taxing on the hardware but will really pay off in the long run. Please provide any comments and share your experience on JVM Heap sizing and tuning. Reference: 5 tips for proper Java Heap size from our JCG partner Pierre-Hugues Charbonneau at the Java EE Support Patterns & Java Tutorial blog....

GlassFish Operations: Log Notifications

Some of the most prominent requirements for application servers derive from the operations space. Taking this into account, the next Java EE platform specification will focus entirely on platform as a service (PaaS) and cloud operations. Looking at what we have today still leaves a couple of questions unanswered. The one I get asked quite a lot is: ‘How do I configure GlassFish to receive notifications/alerts/messages on important log entries?’. Seems to be a good topic to blog about. Application Logging vs. System Logging vs. Monitoring You basically have three options here: you can integrate some notification magic into your application logging, go with system logging, or take the more classic monitoring approach. However, the differences should be clear. By default GlassFish does not provide any third-party logging integration. Whichever logging route you take from a framework perspective, you will end up logging application-specific events. If you are looking for some kind of application-server-specific notifications, you have to take the system logging or the monitoring road. System Logging The easiest configuration I have ever done. GlassFish supports Unix Syslog. By checking the ‘Write to system log’ box in the ‘Logger Settings’ of your desired configuration you enable this feature. In fact this works like a charm but has a couple of drawbacks. Syslog is a protocol that allows a machine to send event notification messages across IP networks to event message collectors – also known as Syslog Servers or Syslog Daemons. It’s a connection-less, UDP-based IP protocol – some kind of broadcast. If you want to react to ERRORs or other severity messages, you have to use the facilities that come with your syslog server/daemon. This might be a more highly sophisticated appliance (STRM, Log Manager) or a piece of software. Most notably, the syslog format isn’t encrypted in any way, so you have to be careful about configuring this.
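To see how simple the connection-less transport really is, here is a sketch that emits a single RFC 3164 style syslog datagram from plain Java. The host and port are placeholders for your own collector; the priority value 131 encodes facility local0 with severity ERROR (facility * 8 + severity = 16 * 8 + 3):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class SyslogSender {
    public static void main(String[] args) throws Exception {
        // RFC 3164 style payload: <priority>tag: message text
        String msg = "<131>glassfish: SEVERE something bad happened";
        byte[] data = msg.getBytes(StandardCharsets.US_ASCII);
        try (DatagramSocket socket = new DatagramSocket()) {
            // 514 is the conventional syslog UDP port
            socket.send(new DatagramPacket(data, data.length,
                    InetAddress.getByName("localhost"), 514));
        }
        System.out.println("sent: " + msg);
    }
}
```

Because UDP is fire-and-forget, the sender never learns whether a collector actually received the message – exactly the broadcast-like property described above.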
Syslog-ng isn’t supported as of the latest 3.1.2 release. And some more hints if you are interested in how this is done: have a look at com.sun.enterprise.server.logging.Syslog and SyslogHandler. You will see that you can only send messages to localhost; there is no way to configure that. You have to use syslog forwarding if you want that stuff to end up on another machine. Running on Windows requires you to install one of the many open or closed source software products. I tested Star SysLog Daemon Lite (http://www.thestarsoftware.com/syslogdaemonlite.html) and was quite happy with the results. One last note: if you are stumbling over older Google search results referring to something called the ‘GlassFish Performance Advisor’… that actually no longer exists. It hasn’t been ported to the 3.x branch. Application Logging In fact, GlassFish doesn’t provide a special logging framework integration, unlike WebLogic server. So going this way you will lose the core GlassFish system logs and can only focus on application-specific logging. And that is where the integration actually happens: on the application level. Almost all recent frameworks (Log4j, LogBack) have a couple of providers for you to use out of the box. In terms of notifications, email is still the simplest way to go. For that you have to look for the right appender (SMTP). Both LogBack ( SMTPAppender ) and Log4j ( SMTPAppender ) offer something here. Configuration is straightforward and only requires you to input some details about your SMTP infrastructure. Don’t forget to set the right logging level here: you are definitely not willing to have all DEBUG-level messages sent to your inbox. Still thinking about the syslog thing? Both Log4j and LogBack can produce syslog messages with their SyslogAppenders ( log4j, logback). But all the above-mentioned drawbacks also apply here. On top of that, you will not be able to receive the GlassFish core log messages with your syslog server.
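As an illustration of the LogBack variant, a minimal SMTPAppender configuration might look like the sketch below. The host names and addresses are placeholders for your own SMTP infrastructure; by default LogBack's SMTPAppender fires an email when an ERROR-level event arrives:

```xml
<configuration>
  <appender name="EMAIL" class="ch.qos.logback.classic.net.SMTPAppender">
    <smtpHost>smtp.example.com</smtpHost>   <!-- placeholder: your SMTP host -->
    <to>ops-team@example.com</to>           <!-- placeholder: alert recipient -->
    <from>glassfish@example.com</from>
    <subject>GlassFish alert: %logger{20} - %m</subject>
    <layout class="ch.qos.logback.classic.PatternLayout">
      <pattern>%date %-5level %logger{35} - %message%n</pattern>
    </layout>
  </appender>
  <!-- keep the level at ERROR so DEBUG noise never reaches your inbox -->
  <root level="ERROR">
    <appender-ref ref="EMAIL" />
  </root>
</configuration>
```

The Log4j equivalent is configured along the same lines with its own SMTPAppender class.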
There might be very rare and special situations where the framework SyslogAppenders would be of any help. Monitoring One last thing to mention is the monitoring approach. A couple of monitoring suites can actually read your logfiles (keyword: logfile adapters) and you can configure your logging solution to react to certain patterns. While I don’t consider that elegant, it might be a stable solution to work with, and I have seen this a lot in the wild. Another approach could be to use JMX or even the GlassFish admin REST interface to find out about the relevant metrics. But neither provides access to the logging subsystem. Reference: GlassFish Operations: Log Notifications from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.