

Quartz scheduler misfire instructions explained

Sometimes Quartz is not capable of running your job at the time you desired. There are three reasons for that:

- all worker threads were busy running other jobs (probably with higher priority)
- the scheduler itself was down
- the job was scheduled with a start time in the past (probably a coding error)

You can increase the number of worker threads simply by customizing org.quartz.threadPool.threadCount in quartz.properties (the default is 10). But you cannot really do anything when the whole application/server/scheduler was down. The situation when Quartz was incapable of firing a given trigger is called a misfire. Do you know what Quartz does when it happens? It turns out there are various strategies (called misfire instructions) Quartz can take, and there are also defaults if you haven't thought about it. But in order to make your application robust and predictable (especially under heavy load or maintenance) you should really make sure your triggers and jobs are configured consciously. There are different configuration options (available misfire instructions) depending on the trigger chosen. Quartz also behaves differently depending on the trigger setup (the so-called smart policy). Although the misfire instructions are described in the documentation, I found it hard to understand what they really mean, so I created this small summary article. Before I dive into the details, there is one more configuration option that should be described: org.quartz.jobStore.misfireThreshold (in milliseconds), defaulting to 60000 (one minute). It defines how late a trigger must be to be considered misfired. With the default setup, if a trigger was supposed to fire 30 seconds ago, Quartz will happily just run it; such a delay is not considered a misfire. However, if the trigger is discovered 61 seconds after the scheduled time, a special misfire handler thread takes care of it, obeying the misfire instruction.
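To make the threshold semantics concrete, here is a small stdlib-only Java sketch. It does not use Quartz itself; the class and method names are my own, and it merely mirrors the rule just described: a trigger is treated as misfired only once it is later than org.quartz.jobStore.misfireThreshold.

```java
import java.time.Duration;
import java.time.Instant;

public class MisfireThresholdDemo {

    // A trigger counts as misfired only when it is *more* than
    // thresholdMillis late -- smaller delays are simply run as usual.
    static boolean isMisfired(Instant scheduledFireTime, Instant now, long thresholdMillis) {
        long latenessMillis = Duration.between(scheduledFireTime, now).toMillis();
        return latenessMillis > thresholdMillis;
    }

    public static void main(String[] args) {
        Instant scheduled = Instant.parse("2012-01-01T09:00:00Z");
        long defaultThreshold = 60_000; // Quartz default: one minute

        // 30 seconds late: Quartz just fires the trigger, no misfire handling.
        System.out.println(isMisfired(scheduled, scheduled.plusSeconds(30), defaultThreshold)); // false

        // 61 seconds late: the misfire handler thread applies the misfire instruction.
        System.out.println(isMisfired(scheduled, scheduled.plusSeconds(61), defaultThreshold)); // true
    }
}
```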
For test purposes we will set this parameter to 1000 (1 second) so that we can test misfiring quickly.

Simple trigger without repeating

In our first example we will see how misfiring is handled by simple triggers scheduled to run only once:

val trigger = newTrigger().
  startAt(DateUtils.addSeconds(new Date(), -10)).
  build()

The same trigger but with an explicitly set misfire instruction:

val trigger = newTrigger().
  startAt(DateUtils.addSeconds(new Date(), -10)).
  withSchedule(
    simpleSchedule().
      withMisfireHandlingInstructionFireNow() //MISFIRE_INSTRUCTION_FIRE_NOW
  ).
  build()

For the purpose of testing I am simply scheduling the trigger to run 10 seconds ago (so it is 10 seconds late by the time it is created!). In the real world you would normally never schedule triggers like that. Instead, imagine the trigger was set up correctly, but by the time it was supposed to fire the scheduler was down or didn't have any free worker threads. Nevertheless, how will Quartz handle this extraordinary situation? In the first code snippet above no misfire handling instruction is set (the so-called smart policy is used in that case). The second snippet explicitly defines what kind of behaviour we expect when a misfire occurs. See the table.

Simple trigger repeating a fixed number of times

This scenario is much more complicated. Imagine we have scheduled some job to repeat a fixed number of times:

val trigger = newTrigger().
  startAt(dateOf(9, 0, 0)).
  withSchedule(
    simpleSchedule().
      withRepeatCount(7).
      withIntervalInHours(1).
      withMisfireHandlingInstructionFireNow() //or other
  ).
  build()

In this example the trigger is supposed to fire 8 times (first execution + 7 repetitions) every hour, beginning at 9 AM today (startAt(dateOf(9, 0, 0))). Thus the last execution should occur at 4 PM. However, assume that for some reason the scheduler was not capable of running jobs at 9 and 10 AM, and it discovered that fact at 10:15 AM, i.e. 2 firings misfired.
How will the scheduler behave in this situation?

Simple trigger repeating infinitely

In this scenario the trigger repeats an infinite number of times at a given interval:

val trigger = newTrigger().
  startAt(dateOf(9, 0, 0)).
  withSchedule(
    simpleSchedule().
      withRepeatCount(SimpleTrigger.REPEAT_INDEFINITELY).
      withIntervalInHours(1).
      withMisfireHandlingInstructionFireNow() //or other
  ).
  build()

Once again the trigger should fire every hour, beginning at 9 AM today (startAt(dateOf(9, 0, 0))). However, the scheduler was not capable of running jobs at 9 and 10 AM, and it discovered that fact at 10:15 AM, i.e. 2 firings misfired. This is a more general situation compared to a simple trigger running a fixed number of times.

CRON triggers

CRON triggers are the most popular ones amongst Quartz users. However, there are also two other available triggers: DailyTimeIntervalTrigger (e.g. fire every 25 minutes) and CalendarIntervalTrigger (e.g. fire every 5 months). They support triggering policies not possible with either CRON or simple triggers, but they understand the same misfire handling instructions as the CRON trigger.

val trigger = newTrigger().
  withSchedule(
    cronSchedule("0 0 9-17 ? * MON-FRI").
      withMisfireHandlingInstructionFireAndProceed() //or other
  ).
  build()

In this example the trigger should fire every hour between 9 AM and 5 PM, from Monday to Friday. But once again the first two invocations were missed (so the trigger misfired) and this situation was discovered at 10:15 AM. Note that the available misfire instructions are different compared to simple triggers.

Note: QTZ-283: MISFIRE_INSTRUCTION_IGNORE_MISFIRE_POLICY not working with JDBCJobStore – apparently there is a bug when JDBCJobStore is used, so keep an eye on that issue.

As you can see, the various triggers behave differently based on the actual setup. Moreover, even though the so-called smart policy is provided, often the decision is based on business requirements.
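Setting real Quartz aside for a moment, the effect of the main instruction families on the running example (hourly trigger from 9 AM, scheduler recovering at 10:15 AM with 2 misfires) can be sketched in plain Java. The Policy enum and method names below are my own, not Quartz's; this only illustrates how many catch-up executions each family produces at recovery time.

```java
import java.time.LocalTime;
import java.util.List;

public class MisfirePolicySketch {

    enum Policy { IGNORE, FIRE_NOW_ONCE, SKIP_TO_NEXT }

    // How many times does the job run immediately at recovery time,
    // given the firings missed while the scheduler was unavailable?
    static int immediateRuns(Policy policy, List<LocalTime> missedFirings) {
        switch (policy) {
            case IGNORE:        return missedFirings.size();                // every missed firing runs, back to back
            case FIRE_NOW_ONCE: return missedFirings.isEmpty() ? 0 : 1;     // one catch-up run, then normal schedule
            case SKIP_TO_NEXT:  return 0;                                   // discard misfires, wait for 11:00
            default:            throw new AssertionError();
        }
    }

    public static void main(String[] args) {
        // Hourly trigger starting at 9 AM; recovery at 10:15,
        // so the 9:00 and 10:00 firings were missed.
        List<LocalTime> missed = List.of(LocalTime.of(9, 0), LocalTime.of(10, 0));
        System.out.println(immediateRuns(Policy.IGNORE, missed));        // 2
        System.out.println(immediateRuns(Policy.FIRE_NOW_ONCE, missed)); // 1
        System.out.println(immediateRuns(Policy.SKIP_TO_NEXT, missed));  // 0
    }
}
```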
Essentially there are three major strategies: ignore, run immediately and continue, and discard and wait for next. They all have different use-cases:

Use ignore policies when you want to make sure all scheduled executions are triggered, even if it means multiple misfired triggers will fire. Think about a job that generates a report every hour based on orders placed during the last hour. If the server was down for 8 hours, you still want those reports generated, as soon as you can. In this case the ignore policies will simply run all triggers scheduled during those 8 hours as fast as the scheduler can. They will be several hours late, but will eventually be executed.

Use now* policies for jobs that execute periodically and, in a misfire situation, should run as soon as possible, but only once. Think of a job that cleans the /tmp directory every minute. If the scheduler was busy for 20 minutes and can finally run this job, you don't want to run it 20 times! Once is enough, but make sure it runs as fast as it can. Then back to your normal one-minute intervals.

Finally, next* policies are good when you want to make sure your job runs at particular points in time. For example, you need to fetch stock prices at quarter past every hour. They change rapidly, so if your job misfired and it is already 20 minutes past the full hour, don't bother. You missed the correct time by 5 minutes and now you don't really care. It is better to have a gap than an inaccurate value. In this case Quartz will skip all misfired executions and simply wait for the next one.

Reference: Quartz scheduler misfire instructions explained from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog.

The Java EE 6 Example – Galleria Part 2

You probably followed my previous Java EE 6 Galleria example posts. The first one was the basic introduction; the second one was about running it on the latest GlassFish. One of the RedHat guys mentioned that we should look into bringing this example to servers other than GlassFish. Great ;) Thanks for the nice idea. That is exactly what we are going to do today. I am going to bring the Galleria example to the latest WebLogic 12c.

Preparation

Get yourself in the mood for some configuration. You already have the latest NetBeans 7.1 installed, and you are going to download the WebLogic 12c ZIP distribution in a second. After you have downloaded wls1211_dev.zip, put it in a location of your choice and unzip it. From now on we will call this folder the %MW_HOME% folder. Open a command line and set up the %JAVA_HOME%, %JAVA_VENDOR% and %MW_HOME% variables in it:

set JAVA_HOME=D:\jdk1.7.0_04
set MW_HOME=D:\temp\wls12zip
set JAVA_VENDOR=Sun

After you have done this, one final step is to run the installation configuration script configure.cmd in the MW_HOME directory. This is a one-time thing to do.

Setup your WebLogic Domain

The next thing we need is a WebLogic domain. Open a new command line prompt. Set up your environment in the current shell by running the %MW_HOME%\wlserver\server\bin\setWLSEnv.cmd script. Execute %MW_HOME%\wlserver\common\bin\config.cmd and follow the wizard to create a basic WebLogic Server Domain called test-domain in a folder of your choice (e.g. D:\temp\test-domain). Give a username and password of your choice (e.g. system/system1) and click through the wizard until you reach the "finish" button. WebLogic needs the Derby client jar file in order to configure and use the database. Copy the derbyclient jar from your m2 repository to the test-domain\lib folder. Now let's start the newly created domain manually by running startWebLogic.cmd in your newly created domain directory.
Verify that everything is up and running by navigating to http://localhost:7001/console and logging in with the credentials from above. Navigate to "Services > Data Sources" and select the "New" button above the table. Select "Generic Datasource", enter a name of your choice (e.g. GalleriaPool) and enter jdbc/galleriaDS as the JNDI-Name. Select Derby as the Database Type and click "Next". Select Derby's Driver (Type 4), click "Next" and "Next" again, enter the connection properties (Database: GALLERIATEST, Host: localhost, User and Password: APP) and click "Next". If you like, you can hit the "Test Configuration" button on top to make sure everything is set up the right way.

Next, the trickiest part. We need a JDBC realm like the one we configured for GlassFish. The first difference here is that we don't actually create a new realm but add an authentication mechanism to the existing one. There is a nasty limitation with WebLogic: you can configure as many security realms as you like, but only one can be active at a given time. This stopped me for a while until I got the tip from Michel Schildmeijer (thanks, btw!). Navigate to "Security Realms" and select "myrealm" from the table. Switch to the Providers tab. Select "New" above the table of Authentication Providers. Enter "GalleriaAuthenticator" as the name and select "SQLAuthenticator" from the dropdown box as the type. Click OK. Select the GalleriaAuthenticator, set the Control Flag to SUFFICIENT and save. After that, switch to the "Provider Specific" tab and enter the following:

Data Source Name: GalleriaPool
Password Style Retained: unchecked
Password Algorithm: SHA-512
Password Style: SALTEDHASHED
SQL Get Users Password: SELECT PASSWORD FROM USERS WHERE USERID = ?
SQL Set User Password: UPDATE USERS SET PASSWORD = ? WHERE USERID = ?
SQL User Exists: SELECT USERID FROM USERS WHERE USERID = ?
SQL List Users: SELECT USERID FROM USERS WHERE USERID LIKE ?
SQL Create User: INSERT INTO USERS VALUES ( ? , ? )
SQL Remove User: DELETE FROM USERS WHERE USERID = ?
SQL List Groups: SELECT GROUPID FROM GROUPS WHERE GROUPID LIKE ?
SQL Group Exists: SELECT GROUPID FROM GROUPS WHERE GROUPID = ?
SQL Create Group: INSERT INTO GROUPS VALUES ( ? )
SQL Remove Group: DELETE FROM GROUPS WHERE GROUPID = ?
SQL Is Member: SELECT USERID FROM USERS_GROUPS WHERE GROUPID = ? AND USERID = ?
SQL List Member Groups: SELECT GROUPID FROM USERS_GROUPS WHERE USERID = ?
SQL List Group Members: SELECT USERID FROM USERS_GROUPS WHERE GROUPID = ? AND USERID LIKE ?
SQL Remove Group Memberships: DELETE FROM USERS_GROUPS WHERE GROUPID = ? OR GROUPID = ?
SQL Add Member To Group: INSERT INTO USERS_GROUPS VALUES( ? , ? )
SQL Remove Member From Group: DELETE FROM USERS_GROUPS WHERE GROUPID = ? AND USERID = ?
SQL Remove Group Member: DELETE FROM USERS_GROUPS WHERE GROUPID = ?
Descriptions Supported: unchecked

Save your changes and go back to the "Providers" tab. Click the "Reorder" button and move the GalleriaAuthenticator to the top of the list. Click "OK" when done and stop your WebLogic instance. You are free to restart it at any time.

Configure your Projects

Java EE is portable. Right. And you should be able to run the same deployment without any changes on WebLogic 12c. That is the theory. In practice you will have to touch the deployment, because WebLogic has some issues with Hibernate, and it is a lot more crotchety when it comes to deployments than GlassFish is. First of all you have to create a "galleria-ear\src\main\application\META-INF" folder.
Put a blank weblogic-application.xml there and put the following code in it:

<?xml version='1.0' encoding='UTF-8'?>
<weblogic-application xmlns="http://xmlns.oracle.com/weblogic/weblogic-application"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://xmlns.oracle.com/weblogic/weblogic-application http://xmlns.oracle.com/weblogic/weblogic-application/1.4/weblogic-application.xsd">
  <prefer-application-packages>
    <package-name>antlr.*</package-name>
  </prefer-application-packages>
</weblogic-application>

That tells WebLogic to prefer the libraries packaged with the application over those already present in the server. Let's go ahead. We need to add the Hibernate dependencies to the ear. With GlassFish we skipped that step because we installed the Hibernate package in the server. Here we go. Open the galleria-ear pom.xml and add the following to the dependencies section:

<dependency>
  <groupId>org.hibernate</groupId>
  <artifactId>hibernate-entitymanager</artifactId>
  <version>4.0.1.Final</version>
</dependency>
<dependency>
  <groupId>org.hibernate</groupId>
  <artifactId>hibernate-core</artifactId>
  <version>4.0.1.Final</version>
</dependency>
<dependency>
  <groupId>org.hibernate</groupId>
  <artifactId>hibernate-validator</artifactId>
  <version>4.2.0.Final</version>
</dependency>
<dependency>
  <groupId>org.jboss.logging</groupId>
  <artifactId>jboss-logging</artifactId>
  <version>3.1.0.CR2</version>
</dependency>

You also need to look at the maven-ear-plugin and add the following to its <configuration>:

<defaultLibBundleDir>lib</defaultLibBundleDir>

And while you are there, remove the commons-codec jarModule. It doesn't hurt, but it gets packaged into the ear/lib folder anyway, so you can skip it. Next, navigate to the galleria-jsf project and open the web.xml.
The <login-config> is incomplete and should look like this:

<login-config>
  <auth-method>FORM</auth-method>
  <form-login-config>
    <form-login-page>/Login.xhtml</form-login-page>
    <form-error-page>/Login.xhtml</form-error-page>
  </form-login-config>
</login-config>
<security-role>
  <description>All registered Users belong to this Group</description>
  <role-name>RegisteredUsers</role-name>
</security-role>

You need to define the possible roles, too, otherwise the WebLogic security machinery will start to complain. Add a blank weblogic.xml to the galleria-jsf\src\main\webapp\WEB-INF folder and add the following lines to it:

<?xml version="1.0" encoding="UTF-8"?>
<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://xmlns.oracle.com/weblogic/weblogic-web-app http://xmlns.oracle.com/weblogic/weblogic-web-app/1.4/weblogic-web-app.xsd">
  <security-role-assignment>
    <role-name>RegisteredUsers</role-name>
    <principal-name>RegisteredUsers</principal-name>
  </security-role-assignment>
  <session-descriptor>
    <timeout-secs>3600</timeout-secs>
    <invalidation-interval-secs>60</invalidation-interval-secs>
    <cookie-name>GalleriaCookie</cookie-name>
    <cookie-max-age-secs>-1</cookie-max-age-secs>
    <url-rewriting-enabled>false</url-rewriting-enabled>
  </session-descriptor>
</weblogic-web-app>

We are mapping the web.xml role to a WebLogic role here. You could have skipped this, but I like it this way so you don't get confused. The session-descriptor element takes care of the JSESSIONID cookie name. If you didn't change it, you would get into trouble with users signed in to the admin console. Move on to the galleria-ejb project. Create a blank weblogic-ejb-jar.xml in the "galleria-ejb\src\main\resources\META-INF" folder.
Put the following code in it:

<?xml version="1.0" encoding="UTF-8"?>
<weblogic-ejb-jar xmlns="http://xmlns.oracle.com/weblogic/weblogic-ejb-jar"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://xmlns.oracle.com/weblogic/weblogic-ejb-jar http://xmlns.oracle.com/weblogic/weblogic-ejb-jar/1.0/weblogic-ejb-jar.xsd">
  <security-role-assignment>
    <role-name>RegisteredUsers</role-name>
    <principal-name>RegisteredUsers</principal-name>
  </security-role-assignment>
</weblogic-ejb-jar>

Comparable to web.xml/weblogic.xml, this tells WebLogic how to map the ejb-jar.xml security roles to WebLogic roles. Fine. Open the persistence.xml and add the following lines:

<property name="hibernate.dialect" value="org.hibernate.dialect.DerbyDialect" />
<property name="hibernate.transaction.jta.platform" value="org.hibernate.service.jta.platform.internal.WeblogicJtaPlatform" />

The first one explicitly selects the Derby dialect for Hibernate. The second one tells Hibernate where and how to look for transactions. All done. Now you should be able to build the project again and deploy it. Use the admin console or NetBeans to deploy it. Thanks for taking the time to follow this lengthy post. I hope it was helpful!

Want to know what it takes to get the unit and integration tests up and running? Read on! Time to move on with Vineet's Java EE 6 Galleria example. After an introduction, a basic getting-started guide on GlassFish and the WebLogic 12c deployment, it is finally time to dive into testing. I skipped this part in the earlier posts for two reasons. The first one obviously was that I wanted to write a separate post about it. The second one was that there was work to do to get this up and running on the latest GlassFish 3.1.2. Vineet and team did a great job releasing the Arquillian GlassFish Remote Container CR3 a few days back, which now also covers GlassFish 3.1.2. Time to get you started with testing the Galleria example.
What you are going to test

The Galleria was built to achieve comprehensive test coverage through the use of both unit and integration tests, written in JUnit 4. The unit and integration tests for EJBs and the domain model rely on the EJB 3.1 embeddable container API. The integration tests for the presentation layer rely on the Arquillian project and its Drone extension (for the execution of Selenium tests). Please make sure to update to the latest sources from the Galleria project, because Vineet updated the Arquillian GlassFish container to CR3 and the Selenium stuff to support the latest Firefox browser.

Unit-Testing the Domain Layer

The tests for the domain layer fall into five separate categories. The first three (Bean Verification tests, Mutual Registration verification tests, and tests for the constructor, equals and hashCode methods) cover the basic needs of nearly any JPA-based application. If you take a walk down the galleria-ejb\src\test sources you can see them covered in the info.galleria.domain package. Every domain object is covered by a suite class which includes all three kinds of unit tests. Let's start by looking at the Bean Verification tests. The behavior associated with the getters and setters of JavaBean properties is fairly easy to verify: all one needs to do is invoke the setter first, then the getter, and verify whether the instance returned by the getter is equal to the instance passed to the setter. The project uses the BeanVerifier class (under the src/test/java root of the galleria-ejb module) for this. The AlbumBeanVerifier test simply is a parameterized test which tests every single property. The only exception in this case is the coverPhoto property, which has special behavior beyond the simple JavaBean pattern. Next on the list are the tests for the constructor, equals and hashCode methods.
Constructors for the domain entities are tested by merely creating new instances of the classes, while asserting the validity of the properties that would be set by the constructor. The equals and hashCode methods are verified via the EqualsVerifier class from the EqualsVerifier project. The last category of the more basic tests are the mutual registration (PDF) verification tests. You simply want to check whether modifications to the relationships between the entities actually result in changes to both the parent and the child properties. See the comprehensive wiki page for more details about the implementation. All of these are executed during the unit-testing phase and are covered by **/*Suite.java classes. In addition to the basic tests you also find one-off tests written for specific cases where the basic test models are insufficient or unsuitable. In such instances, the one-off tests verify the behavior through hand-written assertions. You find them in **/*Tests.java classes in the same package. The domain layer tests finish with the JPA repository tests. Each of the *RepositoryTest.java classes tests the CRUD behavior of the handled domain objects. The common AbstractRepositoryTest handles the test data and resets the database after every test run. The database creation itself is handled by the maven-dbdeploy-plugin. Looking at the pom.xml you can see that it is bound to two Maven lifecycle phases (process-test-resources and pre-integration-test), which makes sure it gets called twice: the first time before the unit tests and the second time before the integration testing (compare below). All these unit tests are executed with Surefire. If you issue mvn test on the galleria-ejb project you can see a total of 138 tests passing. These are the four **/*Suite tests, the four **/*RepositoryTests and two additional tests. If you briefly look at the console output you see that this is all happening in a Java SE environment. No container tests have been run so far.
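The Galleria's actual BeanVerifier is more elaborate, but the set-then-get check it relies on can be sketched with plain reflection. The class, method and property names below are my own illustration, not the project's code:

```java
import java.lang.reflect.Method;

public class SimpleBeanChecker {

    // Set-then-get check for one JavaBean property: invoke the setter with a
    // sample value, then verify the getter returns an equal instance.
    static boolean verifyProperty(Object bean, String property, Object sample) throws Exception {
        String suffix = Character.toUpperCase(property.charAt(0)) + property.substring(1);
        Method setter = bean.getClass().getMethod("set" + suffix, sample.getClass());
        Method getter = bean.getClass().getMethod("get" + suffix);
        setter.invoke(bean, sample);
        return sample.equals(getter.invoke(bean));
    }

    // A minimal bean to demonstrate with (hypothetical, not the Galleria entity).
    public static class Album {
        private String name;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(verifyProperty(new Album(), "name", "Holidays 2012")); // true
    }
}
```

A parameterized test like AlbumBeanVerifier would simply run such a check once per property, feeding in a suitable sample value for each.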
Integration Testing the Domain Layer

So far this really only covered the basics which everybody should do and probably knows. Integration testing is another story. It is done by the **/*IntegrationSuite tests. The name intentionally does not use the default naming conventions, in order to prevent them from running during Maven's unit-testing phase. To hook into the integration-test phases of Maven, the Galleria example makes use of the Maven Failsafe Plugin. You can find the integration test suite in the info.galleria.service.ejb package. The AbstractIntegrationTest takes care of handling the test data (comparable to what the AbstractRepositoryTest did for the unit tests). The suite contains a *ServiceIntegrationTest for every domain object. You can walk through the single tests in every *IntegrationTest and get a feeling for what is happening here. The ServicesIntegrationSuite takes care of starting and stopping the EJBContainer by using AbstractIntegrationTest.startup(). This method is comparably unspectacular and simple:

logger.info("Starting the embedded container.");
Map<String, Object> props = new HashMap<String, Object>();
props.put("org.glassfish.ejb.embedded.glassfish.installation.root",
        "./glassfish-integrationtest-install/glassfish");
container = EJBContainer.createEJBContainer(props);
context = container.getContext();
datasource = (DataSource) context.lookup("jdbc/galleriaDS");

The most important point here is that the embeddable EJBContainer is configured via an existing GlassFish domain, which can be found directly in the galleria-ejb\glassfish-integrationtest-install folder. If you look at glassfish\domains\domain1\config\domain.xml you can see that all the configuration is already done for you. But where does the deployment come from? This is easy to answer: by default the embeddable EJBContainer searches your classpath for ejb-jar jar files or folders and deploys them.
If you want to see a little more verbose output from the EJBContainer, you have to provide a customlogging.properties file in src/test/resources and add some simple lines to it:

handlers=java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.formatter=java.util.logging.SimpleFormatter
java.util.logging.ConsoleHandler.level=FINEST
#add some more packages if you need them
info.galleria.service.ejb.level=FINE

Then add the following to the maven-failsafe-plugin configuration section of your pom.xml:

<systemPropertyVariables>
  <java.util.logging.config.file>${basedir}/src/test/resources/customlogging.properties</java.util.logging.config.file>
</systemPropertyVariables>

The integration tests should finish successfully, and Maven should print something comparable to the following:

Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 61.726 sec

Notice that the integration tests take significantly longer than the normal unit tests. This is no surprise at all, and you should make sure to only execute them when needed so you are not slowed down in your development.

Integration Testing the Presentation Layer

The domain layer is covered with tests. What is missing is the presentation layer. You can find the presentation layer tests in the galleria-jsf project. Start by examining the pom.xml. You find a couple of new dependencies here, namely Arquillian, Selenium and Drone. First, some configuration again. Scrolling down to the profiles section you can see an integration-test profile which makes use of the maven-glassfish-plugin and controls a container which is configured with a couple of properties. Add the definitions to the properties section at the top of the pom:

<galleria.glassfish.testDomain.user>admin</galleria.glassfish.testDomain.user>
<galleria.glassfish.testDomain.passwordFile>D:/glassfish-3.1.2-b22/glassfish3/glassfish/domains/test-domain/config/local-password</galleria.glassfish.testDomain.passwordFile>
<galleria.glassfish.testDomain.glassfishDirectory>D:/glassfish-3.1.2-b22/glassfish3/glassfish/</galleria.glassfish.testDomain.glassfishDirectory>
<galleria.glassfish.testDomain.domainName>test-domain</galleria.glassfish.testDomain.domainName>
<galleria.glassfish.testDomain.adminPort>10048</galleria.glassfish.testDomain.adminPort>
<galleria.glassfish.testDomain.httpPort>10080</galleria.glassfish.testDomain.httpPort>
<galleria.glassfish.testDomain.httpsPort>10081</galleria.glassfish.testDomain.httpsPort>

You can copy this from the development profile of the galleria-ejb project as described in part 2. You should also already have the domain in place. Next, down at the maven-surefire-plugin, you can see that it follows the same conventions as the galleria-ejb project. But looking at the test classes you can see that there isn't a single unit test here, so you can move directly on to the maven-failsafe-plugin, which handles the integration tests. There is one single AllPagesIntegrationTest which covers the complete testing. Let's go there. It is an Arquillian test case which is executed as a client against a remote instance. Besides the definition of the deployment (@Deployment) you again see a couple of setUp and tearDown methods which do some initialization and cleanup. One thing has to be handled separately: you also see a writeCoverageData() method in there which obviously connects to some kind of socket and reads data from it. This is the JaCoCo (Java Code Coverage Library) hook which can produce a coverage data set of the tests. To make this work you will have to download the JaCoCo package and extract it to a place of your choice. Next, go to your GlassFish test-domain\config\domain.xml and open it.
Find the server-config – java-config element and add the following there:

<jvm-options>-javaagent:D:\path\to\jacoco-\lib\jacocoagent.jar=output=tcpserver</jvm-options>

This enables the coverage agent for GlassFish. That is all the configuration. Back in NetBeans, select the integration-test profile (in NetBeans you can do this by selecting the entry from the dropdown box next to the little hammer in the icon area, or by using -Pintegration-test as a Maven command line switch). The console output tells you that the GlassFish domain is started and that info.galleria.view.AllPagesIntegrationTest is running. Be warned: a Firefox instance pops up during this run, and the Arquillian Drone extension drives your Selenium tests. Seeing a browser remote-controlled in this way looks and feels weird if you haven't seen it before. If everything works you should now see this in the console output:

Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 138.014 sec

If you are using a locale other than en for your browser, you may see completely failing tests. In that case you need to configure Drone to support your setup. Drone allows you to specify a Firefox profile via arquillian.xml. You can create a Firefox profile that is configured to send accept-language headers with en/en-US as the value. To create a new profile, start the Firefox profile manager with the following command (you might need to close all running Firefox instances first): firefox -profilemanager (on Windows you need to execute "d:\complete\path\to\firefox.exe" -profilemanager in a cmd shell). Remember the location on disk where this profile is created – you'll need it later. To configure the created profile, in Firefox go to Options (menu) -> Content (tab) -> Languages (fieldset) -> add the English language (and move it to the top, as the preferred one).
Now navigate to galleria-jsf\src\test\resources\arquillian.xml, where you can add the property:

<extension qualifier="webdriver">
  <property name="implementationClass">org.openqa.selenium.firefox.FirefoxDriver</property>
  <property name="firefoxProfile">location of the Firefox profile for English</property>
</extension>

All done now. You should be able to run the complete clean and build process without any test failing. A big "green bar" :)

Reference: The Java EE 6 Example – Running Galleria on WebLogic 12 – Part 3 and The Java EE 6 Example – Testing Galleria – Part 4 from our JCG partner Markus Eisele at the Markus Eisele blog.

Serious about your software career? Leave your job

I recently resigned my position as senior software engineer and technical lead for a middleware services group at Wells Fargo. The job was great: work from home, great immediate manager, respected among the team members, trusted to explore new technologies when justified, a boss who stood up for us and got us the tools, training, and working environments we needed, etc, etc. Something still prompted me to move, and it's not the first time I've done so. I've opted to resign jobs with great setups in the past, either as a full-time employee or as a consultant, and in this blog I try to articulate why. I believe that to be successful and well-rounded in the technology/software space, you have to change jobs every few years or so. Ultimately, as a software engineer, your job is to solve problems using technology. In most cases, a problem can be solved in many different ways, but not all solutions are created equal. The more problems and solutions you've seen and experienced, the more apt you are to solve the next problem with a "better" or more "elegant" solution. In my opinion, you have to experience how problems are solved in different groups and different companies, using different methods, different approaches, etc., to really become proficient at problem solving and at weighing the benefits and tradeoffs that come with a solution. Otherwise, the traditions and customs of a single company keep you from thinking "outside the box" or from evaluating how similar problems have been solved in the past by similar companies. Another part of the equation is your ability to learn and your exposure to new technologies. Big companies often have the "this is the way we've always done it and we're not going to change" mentality, which is really a career killer for a software engineer.
If your career goals involve trying to climb the corporate ladder, then by all means embrace the corporate mindset, but if you want to stay in the technology space and excel, you will have to seek out opportunities to expose yourself to new technologies and problems. I feel at this point in my career, I can't settle for all the comforts of a cushy corporate job. I am still young enough and interested enough in technology that I want to push myself. I want to get out and be exposed to new problems. I crave learning and the challenges of doing so. I honestly feel that if you're not learning, not solving new problems and not thinking outside of the box, you're going to end up like those technology folks complaining about not having a job because the technology they cling to is slowly going away or drying up. I don't want to end up complaining about something that I have control over right now. In the end, the technology industry is about problem solving, the ability to learn, and pushing yourself to not get comfortable. Maybe I'm cynical in this respect, but the longer you stay at a big company, the more locked in you get and the more dependent you become on that company (pension, retirement, tenure, job security, whatever). The longer you stay, the less motivated you are to learn the new technologies that aren't being used at your company. The longer you stay, you *think* you become critical to their operations, but before you know it the operations themselves are being phased out and your chances of being kept around become slimmer and slimmer. I believe times have changed, and trying to stay at a corporate job in one company for 30 years is a career killer for a software engineer. I want my resume to be my job security, not the number of years I've had the corporate mentality beaten into me. Who knows, though. My wife and I are expecting our first child in the next few weeks, and I know my priorities will shift big time.
My focus will be on her and my family. Maybe I'll do a 180-degree change of opinion about staying at a big company. But while I'm still motivated, I have to explore other options and opportunities that I know will address all three of those items mentioned above: exposure to problem solving, learning, and staying hungry. So I continue my journey in the software craft by taking on the role of Principal Consultant at an open-source subscription company, FuseSource, which is the support company behind Apache Camel, ActiveMQ, ServiceMix, CXF, and a few others. I will be helping different companies use these open-source projects, facilitating proper design of their architectures, delivering training, and I'm sure much more. It seems to be a good balance of exposure to new problems, learning opportunities, and working with some of the smartest people in the open-source space, which will drive me to stay hungry. Wish me luck! Reference: Serious about your software career? Leave your job from our JCG partner Christian Posta at the Christian Posta Software blog....

Drools Guvnor – Manage access

Externalizing business or technical rules is very important for scalable applications, but access to the BRMS service should be managed. Guvnor provides control over UI access and operations using role-based authorizations. There are several permission types, as listed in the drools-guvnor reference manual: Admin, with all permissions. Analyst or Analyst read-only: analyst permissions for a specific category. Package admin, Package developer or Package read-only: package permissions for a specific package. Enable user authentication control by updating the file components.xml located in the server's deployed folder:

...
<component name="org.jboss.seam.security.roleBasedPermissionResolver">
    <property name="enableRoleBasedAuthorization">true</property> <!-- changed from false to true -->
</component>
...

Access control configuration for Guvnor embedded in a JBoss server: stop the guvnor server if it was started in guest mode and enable role-based authorization. Add the drools-guvnor access policy in the file login-config.xml located in server/default/conf:

<application-policy name="drools-guvnor">
    <authentication>
        <login-module code="org.jboss.security.auth.spi.UsersRolesLoginModule" flag="required">
            <module-option name="usersProperties">props/drools-guvnor-users.properties</module-option>
            <module-option name="rolesProperties">props/drools-guvnor-roles.properties</module-option>
        </login-module>
    </authentication>
</application-policy>

Create properties files for users and roles with the respective contents:

# A roles.properties file for UsersRolesLoginModule (drools-guvnor-roles.properties)
superuser=admin
packuser=package.admin
rulesviewer=package.readonly

# A users.properties file for UsersRolesLoginModule (drools-guvnor-users.properties)
rulesviewer=drools
packuser=proto
superuser=admin

Restart the JBoss guvnor server and log into the web interface using the created accounts.
Using the lightweight container Tomcat and a MySQL server – configuring the drools-guvnor JAAS authentication module. Prerequisites: Drools Guvnor 5.3 deployed in Apache Tomcat 6, running with MySQL 5 and JDK 1.6. 0 – Deploy the guvnor application with context name drools-guvnor. All users are guests; go to the administration panel and set authorizations for user admin, or create another user with authorizations. Stop the server; we are now going to enable JAAS database authentication. 1 – Create the authdb schema with a guvnorusers table in the MySQL database:

CREATE TABLE guvnorusers (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `username` varchar(255) DEFAULT NULL,
  `password` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`id`)
);

INSERT INTO guvnorusers VALUES (1, 'admin', 'admin');

2 – Build a custom login module. Download my custom login module sources (customloginmodule_sources), then compile and export the sources as a java archive (jar). 3 – In %TOMCAT_HOME%/lib, copy the exported login module jar file and the MySQL connector jar. 4 – In %TOMCAT_HOME%/conf/context.xml, add a resource declaration:

<Resource name="jdbc/URDroolsDS" auth="Container" type="javax.sql.DataSource"
    driverClassName="com.mysql.jdbc.Driver"
    url="jdbc:mysql://yourserveradress:3306/authdb"
    username="dbuser" password="dbuserpassword"
    maxActive="20" maxIdle="10" maxWait="-1" />

5 – Update %TOMCAT_HOME%/webapps/drools-guvnor/WEB-INF/components.xml to configure our repository to use the external database and security settings:

<security:identity authenticate-method="#{authenticator.authenticate}" jaas-config-name="drools-guvnor"/>
<security:role-based-permission-resolver enable-role-based-authorization="true"/>

6 – Update %TOMCAT_HOME%/conf/server.xml to add a Realm declaration:

<Realm className="org.apache.catalina.realm.LockOutRealm">
...
<Realm appName="drools-guvnor"
    className="com.test.droolsproto.loginModule.Realm.DroolsJaasRealm"
    dataSourceName="jdbc/URDroolsDS" localDataSource="true"/>
...
</Realm>

7 – Create a file jaasConfig in %TOMCAT_HOME%/conf with this content:

drools-guvnor {
    com.test.droolsproto.loginModule.module.DroolsLoginModule required debug=true;
};

8 – Before running Tomcat, create in %TOMCAT_HOME%/bin a setenv.sh file if you are running on Linux, or setenv.bat on Windows, with this content (working on Linux):

JAVA_OPTS="-Xms128m -Xmx256m -Djava.security.auth.login.config=$CATALINA_HOME/conf/jaasConfig"
export JAVA_OPTS

Now it's time to restart your guvnor server and check authentication! Reference: Drools-guvnor manage access – part 1, Drools-guvnor manage access – part 2 from our JCG partner Gael-Jurin Nkouyee at the NGJWEBLOG blog....
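For orientation, the custom login module referenced in step 2 might look roughly like the sketch below. This is not the actual downloaded source: the class name matches the jaasConfig above, but it is shown without a package declaration, the JDBC option names (jdbcUrl, dbUser, dbPassword) are my own invention, and for brevity it uses DriverManager where a real Tomcat module would typically obtain the configured DataSource from the Realm or JNDI.

```java
import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.Map;

import javax.security.auth.Subject;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.callback.UnsupportedCallbackException;
import javax.security.auth.login.LoginException;
import javax.security.auth.spi.LoginModule;

// Hypothetical sketch: validates credentials against the guvnorusers table
// created in step 1. Option names are illustrative, not part of any standard.
public class DroolsLoginModule implements LoginModule {

    private CallbackHandler handler;
    private Map<String, ?> options;
    private boolean succeeded;

    @Override
    public void initialize(Subject subject, CallbackHandler callbackHandler,
                           Map<String, ?> sharedState, Map<String, ?> options) {
        this.handler = callbackHandler;
        this.options = options;
    }

    @Override
    public boolean login() throws LoginException {
        NameCallback nameCb = new NameCallback("username");
        PasswordCallback passCb = new PasswordCallback("password", false);
        try {
            handler.handle(new Callback[] { nameCb, passCb });
        } catch (IOException | UnsupportedCallbackException e) {
            throw new LoginException("Cannot read credentials: " + e.getMessage());
        }
        char[] pwd = passCb.getPassword();
        succeeded = pwd != null && isValid(nameCb.getName(), new String(pwd));
        if (!succeeded) {
            throw new LoginException("Invalid username or password");
        }
        return true;
    }

    private boolean isValid(String user, String pass) throws LoginException {
        String sql = "SELECT 1 FROM guvnorusers WHERE username = ? AND password = ?";
        try (Connection c = DriverManager.getConnection(
                 (String) options.get("jdbcUrl"),
                 (String) options.get("dbUser"),
                 (String) options.get("dbPassword"));
             PreparedStatement ps = c.prepareStatement(sql)) {
            ps.setString(1, user);
            ps.setString(2, pass);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next(); // a matching row means valid credentials
            }
        } catch (Exception e) {
            throw new LoginException("Database error: " + e.getMessage());
        }
    }

    @Override
    public boolean commit() { return succeeded; }

    @Override
    public boolean abort() {
        boolean wasSucceeded = succeeded;
        succeeded = false;
        return wasSucceeded;
    }

    @Override
    public boolean logout() {
        succeeded = false;
        return true;
    }
}
```

A production module would also store the authenticated principal in the Subject during commit(); that bookkeeping is omitted here.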

Taking over the Codebase, Solving the Spaghetti Crisis

We’ve all been there. Somebody asks if you can take a look at their website that has been stagnant for a while. Something small needs to be changed. You feel up for a challenge, so you dive in. What you find is a mess. It’s really nobody’s fault. Things evolved over time; different developers and designers have done their thing at various times. Nobody meant any ill will, everybody just did their best. But here you are. Those problems come in many forms, but the one that pops up the most is the shared server that’s running a website for that shop next door. Over the years the owner became more and more reliant on the site. Maybe it contains his master inventory, maybe his contact database. It started out as a nice novelty that nobody depended on, but now it’s quickly becoming mission critical. So there you are: you’ve just opened the public HTML folder in your FTP client and you’ve got phpMyAdmin open in the browser. This is my current action plan; feel free to add and suggest tools.

Backup the entire site

For the love of god, make a backup now! Chances are good that you are looking at the only copy of the site in existence. If anything happens to the server, or if you mess up something small, you’re going to be very, very sorry. Both cPanel and Plesk, the most popular domain control panels, offer backup solutions out of the box. They are not perfect, but they allow you to create a full dump of the entire site. If you can schedule a daily backup, that’s a plus. If you can send the backup somewhere off site, another plus. If you have shell access to the server, there is a whole slew of other tools available that may or may not be easier to use than the above. Whatever you do, also check the backup. Does it contain a database dump? Does it contain all files? If you’re going to be messing with DNS and e-mail, you may want to check if that’s backed up too. You’re now at a point where you can start developing/debugging with some confidence.
It’s not perfect, but at least you have something to fall back on. I’d take it at least one step further.

Get the files under version control

If you’re going to be making multiple changes for multiple different tasks, you’re going to want to have all the code under version control. The easiest way: just put the entire public HTML folder under version control. You may be versioning too many files that way, but at least you’re not missing anything. One typical issue that pops up is the fact that not everyone is going to be using version control. For instance, even the simplest WordPress blog can cause issues, because it is possible to edit some of the files from within the administration console. If you have shell access, you could install and use version control on the server itself. But that doesn’t work for shared hosting. I haven’t found the perfect, automated solution, but there are a few tools out there that allow you to view the difference between an FTP directory and a local one. Beyond Compare 3 is a pretty good one, once you get past its archaic interface. You’re now in a pretty good place. Major disasters will be covered by the backup, and smaller issues can be resolved by rolling back the change that caused them. There’s still one wildcard: the database. Especially if your work involves structural changes to the database, you may want to look into …

Version control for the database

Few people do database version control, and when it happens, it doesn’t always work quite right. But if you want to feel safe doing that normalization operation on a few tables, there’s no way around it. You have to get the database into your version control system. Start here, and if you want to take it further, there have been many tools written since that post that will make your life easier.

(Unit) Test the code

Depending on the language of the application, you may just automatically be writing tests, even before you considered creating a backup.
If it’s a PHP site, however, chances are nobody has thought of this before. You may think testing isn’t important for your particular application, but do me a favor: get at least one test in there, so that you’ve got the structure set up. If you ever add new code, you will be much more likely to add more tests. Start the test suite with a single test and let it grow from there.

Integration/GUI testing & Continuous integration

If you get through the previous steps, you’re doing better than most. Automated GUI testing, a continuous integration server, or even a continuous deployment environment: it’s all icing on the cake. But if you’ve got the time and budget to set this up, you’re going to be a very happy developer down the road.

Conclusion

Many sites out there are still alive only by the mere fact that nothing bad has ever happened until now. If you’re going to be updating such a site, chances are good that you will be held responsible if anything goes wrong, even if it is completely beyond your control. The above steps will make sure that you are prepared and that you can start refactoring the code without worrying. This is an evolving article; I will be updating it as I go along. Tips are more than welcome. Reference: Taking over the Codebase, Solving the Spaghetti Crisis from our JCG partner Peter Backx at the Streamhead blog....

Using slf4j with logback tutorial

In this post I am going to show you how to configure your application to use slf4j and logback as the logging solution. The Simple Logging Facade for Java (slf4j) is a simple facade for various logging frameworks, like JDK logging (java.util.logging), log4j, or logback. It even contains a binding that will delegate all logger operations to another well-known logging facade called Jakarta Commons Logging (JCL). Logback is the successor of the log4j logger API; in fact both projects have the same father, but logback offers some advantages over log4j, like better performance and less memory consumption, automatic reloading of configuration files, or filter capabilities, to cite a few features. The native implementation of slf4j is logback, thus using both as your logging framework implies zero memory and computational overhead. First we are going to add slf4j and logback into the pom as dependencies:

<properties>
    <slf4j.version>1.6.4</slf4j.version>
    <logback.version>1.0.1</logback.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-api</artifactId>
        <version>${slf4j.version}</version>
    </dependency>
    <dependency>
        <groupId>ch.qos.logback</groupId>
        <artifactId>logback-classic</artifactId>
        <version>${logback.version}</version>
    </dependency>
    <dependency>
        <groupId>ch.qos.logback</groupId>
        <artifactId>logback-core</artifactId>
        <version>${logback.version}</version>
    </dependency>
</dependencies>

Note that three artifacts are required, one for slf4j and two for logback. The last two dependencies will change depending on your logging framework; if, for example, you want to keep using log4j, instead of the logback dependencies we would have the log4j dependency itself and slf4j-log4j12. The next step is creating the configuration file. Logback supports two configuration file formats: the traditional way, using XML, or a Groovy DSL style. Let's start with the traditional way, and create a file called logback.xml in the classpath.
The file name is mandatory, but logback-test.xml is also valid. If both files are found in the classpath, the one ending with -test will be used.

<?xml version="1.0" encoding="UTF-8"?>
<configuration>

    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <!-- encoders are assigned the type ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{5} - %msg%n</pattern>
        </encoder>
    </appender>

    <logger name="com.lordofthejars.foo" level="INFO" additivity="false">
        <appender-ref ref="STDOUT" />
    </logger>

    <!-- Strictly speaking, the level attribute is not necessary since -->
    <!-- the level of the root level is set to DEBUG by default. -->
    <root level="DEBUG">
        <appender-ref ref="STDOUT" />
    </root>
</configuration>

In general the file is quite intuitive: we are defining the appender (the output of log messages), in this case the console, a pattern, and finally the root logger level (DEBUG) and a different level (INFO) for classes in the foo package. Obviously this format is much more readable than a typical log4j.properties. Note the additivity attribute: the appender named STDOUT is attached to two loggers, root and com.lordofthejars.foo. Because the root logger is the ancestor of all loggers, a logging request made by the com.lordofthejars.foo logger would be output twice. To avoid this behavior, set the additivity attribute to false, and the message will be printed only once. Now let's create two classes which will use slf4j. The first class, called BarComponent, is created in com.lordofthejars.bar:

public class BarComponent {

    private static final Logger logger = LoggerFactory.getLogger(BarComponent.class);

    public void bar() {
        String name = "lordofthejars";
        logger.info("Hello from Bar.");
        logger.debug("In bar my name is {}.", name);
    }
}

Note two big differences from log4j. The first one is that the typical if construction above each log call is no longer required. The other one is the pair of ‘{}’.
Only after evaluating whether to log or not will logback format the message, replacing ‘{}’ with the given string value. The other class, called FooComponent, is created in com.lordofthejars.foo:

public class FooComponent {

    private static final Logger logger = LoggerFactory.getLogger(FooComponent.class);

    public void foo() {
        String name = "Alex";
        logger.info("Hello from Foo.");
        logger.debug("In foo my name is {}.", name);
    }
}

And now, calling the foo and bar methods with the previous configuration, the output produced will be:

13:49:59.586 [main] INFO c.l.b.BarComponent - Hello from Bar.
13:49:59.617 [main] DEBUG c.l.b.BarComponent - In bar my name is lordofthejars.
13:49:59.618 [main] INFO c.l.f.FooComponent - Hello from Foo.

Notice that the debug line in the foo method is not shown. This is ok, because we have configured it that way. The next step is configuring logback again, but using the Groovy DSL approach instead of XML. Logback gives preference to the Groovy configuration over the XML configuration, so keep that in mind if you are mixing approaches. First add groovy as a dependency:

<dependency>
    <groupId>org.codehaus.groovy</groupId>
    <artifactId>groovy</artifactId>
    <version>${groovy.version}</version>
    <scope>runtime</scope>
</dependency>

And then we create the same configuration as before, but in Groovy format:

import ch.qos.logback.classic.encoder.PatternLayoutEncoder
import ch.qos.logback.core.ConsoleAppender

import static ch.qos.logback.classic.Level.DEBUG
import static ch.qos.logback.classic.Level.INFO

appender("STDOUT", ConsoleAppender) {
    encoder(PatternLayoutEncoder) {
        pattern = "%d{HH:mm:ss.SSS} [%thread] %-5level %logger{5} Groovy - %msg%n"
    }
}

logger("com.lordofthejars.foo", INFO)
root(DEBUG, ["STDOUT"])

You can identify the same parameters of the XML approach, but as Groovy functions.
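As mentioned at the beginning, keeping log4j as the backend instead of logback changes only the dependency section: the two logback artifacts are replaced by the slf4j-log4j12 binding plus log4j itself. A sketch (the log4j version property is illustrative):

```xml
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>${slf4j.version}</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>${slf4j.version}</version>
</dependency>
<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>${log4j.version}</version>
</dependency>
```

Application code is untouched by the swap, which is the whole point of programming against the slf4j facade.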
I hope you have found this post useful, and in your next project, if you can, use slf4j in conjunction with logback; your application will run faster than when logging with log4j. Download Code. Reference: Using slf4j with logback tutorial from our JCG partner Alex Soto at the One Jar To Rule Them All blog....

Using Delayed queues in practice

Often there are use cases when you have some kind of work or job queue and there is a need not to handle each work item or job immediately but with some delay. For example, a user clicks a button which triggers some work to be done, and one second later the user realizes he/she was mistaken and the job shouldn't start at all. Or there could be a use case when some work elements in a queue should be removed after some delay (expiration). There are a lot of implementations out there, but the one I would like to describe uses pure JDK concurrent framework classes: the DelayQueue class and the Delayed interface. Let me start with a simple (and empty) interface which defines the work item. I am skipping the implementation details like properties and methods as those are not important.

package com.example.delayed;

public interface WorkItem {
    // Some properties and methods here
}

The next class in our model will represent the postponed work item and implement the Delayed interface. There are just a few basic concepts to take into account: the delay itself and the actual time the respective work item was submitted. This is how expiration is calculated. So let's do that by introducing the PostponedWorkItem class.

package com.example.delayed;

import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class PostponedWorkItem implements Delayed {
    private final long origin;
    private final long delay;
    private final WorkItem workItem;

    public PostponedWorkItem( final WorkItem workItem, final long delay ) {
        this.origin = System.currentTimeMillis();
        this.workItem = workItem;
        this.delay = delay;
    }

    @Override
    public long getDelay( TimeUnit unit ) {
        return unit.convert( delay - ( System.currentTimeMillis() - origin ), TimeUnit.MILLISECONDS );
    }

    @Override
    public int compareTo( Delayed delayed ) {
        if( delayed == this ) {
            return 0;
        }

        if( delayed instanceof PostponedWorkItem ) {
            long diff = delay - ( ( PostponedWorkItem )delayed ).delay;
            return ( ( diff == 0 ) ? 0 : ( ( diff < 0 ) ? -1 : 1 ) );
        }

        long d = ( getDelay( TimeUnit.MILLISECONDS ) - delayed.getDelay( TimeUnit.MILLISECONDS ) );
        return ( ( d == 0 ) ? 0 : ( ( d < 0 ) ? -1 : 1 ) );
    }
}

As you can see, we create a new instance of the class and save the current system time in the internal origin property. The getDelay method calculates the actual time left before the work item expires. The delay is an external setting which comes as a constructor parameter. The implementation of Comparable<Delayed> is mandatory as Delayed extends this interface. Now, we are mostly done! To complete the example, let's make sure that the same work item won't be submitted twice to the work queue by implementing equals and hashCode (the implementation is pretty trivial and should not require any comments).

public class PostponedWorkItem implements Delayed {
    ...

    @Override
    public int hashCode() {
        final int prime = 31;
        int result = 1;
        result = prime * result + ( ( workItem == null ) ? 0 : workItem.hashCode() );
        return result;
    }

    @Override
    public boolean equals( Object obj ) {
        if( this == obj ) {
            return true;
        }
        if( obj == null ) {
            return false;
        }
        if( !( obj instanceof PostponedWorkItem ) ) {
            return false;
        }
        final PostponedWorkItem other = ( PostponedWorkItem )obj;
        if( workItem == null ) {
            if( other.workItem != null ) {
                return false;
            }
        } else if( !workItem.equals( other.workItem ) ) {
            return false;
        }
        return true;
    }
}

The last step is to introduce some kind of manager which will schedule work items and periodically poll out expired ones: meet the WorkItemScheduler class.

package com.example.delayed;

import java.util.ArrayList;
import java.util.Collection;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.DelayQueue;

public class WorkItemScheduler {
    private final long delay = 2000; // 2 seconds

    private final BlockingQueue< PostponedWorkItem > delayed = new DelayQueue< PostponedWorkItem >();

    public void addWorkItem( final WorkItem workItem ) {
        final PostponedWorkItem postponed = new PostponedWorkItem( workItem, delay );
        if( !delayed.contains( postponed ) ) {
            delayed.offer( postponed );
        }
    }

    public void process() {
        final Collection< PostponedWorkItem > expired = new ArrayList< PostponedWorkItem >();
        delayed.drainTo( expired );

        for( final PostponedWorkItem postponed : expired ) {
            // Do some real work here with postponed.getWorkItem()
        }
    }
}

Usage of BlockingQueue guarantees thread safety and a high level of concurrency. The process method should be run periodically in order to drain the work items queue. It could be annotated with the @Scheduled annotation from the Spring Framework or with EJB's @Schedule annotation from Java EE 6. Enjoy! Reference: Using Delayed queues in practice from our JCG partner Andriy Redko at the Andriy Redko {devmind} blog....
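To see the expiration semantics described above in action, here is a small self-contained demo; the class and member names are mine, not from the article, but the mechanics (getDelay, compareTo, offer/poll/take) are the same.

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

// Demo of DelayQueue semantics: elements become visible to poll()/take()
// only once their delay has expired.
public class DelayQueueDemo {

    static class Task implements Delayed {
        final String name;
        final long expiresAt; // absolute expiry time in milliseconds

        Task(String name, long delayMillis) {
            this.name = name;
            this.expiresAt = System.currentTimeMillis() + delayMillis;
        }

        @Override
        public long getDelay(TimeUnit unit) {
            return unit.convert(expiresAt - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
        }

        @Override
        public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.MILLISECONDS),
                                other.getDelay(TimeUnit.MILLISECONDS));
        }
    }

    public static void main(String[] args) throws InterruptedException {
        DelayQueue<Task> queue = new DelayQueue<>();
        queue.offer(new Task("later", 200));
        queue.offer(new Task("sooner", 50));

        // Nothing has expired yet, so the non-blocking poll() returns null.
        System.out.println(queue.poll()); // null

        // take() blocks until the earliest delay expires.
        System.out.println(queue.take().name); // sooner
        System.out.println(queue.take().name); // later
    }
}
```

Note that elements come out in expiry order, not insertion order, which is exactly what the misfire/expiration use cases above need.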

What additional features does Java EE 6 have to move from Spring?

I am a senior Java developer who has to work on the technologies chosen by the application architect. At most I can express my opinion on a particular technology; I can't make or influence the technology selection decision. So I don't have the choice of moving from Spring to JavaEE6, or from JavaEE6 to Spring, on my official projects. I strongly believe that as a Java developer I have to keep updated on (at least a few of) the latest technologies. So I (like many Java developers) generally follow Java community websites or blogs to have an idea of what's going on in the Java community. Specifically I follow updates from some Java Champions and well-known popular authors because they might have a better vision of what the next big thing in the Java space is. A few years back I saw so many people talking about Spring. Then I started learning Spring and I still just love it. I have been using JavaEE5 for a couple of years and I didn't find any feature which Spring is not providing. But recently I have been seeing so many articles on “Moving from Spring to JavaEE6″ every couple of days. So I thought of giving it a try: I installed NetBeans 7.1 and Glassfish 3.1 and did a simple POC. It's wonderful; I was able to write a simple app in just 10 minutes. Yes, JavaEE6 improved a lot over its predecessors. But again, I am not seeing anything new which I can't do with Spring. OK, let me share my thoughts on the criteria chosen by the “Moving from Spring to JavaEE6″ article authors. 1. So many jars in WEB-INF/lib: A Spring application has its dependencies in WEB-INF/lib and a JavaEE6 app will have them in the server lib. Even for a Spring app, we don't need to go and manually download all those jars; we can use Maven/Ivy, or even start with an archetype template with all dependencies configured. And it's only a one-time job. I am not sure whether there will be any performance improvement from having jars in the server lib instead of WEB-INF/lib. If that is the case, we can place the Spring app dependencies in the server lib. What am I missing here? 2.
Type-safe Dependency Injection: Since Spring 2.5 we have annotation-based DI support using @Autowired, and if you are still saying Spring is XML-based, please take a look at Spring 3.x. If you want to give a custom name to a Spring bean (in case of multiple implementations of the same interface), you can. How is it different from JavaEE6's CDI @Inject and @Named? 3. Convention Over Configuration: EJB3 methods are transactional by default, just slap on @Stateless. In Spring we can create a custom stereotype, say @TransactionalServe, like

@Service
@Transactional
public @interface TransactionalServe { }

and we can achieve Convention over Configuration. Did I miss anything here? 4. Spring depends on JavaEE: Of course Spring depends on JavaSE and JavaEE. Spring is just making development easier. You can always use JavaEE APIs like JSF, JPA, JavaMail etc. with Spring in an easier way. Did anybody say Spring came to make JavaEE completely vanish? No. 5. Standards based, app server support, license, blah blah blah: These are things that developers don't have much (or any) control over. From a developer perspective, we love whatever makes development easier. So I am not seeing any valid reason to migrate an existing Spring app to JavaEE6. Until now I haven't found one thing which CDI can do and Spring can't. For greenfield projects, just to have dependency injection we might not need Spring, as we already have CDI built into JavaEE6. Does JavaEE6 address any of the following? 1. Batch processing: Almost all big enterprises have some batch jobs to run. Does JavaEE6 have any support for implementing them? Do you suggest using Spring Batch, or starting from scratch in vanilla JavaEE6? 2. Social network integration: These days it has become a very common requirement for web apps to integrate with social network sites. Again, what do you have in JavaEE6 for this? 3. Environment profiles: In Spring I can have my mock services enabled in a testing profile and my real services in a production profile.
I am aware of @Alternative, but can we configure more than 2 alternatives without using String-based injection? 4. Web application security: What is Spring Security's counterpart in JavaEE6? 5. What about integration with NoSQL, Flex, mobile development etc.? JavaEE6 got CDI now, so suddenly Spring becomes legacy!!!! Conclusion: Yeah, JavaEE6 has cool stuff now (lately??) but it is not going to replace Spring anyway. Long live Spring. Reference: What additional features does JavaEE6 have to move from Spring? from our JCG partner Siva Reddy at the My Experiments on Technology blog....

End of ERP as we know it?

A friend of mine on Facebook drew my attention to this blog post, ‘End of ERP’ by Tien Tzuo on Forbes.com. With the professional lives of millions tied to ERP in some way, I can imagine the buzz this post must be creating. SAP being the biggest ERP software maker in the world and the parent company of my employer, I read this with interest. So as not to be influenced by others’ arguments, I haven’t read any responses to this post yet. If you haven’t already, you can read the original post by Tien Tzuo here. To get your opinion on this matter, I have created a short survey of only 5 questions that you can access by clicking here. I will publish the results of the survey soon. A link to the survey also appears at the bottom of this post for your convenience. In my opinion, this notable post (reputation derived from the fact that it appeared on Forbes) is heavily biased, as many posts often are. Could Tien’s earlier job at Salesforce.com as a marketing officer be the reason? Predicting the end of something epic, or of a most trusted technology, is sure to generate a lot of buzz, which is what bloggers often set out to do. The post would have been a lot better and more valuable had he compared ERP’s strengths and weaknesses and explained why the weaknesses are so glaring that ERP customers would be willing to walk away from ERP, something so crucial to their existence. There is no success for a case that lacks even a semblance of honest acknowledgment of the other side of the argument. In support of his argument, Tien mentions some key changes in consumer behavior and consumption patterns. The change in the ways customers engage with a company is, supposedly, driving ERP to its inevitable death. This is the main theme in ‘End of ERP’. Services-based consumption is rapidly increasing, but it can be applied only to so many things. By focusing on this alone, isn’t Tien forgetting the business processes around other product segments?
Take food, energy, health and vehicles: there are simply too many things we cannot subscribe to and consume remotely. All standard functions of an ERP are still required for those sectors, aren’t they? A customer may stop buying cars and instead rent from Zipcar, but cars will still have to be made, sold and bought. How would companies manage their businesses and have consolidated views of them without ERP? (ERP modules – Credit: http://www.abouterp.com/) Tien also mentions companies like Salesforce.com and touts their successes as proof that companies are moving away from ERP. Salesforce doesn’t offer anything other than CRM, does it? Does it provide the finance, HR or materials management modules of ERP? I guess not. You can’t just run a big company effectively by mish-mashing different services from ten different vendors. That’s why ERP exists and will keep its market share in the enterprise segment. I do agree, however, that cloudification (I know, I know, it’s not a word in the English dictionary) of business functions is an irreversible trend. Oracle’s and SAP’s acquisitions of Taleo and SuccessFactors, respectively, are an indication of their grudging acceptance of this fact. The key to their success is not the demand for ERP in the cloud, which is ever present, but their ability to integrate the acquired companies and their products to provide the same kind of comprehensive tool set as ERP. ‘End of ERP’ concludes by highlighting some key business requirements that, according to Tien, are not met by ERP today. Without going into details, it suffices to say that ERP is not meant to be a silver bullet for all business problems. It does what it does, while ERP providers and their ecosystem try to find solutions to the unresolved business problems. Doesn’t business intelligence (BI) software aim to solve the kind of issues he mentions? The case in point is, there are a number of ways to mine the information that you need.
The importance of BI is undeniable, and that’s why vendors are investing millions in it. The enormous response to SAP’s in-memory analytics appliance HANA is just one example of how innovative products will meet the business requirements of today. While the business problems mentioned in the post may be genuine, they simply highlight opportunities for ERP’s improvement and do not in any way spell doom for it. Make your voice heard! Take the Survey. Reference: End of ERP as we know it? from our JCG partner Mahesh Gadgil at the Simple yet Practical blog. ...

Lazy JSF Primefaces Datatable Pagination – Part 2

The page code is simple and straightforward. Check the “index.xhtml” code (note the lazy="true" attribute on the datatable, which is required for PrimeFaces to call the LazyDataModel):

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ui="http://java.sun.com/jsf/facelets"
      xmlns:h="http://java.sun.com/jsf/html"
      xmlns:f="http://java.sun.com/jsf/core"
      xmlns:p="http://primefaces.org/ui">
<h:head>
</h:head>
<h:body>
    <f:view>
        <h:form>
            <p:dataTable id="lazyDataTable" value="#{playerMB.allPlayers}" var="player"
                         lazy="true" paginator="true" rows="10"
                         selection="#{playerMB.player}" selectionMode="single"
                         paginatorTemplate="{CurrentPageReport} {FirstPageLink} {PreviousPageLink} {PageLinks} {NextPageLink} {LastPageLink} {RowsPerPageDropdown}"
                         rowsPerPageTemplate="5,10,15"
                         style="width: 80%;margin-left: 10%;margin-right: 10%;">
                <p:ajax event="rowSelect" update=":playerDialogForm" oncomplete="playerDetails.show()" />
                <p:column>
                    <f:facet name="header">Name</f:facet>
                    <h:outputText value="#{player.name}" />
                </p:column>
                <p:column>
                    <f:facet name="header">Age</f:facet>
                    <h:outputText value="#{player.age}" />
                </p:column>
            </p:dataTable>
        </h:form>
        <p:dialog widgetVar="playerDetails" header="Player" modal="true">
            <h:form id="playerDialogForm">
                <h:panelGrid columns="2">
                    <h:outputText value="Id: " />
                    <h:outputText value="#{playerMB.player.id}" />
                    <h:outputText value="Name: " />
                    <h:outputText value="#{playerMB.player.name}" />
                    <h:outputText value="Age: " />
                    <h:outputText value="#{playerMB.player.age}" />
                </h:panelGrid>
            </h:form>
        </p:dialog>
    </f:view>
</h:body>
</html>

We get a lazy datatable that will display the selected row in a dialog.
In our Managed Bean we have simpler code than in the page:

package com.mb;

import java.io.Serializable;

import javax.faces.bean.ManagedBean;
import javax.faces.bean.ViewScoped;

import org.primefaces.model.LazyDataModel;

import com.model.Player;

@ViewScoped
@ManagedBean
public class PlayerMB implements Serializable {

    private static final long serialVersionUID = 1L;

    private LazyDataModel<Player> players = null;
    private Player player;

    public LazyDataModel<Player> getAllPlayers() {
        if (players == null) {
            players = new PlayerLazyList();
        }
        return players;
    }

    public Player getPlayer() {
        if (player == null) {
            player = new Player();
        }
        return player;
    }

    public void setPlayer(Player player) {
        this.player = player;
    }
}

We have a get/set for the Player entity and a getter for an object of the LazyDataModel type. Check below the implementation of the PlayerLazyList class:

package com.mb;

import java.util.List;
import java.util.Map;

import org.primefaces.model.LazyDataModel;
import org.primefaces.model.SortOrder;

import com.connection.MyTransaction;
import com.dao.PlayerDAO;
import com.model.Player;

public class PlayerLazyList extends LazyDataModel<Player> {

    private static final long serialVersionUID = 1L;

    private List<Player> players;
    private MyTransaction transaction;
    private PlayerDAO playerDAO;

    @Override
    public List<Player> load(int startingAt, int maxPerPage, String sortField,
            SortOrder sortOrder, Map<String, String> filters) {
        try {
            try {
                transaction = MyTransaction.getNewTransaction();
                playerDAO = new PlayerDAO(transaction);

                transaction.begin();

                // query with the datatable pagination limits
                players = playerDAO.findPlayers(startingAt, maxPerPage);

                // if there is no player created yet, we will create 100!!
                if (players == null || players.isEmpty()) {
                    playerDAO.create100Players();

                    // run the query again to get the created players
                    players = playerDAO.findPlayers(startingAt, maxPerPage);
                }
            } finally {
                transaction.commit();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }

        // set the total number of players
        if (getRowCount() <= 0) {
            setRowCount(playerDAO.countPlayersTotal());
        }

        // set the page size
        setPageSize(maxPerPage);

        return players;
    }

    @Override
    public Object getRowKey(Player player) {
        return player.getId();
    }

    @Override
    public Player getRowData(String playerId) {
        Integer id = Integer.valueOf(playerId);

        for (Player player : players) {
            if (id.equals(player.getId())) {
                return player;
            }
        }

        return null;
    }
}

About the code above:

The load method: PrimeFaces invokes this method every time pagination is triggered, with all parameters holding valid values; with these parameters you can query the database for only the data that is needed. If you want to sort your query by a field, use the sortField parameter, which holds the datatable column value (it will be null if the user has not ordered); sortOrder indicates whether the user wants ascending or descending order.

The getRowKey method: returns an id for each row; PrimeFaces invokes it when needed.

The getRowData method: returns the Player selected in the datatable.

When you run this application for the first time it will persist 100 players in the database. In a real application this would not be necessary. One last configuration needs to be added to the “web.xml” file:

<persistence-context-ref>
    <persistence-context-ref-name>JSFPU</persistence-context-ref-name>
    <persistence-unit-name>JSFPU</persistence-unit-name>
</persistence-context-ref>

We will use this configuration to do the JNDI lookup. Running our application: now we just need to start up the application.
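The post never shows the body of PlayerDAO.findPlayers. With JPA it would typically delegate to a query using setFirstResult(startingAt) and setMaxResults(maxPerPage). The offset/limit contract that load() passes down can be sketched with an in-memory list (the class and method names below are illustrative, not from the original source):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the pagination contract behind findPlayers(startingAt, maxPerPage).
// With JPA the equivalent would be:
//   em.createQuery("select p from Player p", Player.class)
//     .setFirstResult(startingAt)   // row offset supplied by load()
//     .setMaxResults(maxPerPage)    // page size supplied by load()
//     .getResultList();
public class PageSlice {

    // Returns the page beginning at row 'startingAt', with at most 'maxPerPage' rows.
    static <T> List<T> page(List<T> all, int startingAt, int maxPerPage) {
        int from = Math.min(startingAt, all.size());
        int to = Math.min(from + maxPerPage, all.size());
        return all.subList(from, to);
    }

    public static void main(String[] args) {
        List<Integer> rows = new ArrayList<>();
        for (int i = 0; i < 23; i++) rows.add(i);

        System.out.println(page(rows, 10, 10)); // second page of 10 rows
        System.out.println(page(rows, 20, 10)); // last, partial page of 3 rows
    }
}
```

This also makes clear why load() receives a row offset rather than a page number: the paginator computes startingAt = page * rows for you.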
To access the application you can use the link: http://localhost:8080/DatatableLazyPrimefaces/ Click here to download the source code of this post. Reference: Lazy JSF Datatable Pagination (Primefaces) from our JCG partner Hebert Coelho at the uaiHebert blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact