
Java EE 7 with Angular JS – Part 1

Today’s post will show you how to build a very simple application using Java EE 7 and Angular JS. Before going there, let me tell you a brief story: I have to confess that I was never a big fan of JavaScript, but I still remember the first time I used it. I don’t remember the exact year, but it was probably around the mid 90’s. I had a page with 3 frames (yes, frames! Remember those? They were very popular around that time) and I wanted to reload 2 frames when I clicked a link on the 3rd frame. At the time, JavaScript was used to do some fancy stuff on webpages; not every browser had JavaScript support and some even required you to turn it on. Fast forward to today and the landscape has changed dramatically. JavaScript is a full development stack now and you can develop entire applications written only in JavaScript. Unfortunately for me, sometimes I still think I’m back in the 90’s and don’t give enough credit to JavaScript, so this is my attempt to get to know JavaScript better.

Why Java EE 7? Well, I like Java and the new Java EE version is pretty good: less verbose and very fast using Wildfly or Glassfish. It provides you with a large set of specifications to suit your needs and it’s a standard in the Java world.

Why Angular JS? I’m probably following the big hype around Angular here. Since I don’t have much experience with JavaScript I don’t know the offerings very well, so I’m just following the advice of some friends, and I have also noticed a big acceptance of Angular at the last Devoxx. Every room with an Angular talk was full, so I wanted to give it a try and find out for myself.

The Application

The application is a simple list with pagination and a REST service that feeds the list data. Every time I start a new enterprise project it’s usually the first thing we code: create a table, store some data and list some random data, so I think it’s appropriate.

The Setup

- Java EE 7
- Angular JS
- ng-grid
- UI Bootstrap
- Wildfly

The Code (finally!)

Backend – Java EE 7

Starting with the backend, let’s define a very simple entity class (some code is omitted for simplicity):

Person.java

```java
@Entity
public class Person {
    @Id
    private Long id;

    private String name;

    private String description;
}
```

If you’re not familiar with the Java EE JPA specification, this allows you to map an object class to a database table by using the annotation @Entity to connect to the database table with the same name, and the annotation @Id to identify the table primary key.
This is followed by a persistence.xml:

persistence.xml

```xml
<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.1"
             xmlns="http://xmlns.jcp.org/xml/ns/persistence"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence
                                 http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd">
    <persistence-unit name="myPU" transaction-type="JTA">
        <properties>
            <property name="javax.persistence.schema-generation.database.action" value="drop-and-create"/>
            <property name="javax.persistence.schema-generation.create-source" value="script"/>
            <property name="javax.persistence.schema-generation.drop-source" value="script"/>
            <property name="javax.persistence.schema-generation.create-script-source" value="sql/create.sql"/>
            <property name="javax.persistence.schema-generation.drop-script-source" value="sql/drop.sql"/>
            <property name="javax.persistence.sql-load-script-source" value="sql/load.sql"/>
        </properties>
    </persistence-unit>
</persistence>
```

Two of my favourite new features in Java EE 7: you can now run SQL scripts in a standard way by using the javax.persistence.schema-generation.* properties, and it also binds you to a default datasource if you don’t provide one. So in this case the application is going to use the internal Wildfly H2 database.

Finally, to provide the list data we need to query the database and expose it as a REST service:

PersonResource.java

```java
@Stateless
@ApplicationPath("/resources")
@Path("persons")
public class PersonResource extends Application {
    @PersistenceContext
    private EntityManager entityManager;

    private Integer countPersons() {
        Query query = entityManager.createQuery("SELECT COUNT(p.id) FROM Person p");
        return ((Long) query.getSingleResult()).intValue();
    }

    @SuppressWarnings("unchecked")
    private List<Person> findPersons(int startPosition, int maxResults, String sortFields, String sortDirections) {
        Query query = entityManager.createQuery("SELECT p FROM Person p ORDER BY " + sortFields + " " + sortDirections);
        query.setFirstResult(startPosition);
        query.setMaxResults(maxResults);
        return query.getResultList();
    }

    public PaginatedListWrapper<Person> findPersons(PaginatedListWrapper<Person> wrapper) {
        wrapper.setTotalResults(countPersons());
        int start = (wrapper.getCurrentPage() - 1) * wrapper.getPageSize();
        wrapper.setList(findPersons(start, wrapper.getPageSize(), wrapper.getSortFields(), wrapper.getSortDirections()));
        return wrapper;
    }

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public PaginatedListWrapper<Person> listPersons(@DefaultValue("1") @QueryParam("page") Integer page,
                                                    @DefaultValue("id") @QueryParam("sortFields") String sortFields,
                                                    @DefaultValue("asc") @QueryParam("sortDirections") String sortDirections) {
        PaginatedListWrapper<Person> paginatedListWrapper = new PaginatedListWrapper<>();
        paginatedListWrapper.setCurrentPage(page);
        paginatedListWrapper.setSortFields(sortFields);
        paginatedListWrapper.setSortDirections(sortDirections);
        paginatedListWrapper.setPageSize(5);
        return findPersons(paginatedListWrapper);
    }
}
```

The code is just like a normal Java POJO, but uses Java EE annotations to enhance the behaviour. @ApplicationPath("/resources") and @Path("persons") expose the REST service at the URL yourdomain/resources/persons, @GET marks the logic to be called by the HTTP GET method, and @Produces(MediaType.APPLICATION_JSON) formats the REST response as JSON. Pretty cool with only a few annotations.
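To see the endpoint in action, you could call it with the JAX-RS 2.0 client API that ships with Java EE 7. The sketch below is not part of the original project; the base URL and the deployment context root are assumptions for illustration.

```java
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;

public class PersonResourceClient {

    public static void main(String[] args) {
        Client client = ClientBuilder.newClient();
        // Assumes the application is deployed locally under the context root "javaee7-angular".
        String json = client.target("http://localhost:8080/javaee7-angular/resources/persons")
                .queryParam("page", 1)
                .queryParam("sortFields", "id")
                .queryParam("sortDirections", "asc")
                .request(MediaType.APPLICATION_JSON)
                .get(String.class);
        System.out.println(json); // Paginated JSON produced by PersonResource
        client.close();
    }
}
```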
To make it a little easier to exchange the needed information for the paginated list, I have also created the following wrapper class:

PaginatedListWrapper.java

```java
public class PaginatedListWrapper<T> {
    private Integer currentPage;
    private Integer pageSize;
    private Integer totalResults;

    private String sortFields;
    private String sortDirections;
    private List<T> list;
}
```

And we are done with the backend stuff.

UI – Angular JS

To display the data we are going to use Angular JS. Angular extends traditional HTML with additional custom tag attributes to bind data represented in JavaScript variables, following an MVC approach. So, let’s look at our HTML page:

index.html

```html
<!DOCTYPE html>
<!-- Declares the root element that allows behaviour to be modified through Angular custom HTML tags. -->
<html ng-app="persons">
<head>
    <title></title>
    <script src="lib/angular.min.js"></script>
    <script src="lib/jquery-1.9.1.js"></script>
    <script src="lib/ui-bootstrap-0.10.0.min.js"></script>
    <script src="lib/ng-grid.min.js"></script>

    <script src="script/person.js"></script>

    <link rel="stylesheet" type="text/css" href="lib/bootstrap.min.css"/>
    <link rel="stylesheet" type="text/css" href="lib/ng-grid.min.css"/>
    <link rel="stylesheet" type="text/css" href="css/style.css"/>
</head>

<body>
<br>

<div class="grid">
    <!-- Specify a JavaScript controller script that binds Javascript variables to the HTML.-->
    <div ng-controller="personsList">
        <!-- Binds the grid component to be displayed. -->
        <div class="gridStyle" ng-grid="gridOptions"></div>

        <!-- Bind the pagination component to be displayed. -->
        <pagination direction-links="true" boundary-links="true"
                    total-items="persons.totalResults" page="persons.currentPage"
                    items-per-page="persons.pageSize" on-select-page="refreshGrid(page)">
        </pagination>
    </div>
</div>
</body>
</html>
```

Apart from the JavaScript and CSS declarations there is very little code in there. Very impressive. Angular also has a wide range of ready-to-use components, so I’m using ng-grid to display the data and UI Bootstrap, which provides a pagination component. ng-grid also has a pagination component, but I liked the UI Bootstrap pagination component more.

There is still something missing: the JavaScript file where everything happens.

person.js

```javascript
var app = angular.module('persons', ['ngGrid', 'ui.bootstrap']);
// Create a controller with name personsList to bind to the html page.
app.controller('personsList', function ($scope, $http) {
    // Makes the REST request to get the data to populate the grid.
    $scope.refreshGrid = function (page) {
        $http({
            url: 'resources/persons',
            method: 'GET',
            params: {
                page: page,
                sortFields: $scope.sortInfo.fields[0],
                sortDirections: $scope.sortInfo.directions[0]
            }
        }).success(function (data) {
            $scope.persons = data;
        });
    };

    // Do something when the grid is sorted.
    // The grid throws the ngGridEventSorted that gets picked up here and assigns the sortInfo to the scope.
    // This will allow to watch the sortInfo in the scope for changes and refresh the grid.
    $scope.$on('ngGridEventSorted', function (event, sortInfo) {
        $scope.sortInfo = sortInfo;
    });

    // Watch the sortInfo variable. If changes are detected then we need to refresh the grid.
    // This also works for the first page access, since we assign the initial sorting in the initialize section.
    $scope.$watch('sortInfo', function () {
        $scope.refreshGrid($scope.persons.currentPage);
    }, true);

    // Initialize required information: sorting, the first page to show and the grid options.
    $scope.sortInfo = {fields: ['id'], directions: ['asc']};
    $scope.persons = {currentPage: 1};
    $scope.gridOptions = {
        data: 'persons.list',
        useExternalSorting: true,
        sortInfo: $scope.sortInfo
    };
});
```

The JavaScript code is very clean and organised. Notice how everything gets added to an app controller, allowing you to have a clear separation of concerns in your business logic. To implement the required behaviour we just need to add a few functions to refresh the list by calling our REST service and to monitor the grid data to refresh the view. This is the end result:

Next Steps

For the following posts in this series, I’m planning to:

- Implement filtering
- Implement a detail view
- Implement next/previous browsing
- Deploy in the cloud
- Manage JavaScript dependencies

Resources

You can clone a full working copy from my github repository and deploy it to Wildfly. You can find instructions there to deploy it. It should also work on Glassfish.

Java EE – Angular JS Source

Update

In the meantime I have updated the original code with the post about managing JavaScript dependencies. Please download the original source of this post from release 1.0. You can also clone the repo and check out the tag from release 1.0 with the following command: git checkout 1.0.

I hope you enjoyed the post! Let me know if you have any comments about it.

Reference: Java EE 7 with Angular JS – Part 1 from our JCG partner Roberto Cortez at the Roberto Cortez Java Blog blog.

How to compose html emails in Java with Spring and Velocity

In this post I will present how you can format and send automatic emails with Spring and Velocity. On its own, Spring offers the capability to create simple text emails, which is fine for simple cases, but in a typical enterprise application you wouldn’t want to do that for a number of reasons:

- creating HTML-based email content in Java code is tedious and error prone
- there is no clear separation between display logic and business logic
- changing the display structure of the email requires writing Java code, recompiling, redeploying, etc.

Typically the approach taken to address these issues is to use a template library such as FreeMarker or Velocity to define the display structure of the email content. For Podcastpedia I chose Velocity, which is a free open source Java-based templating engine from Apache. In the end my only coding task will be to create the data that is to be rendered in the email template and to send the email. I will base the demonstration on a real scenario from Podcastpedia.org.

Scenario

On Podcastpedia.org’s Submit podcast page, we encourage our visitors and podcast producers to submit their podcasts to be included in our podcast directory. Once a podcast is submitted, an automatic email will be generated to notify me (adrianmatei [AT] gmail DOT com) and the Podcastpedia personnel (contact [AT] podcastpedia DOT org) about it.

Let’s see now how Spring and Velocity play together:

1. Prerequisites

1.1. Spring setup

“The Spring Framework provides a helpful utility library for sending email that shields the user from the specifics of the underlying mailing system and is responsible for low level resource handling on behalf of the client.”[1]

1.1.1. Library dependencies

The following additional jars need to be on the classpath of your application in order to be able to use the Spring Framework’s email library:

- The JavaMail mail.jar library
- The JAF activation.jar library

I load these dependencies with Maven, so here’s the configuration snippet from the pom.xml:

Spring mail dependencies

```xml
<dependency>
    <groupId>javax.mail</groupId>
    <artifactId>mail</artifactId>
    <version>1.4.7</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>jaf</groupId>
    <artifactId>activation</artifactId>
    <version>1.0.2</version>
    <scope>provided</scope>
</dependency>
```

1.2. Velocity setup

To use Velocity to create your email template(s), you will need to have the Velocity libraries available on your classpath in the first place. With Maven you have the following dependencies in the pom.xml file:

Velocity dependencies in Maven

```xml
<!-- velocity -->
<dependency>
    <groupId>org.apache.velocity</groupId>
    <artifactId>velocity</artifactId>
    <version>1.7</version>
</dependency>
<dependency>
    <groupId>org.apache.velocity</groupId>
    <artifactId>velocity-tools</artifactId>
    <version>2.0</version>
</dependency>
```

2. Email notification service

I defined the EmailNotificationService interface for email notification after a successful podcast submission. It has just one operation, namely to notify the Podcastpedia personnel about the proposed podcast.
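The article links to the interface source rather than listing it; below is a minimal sketch of what such a single-operation interface looks like. The package and method signature are taken from the implementation shown next; anything else (Javadoc wording, for instance) is an assumption.

```java
package org.podcastpedia.web.suggestpodcast;

/**
 * Notifies the Podcastpedia personnel that a new podcast has been suggested.
 */
public interface EmailNotificationService {

    void sendSuggestPodcastNotification(SuggestedPodcast suggestedPodcast);
}
```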
The code below presents the EmailNotificationServiceImpl, which is the implementation of the interface mentioned above:

Java code to send notification email

```java
package org.podcastpedia.web.suggestpodcast;

import java.util.Date;
import java.util.HashMap;
import java.util.Map;

import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

import org.apache.velocity.app.VelocityEngine;
import org.podcastpedia.common.util.config.ConfigService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.mail.javamail.JavaMailSender;
import org.springframework.mail.javamail.MimeMessageHelper;
import org.springframework.mail.javamail.MimeMessagePreparator;
import org.springframework.ui.velocity.VelocityEngineUtils;

public class EmailNotificationServiceImpl implements EmailNotificationService {

    @Autowired
    private ConfigService configService;

    private JavaMailSender mailSender;

    private VelocityEngine velocityEngine;

    public void sendSuggestPodcastNotification(final SuggestedPodcast suggestedPodcast) {
        MimeMessagePreparator preparator = new MimeMessagePreparator() {

            @SuppressWarnings({ "rawtypes", "unchecked" })
            public void prepare(MimeMessage mimeMessage) throws Exception {
                MimeMessageHelper message = new MimeMessageHelper(mimeMessage);
                message.setTo(configService.getValue("EMAIL_TO_SUGGEST_PODCAST"));
                message.setBcc("adrianmatei@gmail.com");
                message.setFrom(new InternetAddress(suggestedPodcast.getEmail()));
                message.setSubject("New suggested podcast");
                message.setSentDate(new Date());

                Map model = new HashMap();
                model.put("newMessage", suggestedPodcast);

                String text = VelocityEngineUtils.mergeTemplateIntoString(
                        velocityEngine, "velocity/suggestPodcastNotificationMessage.vm", "UTF-8", model);
                message.setText(text, true);
            }
        };
        mailSender.send(preparator);
    }

    // getters and setters omitted for brevity
}
```

Let’s go a little bit through the code now:

2.1. JavaMailSender and MimeMessagePreparator

The org.springframework.mail package is the root level package for the Spring Framework’s email support. The central interface for sending emails is the MailSender interface, but we are using the org.springframework.mail.javamail.JavaMailSender interface, which adds specialized JavaMail features such as MIME message support to the MailSender interface (from which it inherits). JavaMailSender also provides a callback interface for the preparation of JavaMail MIME messages, called org.springframework.mail.javamail.MimeMessagePreparator.

2.2. MimeMessageHelper

Another helpful class when dealing with JavaMail messages is the org.springframework.mail.javamail.MimeMessageHelper class, which shields you from having to use the verbose JavaMail API. As you can see, by using the MimeMessageHelper it becomes pretty easy to create a MimeMessage:

Usage of MimeMessageHelper

```java
MimeMessageHelper message = new MimeMessageHelper(mimeMessage);
message.setTo(configService.getValue("EMAIL_TO_SUGGEST_PODCAST"));
message.setBcc("adrianmatei@gmail.com");
message.setFrom(new InternetAddress(suggestedPodcast.getEmail()));
message.setSubject("New suggested podcast");
message.setSentDate(new Date());
```
2.3. VelocityEngine

The next thing to note is how the email text is being created:

Create email text with a Velocity template

```java
Map model = new HashMap();
model.put("newMessage", suggestedPodcast);

String text = VelocityEngineUtils.mergeTemplateIntoString(
        velocityEngine, "velocity/suggestPodcastNotificationMessage.vm", "UTF-8", model);
message.setText(text, true);
```

- the VelocityEngineUtils.mergeTemplateIntoString method merges the specified template (suggestPodcastNotificationMessage.vm, present in the velocity folder on the classpath) with the given model ("newMessage"), which is a map containing model names as keys and model objects as values
- you also need to specify the velocityEngine you work with and, finally, the result is returned as a string

2.3.1. Create the Velocity template

You can see below the Velocity template that is being used in this example. Note that it is HTML-based, and since it is plain text it can be created using your favorite HTML or text editor.

Velocity template

```html
<html>
<body>
<h3>Hi Adrian, you have a new suggested podcast!</h3>
<p>
    From - ${newMessage.name} / ${newMessage.email}
</p>
<h3>Podcast metadataline</h3>
<p>
    ${newMessage.metadataLine}
</p>
<h3>With the message</h3>
<p>
    ${newMessage.message}
</p>
</body>
</html>
```

2.4. Beans configuration

Let’s see how everything is configured in the application context:

Email service configuration

```xml
<!-- ********************************* email service configuration ******************************* -->
<bean id="smtpSession" class="org.springframework.jndi.JndiObjectFactoryBean">
    <property name="jndiName" value="java:comp/env/mail/Session"/>
</bean>

<bean id="mailSender" class="org.springframework.mail.javamail.JavaMailSenderImpl">
    <property name="session" ref="smtpSession" />
</bean>

<bean id="velocityEngine" class="org.springframework.ui.velocity.VelocityEngineFactoryBean">
    <property name="velocityProperties">
        <value>
            resource.loader=class
            class.resource.loader.class=org.apache.velocity.runtime.resource.loader.ClasspathResourceLoader
        </value>
    </property>
</bean>

<bean id="emailNotificationServiceSuggestPodcast"
      class="org.podcastpedia.web.suggestpodcast.EmailNotificationServiceImpl">
    <property name="mailSender" ref="mailSender"/>
    <property name="velocityEngine" ref="velocityEngine"/>
</bean>
```

- the JavaMailSender has a JNDI reference to an SMTP session. A generic example of how to configure an email session with a Google account can be found in the Jetty9-gmail-account.xml file (a programmatic sketch is also shown after the summary below)
- the VelocityEngineFactoryBean is a factory that configures the VelocityEngine and provides it as a bean reference
- the ClasspathResourceLoader is a simple loader that will load templates from the classpath

Summary

You’ve learned in this example how to compose HTML emails in Java with Spring and Velocity. All you need are the mail, Spring and Velocity libraries; compose your email template and use those simple Spring helper classes to add metadata to the email and send it.
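As a complement to the JNDI-based mail session referenced in the bean configuration above, a JavaMailSenderImpl can also be set up programmatically, which is handy for local development. This is my own sketch, not taken from the Jetty9-gmail-account.xml file; the host, port and placeholder credentials are assumptions.

```java
import java.util.Properties;

import org.springframework.mail.javamail.JavaMailSenderImpl;

public class MailSenderFactory {

    // Builds a mail sender for a Gmail account; replace the placeholder credentials.
    public static JavaMailSenderImpl gmailMailSender() {
        JavaMailSenderImpl sender = new JavaMailSenderImpl();
        sender.setHost("smtp.gmail.com");
        sender.setPort(587);
        sender.setUsername("your.account@gmail.com");
        sender.setPassword("your-app-password");

        Properties props = sender.getJavaMailProperties();
        props.put("mail.transport.protocol", "smtp");
        props.put("mail.smtp.auth", "true");
        props.put("mail.smtp.starttls.enable", "true");
        return sender;
    }
}
```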
Resources

Source code – GitHub repositories

Podcastpedia-web

- org.podcastpedia.web.suggestpodcast.EmailNotificationService.java – Java interface for email notification
- org.podcastpedia.web.suggestpodcast.EmailNotificationServiceImpl.java – Java implementation of the interface
- main / resources / suggestPodcastNotificationMessage.vm – Velocity template
- src / main / resources / config / Jetty9-gmail-account.xml – example email session configuration for a Gmail account

Podcastpedia-common

- src / main / resources / spring / pcm-common.xml – email related bean configuration in the Spring application context

Web

- Spring Email integration
- Apache Velocity Project

Reference: How to compose html emails in Java with Spring and Velocity from our JCG partner Adrian Matei at the Codingpedia.org blog.

Applying S.T.O.P. To Software Development

The acronym STOP (or STOPP) is used by several organizations (United States Army, Hunter’s Ed, Mountain Rescue, Search and Rescue, Boy Scouts of America), often for describing how to cope with wilderness survival situations or other situations when one is lost (especially outdoors). The “S” typically stands for “Stop” (some say it stands for “Sit”), the “T” stands for “Think” (some say “Take a Breath”), the “O” stands for “Observe”, and the “P” stands for “Plan” (some say it stands for “Prepare”). When there is a second “P”, that typically stands for “Proceed.” In other words, the best approach to wilderness survival is to stop, think, observe, and plan before proceeding (taking action). Proceeding without a plan based on thinking and observation is rarely a good idea for those in a survival situation. Our approaches to developing software and fixing issues with existing software can benefit from the general guidance STOP provides. In this, my 1000th blog post, I look at applying the principles of STOP to software development.

Developing New Software

For many of us who consider ourselves software developers, programmers, or even software engineers, it can be difficult to ignore the impulse to jump right in and write some code. This is especially true when we’re young and relatively inexperienced with the costs associated with that approach. These costs can include bad (or no) overall design and spaghetti code. Code written with this approach often suffers from “stream of consciousness programming” syndrome, in which the code comes out in the way one is thinking it. The problem with “stream of consciousness” programming is that it may only be coherent to the author at that particular moment and not later when outside of that “stream of consciousness.” It is likely not to be coherent to anyone else. By first considering at least at a high level how to organize the code, the developer is more likely to build something that he or she and others will understand later. At some point, we all write lines of code based on our “stream of consciousness,” but that’s much more effective if it’s implementing a small number of lines in well-defined methods and classes.

When implementing a new feature, the software developer generally benefits from taking the following general steps:

- Stop:
  - Be patient and don’t panic. Don’t allow schedule pressure to force you into hasty decisions that may not save any time in the long run and can lead to problematic code that you and others will have to deal with over the long run.
  - Gather available facts such as what the desired functionality is (customer requirements or expressed desires).
- Think:
  - Consider what portions of the desired new feature might already be provided.
  - Consider alternative approaches that might be used to implement the desired feature.
  - Consider which existing tools, libraries, and people are already available that might satisfy the need or help satisfy it.
  - Consider design and architectural implications related to existing functionality and likely potential future enhancements.
- Observe:
  - Confirm available existing tools and libraries that might be used to implement the new feature or which could be expanded to work with the new feature.
  - If necessary, search for blogs, forums, and other sources of information on approaches, libraries, and tools that might be used to implement the new feature.
  - Use others’ designs and code as inspiration for how to implement similar features to what they implemented (or, in some cases, how not to implement a similar feature).
- Plan:
  - “Design” the implementation. In simpler cases, this may simply be a mental step without any formal tools or artifacts.
  - If adhering to test-driven development principles, plan the tests to write first. Even if not strictly applying TDD, make testability part of your consideration of how to design the software.
  - Allocate/estimate the time needed to implement or the effort needed to accomplish this, even if you call it by a fancy name such as Story Points.
- Proceed:
  - Implement and test and implement and test functionality.
  - Get feedback on implemented functionality from customers and other stakeholders and repeat the cycle as necessary.

The above are just some of the practices that can go into applying the STOP principle to new software development. There are more that could be listed. These steps, especially for simpler cases, might take just a few minutes to accomplish, but those extra few minutes can lead to more readable and maintainable code. These steps can also prevent pollution of an existing baseline and can, in some cases, be the only way to get to a “correct” result. More than once, I have found myself undoing a bunch of stream of consciousness programming (or doing some significant code changing) because I did not apply these simple steps before diving into coding.

Fixing and Maintaining Software

When fixing a bug in the software, it is very easy to make the mistake of fixing a symptom rather than the root cause of the problem. Fixing the symptom might bring the short-term benefit of addressing an obviously wrong behavior or outcome, but it often hides a deeper problem that may manifest itself with other negative symptoms or, even worse, might contribute to other undetected but significant problems. Applying STOP to fixing bugs can help address these issues.

- Stop:
  - Be patient and don’t panic. Don’t allow schedule pressure to force you into merely covering up a potentially significant problem. Although a bad enough (in terms of financial loss or loss of life) problem may require you to quickly address the symptom, ensure that the root cause is addressed in a timely fashion as well.
- Think:
  - Consider anything you or your team recently added to the baseline that may have introduced this bug or that may have revealed a pre-existing bug.
  - Consider the effects/costs of this bug and determine where fixing the bug falls in terms of priority.
  - Consider whether this bug could be related to any other issues you’re already aware of.
  - Consider whether this “bug” is really a misunderstood feature or a user error before fixing something that wasn’t broken.
- Observe:
  - Evaluate appropriate pieces of evidence to start determining what went wrong. These might be one or more of the following: reading the code itself and thinking through its flows, logs, a debugger, tools (IDE warnings and hints; Java examples include VisualVM, jstack, jmap, JConsole), application output, the defect description, etc.
  - Building on the Thinking step:
    - Ensure that unit tests and other tests ran without reporting any breakage or issue.
    - Evaluate the revision history in your configuration management system to see if anything looks suspicious in terms of being related to the bug (the same class or a dependency class changed, for example).
    - Evaluate whether any existing bugs/JIRAs in your database seem to be related. Even resolved defects can provide clues as to what is wrong or may have been reintroduced.
- Plan:
  - Plan new unit test(s) that can be written to find this type of defect in the future in case it is reintroduced at some point, and as a part of your confirmation that you have resolved the defect.
  - Plan/design the solution for this defect. At this point, if the most thorough solution is considered prohibitively expensive, you may need to choose a cheaper solution, but you are doing so based on knowledge and a deliberate decision rather than just doing what’s easiest and sweeping the real problem under the rug. Document in the DR’s/JIRA’s resolution that this decision was made and why it was made.
  - Plan for schedule time to implement and test this solution.
- Proceed:
  - Implement and test and implement and test functionality.
  - Get feedback on implemented functionality from customers and other stakeholders and repeat the cycle as necessary.

There are other tactics and methodologies that might be useful in resolving defects in our code as part of the STOP approach. The important thing is to dedicate at least a small amount of time to really thinking about the problem at hand before diving in and ending up, in some cases, “fixing” the problem multiple times until the actual problem (the root cause) is really fixed.

Conclusion

Most software developers have a tendency to dive right in and implement a new feature or fix a broken feature as quickly as possible. However, even a small amount of time applying S.T.O.P. in our development process can bring benefits of more efficiency and a better product. Stopping to think, observe, and plan before proceeding is as effective in software development as it is in wilderness survival. Although the stakes often aren’t as high in software development as they are in wilderness survival, there is no reason we cannot still benefit from remembering and adhering to the principles of S.T.O.P.

Reference: Applying S.T.O.P. To Software Development from our JCG partner Dustin Marx at the Inspired by Actual Events blog.

Caching Architecture (Adobe AEM) – Part 1

Cache (as defined by Wikipedia) is a component that transparently stores data so that future requests for that data can be served faster. I hereby presume that you understand cache as a component and any architectural patterns around caching, and with this presumption I will not go into the depths of caching in this article. This article will cover some of the very basic fundamentals of caching (wherever relevant) and then take a deep dive into a point-of-view on caching architecture with respect to a content management platform, in the context of an Adobe AEM implementation.

Problem Statement

Principles for high performance and high availability don’t change, but for conversation’s sake let’s assume we have a website where we have to meet the following needs:

- 1 billion hits on a weekend (a hit is defined as a call to a resource and includes static resources like CSS, JS, images, etc.)
- 700 million hits in a day
- 7.2 million page views in a day
- 2.2 million page views in an hour
- 80K hits in a second
- 40K page views in a minute
- 612 page views in a second
- 24x7 site availability
- 99.99% uptime
- Content availability to consumers in under 5 minutes from the time editors publish content

While the data looks steep, the use case is not an uncommon one. In the current world where everyone is moving to devices and digital, there will be cases when brands are running campaigns. When those campaigns are running there will be a need to support such steep loads. These loads don’t stay for long, but when they come they come fast and they come thick, and we will have to support them. For the record, this is not some random theory I am writing: I have had the opportunity of being on a project (which I can’t name) where we supported similar numbers. The use case I picked here is of a digital media platform where a large portion of the content is static, but the principles I am going to talk about here will apply to any other platform or application.

The problems that we want to solve here are:

- Performance: Caching is a pattern that we employ to increase the overall performance of the application by storing the (processed) data in a store that is a) closest to the consumer of the data and b) accessible quickly
- Scalability: In cases when we need to make the same data-set available to various consumers of the system, caching as a pattern makes it possible for us to scale the systems much better. Caching, as we discussed earlier, allows us to have processed data, which takes away the need to run the same processing time and again and facilitates scalability
- Availability: Building on principles similar to those of scalability, caching allows us to put data in places where systems/components can survive outages, be it of the network or of other components. While it may lead to surfacing stale data at points, the systems are still available to the end users

Adobe AEM Perspective

Adobe AEM as an application container (let’s remember AEM is not built on top of a true application container, though you can deploy it in one) has its own nuances with scalability. In this article I will not dive into the scalability aspects, but as you scale the AEM publishers horizontally it leads to an increase in several concerns around operations, licensing, cost, etc. The OOTB architecture and components we get with Adobe AEM themselves tell you to make use of cache on the web servers (using the dispatcher). However, when you have to support the Non Functional Requirements (NFRs) I listed above, the standard OOTB architecture falls short and would need a massive infrastructure.
We can’t just set up CQ with some backend servers and an Apache front-ending with a local cache, throw hardware at it with capacity and hope it will come together magically. As I explained, there is no magic in this world and everything needs to happen via some sort of science. Let’s put in perspective the fact that a standard Apache web server can handle a few thousand requests in a second, while we need to handle 80K hits in a second, which includes resources like HTML, JS, CSS, images, etc., with a variety of sizes. Without going into the sizing aspects, it is pretty clear that you would need not just a cluster of servers but a farm of servers to cater to all that traffic. With a farm of servers, you get yourself a nightmare of setting up an organization and processes around operations and maintenance to ensure that you keep the site up and running 24x7.

Solution

Cache, cache and cache.

Definitions

Before we dive into the design, we need to understand some key aspects of caching. These concepts will be talked about in the point-of-view below and it is critical that you understand these terminologies clearly.

- Cache miss refers to a scenario when a process requests data from a cache store and the object does not exist in the store
- Cache hit refers to a scenario when a process requests data from a cache store and the data is available in the store. This event can only happen when a cache object for the request has been primed
- Cache prime is a term associated with the process where we fill up the caching storage with data. There are two ways in which this can be achieved:
  - Pre-prime is the method where we run a process (generally at the startup of the application) proactively to load all the various objects whose state we are aware of and can cache. This can be achieved either by using an asynchronous process or by an event notification
  - OnDemand-prime is a method where the cache objects are primed in real time, i.e. when a process which needs the resource does not find it in the cache store, the cache is primed up as the resource is served back to the process itself
- Expiration is a mechanism that allows the cache controller to remove objects from memory based on either a time duration or an event:
  - TTL, known as Time To Live, is a method which defines a time duration for which data can live on a computer. In the caching world this is a common expiration strategy where a cache object is expired (flagged to be evicted if needed) based on a time duration provided when the cache object is created
  - Event-based (I can’t find a standard naming convention for this) cache expiration is a method where we can fire an event to mark a cache object as expired, so that if needed it can be evicted from memory to make way for new objects or re-primed on the next request
- Eviction is a mechanism where cache objects are removed from the cache memory to optimize for space

The Design // Where Data Should be Cached

This point-of-view builds on top of an architecture designed by a team which I was a part of (I was the load architect) for a digital media platform. The NFRs we spoke about earlier are the ones that we have successfully supported on the platform during a weekend as part of a campaign the brand was running. Also, since then we continue to support very high traffic weeks every time there are such events and campaigns. The design that we have in place takes care of various layers of caching in the architecture.
Local Clients

When we talk about such a high traffic load, we must understand that this traffic has certain characteristics that work in our favor:

- All this traffic is generated by a smaller subset of people who access the site. In our case, when we say that we serve 1 billion hits on a single weekend, it is worthwhile to note that there are only 300,000 visitors on the site on that day who generate this much load
- A very large portion of all this traffic is static in nature (this is also one of the characteristics of a digital media platform); this constitutes resources like JavaScript and CSS and also media assets like images and videos. These are files which, once deployed, seldom change, or change with a release that doesn’t happen every day
- As the users interact with the site/platform, there is content and data which maps back to their profile and preferences; it does not necessarily change frequently and it is managed directly and only by the users themselves

With these characteristics in play, there are things which right off the bat we can cache on the client’s machine, i.e. the browser (or device). The mime-types which qualify for client-side caching are images, fonts, CSS and JavaScript. Some of these can be cached infinitely while others should be cached for a medium-ish duration like a couple of days and what have you.

Akamai (CDN)

Content Delivery Networks (CDNs) are service providers that enable serving content to end consumers with high performance and availability. To know more about CDN networks, you can read here. In the overall architecture the CDN plays a very critical role. Akamai, AWS’s CloudFront and CloudFlare are some of the CDN providers with which we integrate very well. CDN for us, over other things, provides a highly available architecture. Some of these CDNs give you the ability to configure your environment such that if the origin servers (pointing to your data center) are unavailable, they continue to serve content for a limited time from their local cache. This essentially means that while the backend services may be down, the consumer-facing site is never down. Some aspects of the platform like content delivery and new publications are affected, and in certain cases like breaking news we may have an impact on an SLA, but the consumers/end users never see your site as unavailable.

In our architecture we use the CDN to cache all the static content, be it HTML pages, images or videos. Once static content is published via the Content Delivery Network, those pages are cached on the CDN for a certain duration. These durations are determined based on the refresh duration, but the underlying philosophy is to break the content into tiers of Platinum, Gold, Silver and then assign a duration for which each of these would be cached. In a platform like NFL where we are, say, pushing game feeds, these need to be classified as Platinum and they had a TTL of 5 seconds, while content types like the home page, news (not breaking news), etc. have a TTL of 10 minutes and have been classified as Gold. Then on the same project we have a TTL of 2 hours (or so) for sections like search, which have been classified as Bronze. The intent was to identify and classify, if not all, most of the key sections and ensure that we leverage the CDN cache effectively.
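One common way to drive such tiered TTLs from the origin is to emit Cache-Control headers per content tier, which most CDNs can be told to honor. This is an assumption on my part (the article configures the durations on the CDN itself); a minimal helper could look like this, mirroring the Platinum/Gold/Bronze classification above.

```java
import javax.servlet.http.HttpServletResponse;

public final class CacheTierHeaders {

    // TTLs per tier, in seconds, mirroring the classification described above.
    public enum Tier {
        PLATINUM(5), GOLD(600), BRONZE(7200);

        private final int maxAgeSeconds;

        Tier(int maxAgeSeconds) {
            this.maxAgeSeconds = maxAgeSeconds;
        }

        public int maxAgeSeconds() {
            return maxAgeSeconds;
        }
    }

    private CacheTierHeaders() {
    }

    // Apply the tier's TTL to the response so downstream caches (CDN, browser) can honor it.
    public static void apply(HttpServletResponse response, Tier tier) {
        response.setHeader("Cache-Control", "public, max-age=" + tier.maxAgeSeconds());
    }
}
```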
We have observed that even for shorter TTLs like Platinum, with an increase/spike in traffic the offload percentage (defined as the number of hits served by the CDN divided by the number of hits sent to the backend) grows, and it touched a peak of 99.9%, where the average offload percentage is around 98%.

Varnish

Varnish is a web accelerator (if I may classify it as a web server on steroids). If you are hearing its name for the first time, I strongly urge you to hop over here to get to know more about it. We introduced Varnish as a layer to solve for the following:

- Boost performance (reduce the number of servers) – We have realized that Varnish, being an in-memory accelerator, gives you a boost of anywhere between 5x-10x over using Apache. This basically means that you can handle several times the load with Varnish sitting on top of Apache. We had done rigorous testing to prove these numbers out. The x-factor was mostly dependent on the page size, aka the amount of content we loaded over the network
- Avoid DoS attacks – We realized that in cases where you see a large influx of traffic coming into your servers (directed and intentional, or arbitrary) and you want to block such traffic, your chances of successfully blocking the traffic on Varnish without bringing down the server, compared to doing the same on Apache, increase many fold. We also use Varnish as a mechanism to block any traffic that we don’t want to hit our infrastructure, which could be spiders and bots from markets and regions not targeted by the campaigns we run
- Avoid the dog-pile effect – If you are hearing this term for the first time, then hop over here: hype-free: Avoiding the dogpile effect. In high traffic situations, and even when you have CDN networks set up, it is quite normal for your infrastructure to be hit by a dog-pile as cache expires. The chances of the dog-pile effect increase as you use lower TTLs. Using Varnish we have set up something that we call a Grace Configuration, where we don’t allow requests for the same URLs to pass through. These are queued, and after a certain while, if the primary request is still not getting through, subsequent requests are served off of stale cache.

Apache

If you haven’t heard about the Apache Web Server, you might have heard about httpd. If none of these ring a bell, then this (Welcome! – The Apache HTTP Server Project) will explain things. AEM’s answer to scale is what sits on this layer and is famously known as the Dispatcher. This is a neat little module which can be installed on the Apache HTTP server and acts as a reverse proxy with a local disk cache. This module only supports one model of cache eviction, which is event based. We can configure either the authoring or the publishing systems to send events for deleting and invalidating cache files on these servers, in which case the next call on this server will be passed back to the publisher. The simplest of the models in the AEM world, and the one also recommended by Adobe, is to let everything invalidate (set statfileslevel = 0 or 1). This design simplifies the page/component design, as now we don’t have to figure out any inter-component dependencies. While this is the simplest thing to do, when we have to support such complex needs it calls for some sophistication in design and setup. I would recommend that this is not the right way to go, as it reduces cache usage.
We made sure that the site hierarchy is such that when content is published we never invalidate the entire site hierarchy, and only relevant and contextual content is what gets evicted (invalidated, in the case of the dispatcher).

Publisher

The AEM publishing layer, which is the last layer in this layer cake, seems like something which should be the simplest and all figured out. That’s not the case. This is where you can be hit most (and it will be below the belt). AEM’s architecture is designed to work in a specific way and if you deviate from it, you are bound to fall into this trap. There are two things you need to be aware of:

- When we start writing components that are heavily dependent on queries, it will eventually lead to the system crumbling. You should be very careful with AEM queries (which depend on the underlying AEM Lucene implementation). This article tells us that we have about 4 layers of caching before anything should hit the publisher. This means that the number of calls that should ever hit this layer is only a minuscule number. From here on, you need to establish how many calls your servers will receive in a second/minute. We have seen in cases where we have used search heavily that AEM’s supported TPS takes a nose dive. I have instances across multiple projects where this number is lower than 5 transactions per second.
- The answer is to build some sort of an application cache (see the sketch after this section), like we used to do in a typical JEE application. This will solve the issue, assuming that content creation, either manually by authors or via ingestion, is limited, which means the load we put on search can be reduced significantly. The caveat you should be aware of is that we are adding one more layer of cache which is difficult to manage, and if you have a cluster of publishers this is one layer which will have a distributed cache across servers and can lead to old pages being cached on the dispatcher. The chances of that happening increase as the number of calls coming into the publishers increases or as the number of servers in the cluster increases.

Reference: Caching Architecture (Adobe AEM) – Part 1 from our JCG partner Kapil Viren Ahuja at the Scratch Pad blog.
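Referring back to the application cache mentioned in the Publisher section above, here is a minimal sketch of the kind of in-memory cache that could front expensive query results. It is not from the original project; the TTL-based eviction, the key/value types and the loader callback are all assumptions for illustration.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// A tiny TTL-based cache: results are recomputed only after their entry expires.
public class QueryResultCache<V> {

    private static final class Entry<V> {
        final V value;
        final long expiresAtMillis;

        Entry(V value, long ttlMillis) {
            this.value = value;
            this.expiresAtMillis = System.currentTimeMillis() + ttlMillis;
        }

        boolean isExpired() {
            return System.currentTimeMillis() > expiresAtMillis;
        }
    }

    private final Map<String, Entry<V>> store = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public QueryResultCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    // Returns the cached value for the key, loading and caching it on a miss or after expiry.
    public V get(String key, Supplier<V> loader) {
        Entry<V> entry = store.get(key);
        if (entry == null || entry.isExpired()) {
            entry = new Entry<>(loader.get(), ttlMillis);
            store.put(key, entry);
        }
        return entry.value;
    }
}
```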

JSR 303 loading messages from an I18N property file

Overview

This article will illustrate how to adapt the JSR 303 validation API to load messages from an I18N property file, while preserving all the benefits of internationalisation and support for multiple languages. To achieve this we are going to implement a custom MessageInterpolator which will be based upon the Spring API for managing I18N messages.

Dependencies

Below are the required Maven dependencies to make this work; the Javax validation and Hibernate validation dependencies are not listed here:

```xml
<dependencies>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-context</artifactId>
        <version>4.0.0.RELEASE</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.webflow</groupId>
        <artifactId>spring-binding</artifactId>
        <version>2.3.2.RELEASE</version>
    </dependency>
</dependencies>
```

Configuration of MessageSource

The first step is the configuration of the MessageSource bean, which is responsible for scanning and indexing the content of the properties files.

```xml
<bean id="messageSource" class="org.springframework.context.support.ResourceBundleMessageSource">
    <property name="defaultEncoding" value="UTF-8"/>
    <property name="basenames">
        <list>
            <value>com.myproject.i18n.MyMessages</value>
            <value>com.myproject.i18n.ErrorMessages</value>
        </list>
    </property>
</bean>
```

MyMessages and ErrorMessages are the properties files we want to scan; the file names support the conventions for multiple languages. For example, if our application must support English and French then we should have: MyMessages_en.properties and MyMessages_fr.properties.

Custom MessageInterpolator

In this custom MessageInterpolator we redefine the way JSR 303 resolves the messages to display; we provide a custom implementation which uses the Spring MessageSource and a MessageBuilder to search for and prepare the message to be displayed.

```java
import java.util.Locale;

import javax.validation.MessageInterpolator;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.binding.message.MessageBuilder;
import org.springframework.context.MessageSource;

public class SpringMessageInterpolator implements MessageInterpolator {

    @Autowired
    private MessageSource messageSource;

    @Override
    public String interpolate(String messageTemplate, Context context) {
        String[] params = (String[]) context.getConstraintDescriptor().getAttributes().get("params");

        MessageBuilder builder = new MessageBuilder().code(messageTemplate);
        if (params != null) {
            for (String param : params) {
                builder = builder.arg(param);
            }
        }

        return builder.build().resolveMessage(messageSource, Locale.FRANCE).getText();
    }

    @Override
    public String interpolate(String messageTemplate, Context context, Locale locale) {
        String[] params = (String[]) context.getConstraintDescriptor().getAttributes().get("params");

        MessageBuilder builder = new MessageBuilder().code(messageTemplate);
        if (params != null) {
            builder = builder.args(params);
        }

        return builder.build().resolveMessage(messageSource, locale).getText();
    }
}
```

Usage on a custom JSR 303 constraint

Let’s say that we create a new JSR 303 validation annotation, whose validator will check if a field is not blank. To use the custom Spring message interpolator, we need to declare a message in one of the properties files loaded by the Spring MessageSource; let’s declare it in ErrorMessages.properties:

```
{com.myproject.validation.NotBlank} Mandatory field
```

Best practice is to name the key of the message after the complete class name of our validation annotation. You are free to choose any key name you want, but it must be between the brackets {} to work.
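One detail the article leaves implicit is how the SpringMessageInterpolator gets registered with Bean Validation. Below is a minimal sketch using the standard javax.validation bootstrap API; wiring it through Spring’s LocalValidatorFactoryBean would also work, and the way the Spring-managed MessageSource gets injected into the interpolator is assumed to be handled elsewhere.

```java
import javax.validation.Validation;
import javax.validation.Validator;
import javax.validation.ValidatorFactory;

public class ValidatorBootstrap {

    // Builds a Validator that resolves constraint messages through our custom interpolator.
    public static Validator buildValidator(SpringMessageInterpolator interpolator) {
        ValidatorFactory factory = Validation.byDefaultProvider()
                .configure()
                .messageInterpolator(interpolator)
                .buildValidatorFactory();
        return factory.getValidator();
    }
}
```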
Our custom annotation will look like the one below:

```java
@Target({ElementType.METHOD, ElementType.FIELD, ElementType.ANNOTATION_TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Constraint(validatedBy = NotBlankValidator.class)
public @interface NotBlank {

    String message() default "{com.myproject.validation.NotBlank}";

    Class<?>[] groups() default {};

    String[] params() default {};

    Class<? extends Payload>[] payload() default {};
}
```

Please verify that the default value of the message attribute is the same as the key you put in the property file. That’s it: now you can use the annotation like you normally do, and if you don’t provide a hardcoded message it will be loaded from the property file if it is declared there.

Reference: JSR 303 loading messages from an I18N property file from our JCG partner Idriss Mrabti at the Fancy UI blog.
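The @Constraint annotation above references a NotBlankValidator class that is not listed in the article; a minimal sketch of what such a validator could look like follows. Only the class name comes from the annotation, the implementation details are assumptions.

```java
import javax.validation.ConstraintValidator;
import javax.validation.ConstraintValidatorContext;

public class NotBlankValidator implements ConstraintValidator<NotBlank, String> {

    @Override
    public void initialize(NotBlank constraintAnnotation) {
        // No configuration needed for this simple check.
    }

    @Override
    public boolean isValid(String value, ConstraintValidatorContext context) {
        // Reject null, empty and whitespace-only values.
        return value != null && !value.trim().isEmpty();
    }
}
```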

Mule ESB, ActiveMQ and the DLQ

In this post I will show a simple Mule ESB flow to see the DLQ feature of ActiveMQ in action. I assume you have a running Apache ActiveMQ instance available (if not, you can download a version here). In this example I make use of Mule ESB 3.4.2 and ActiveMQ 5.9.0. We can create a simple Mule project based on the following pom file:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">

    <modelVersion>4.0.0</modelVersion>
    <groupId>net.pascalalma.demo</groupId>
    <artifactId>activemq-test-flow</artifactId>
    <packaging>mule</packaging>
    <name>${project.artifactId}</name>
    <version>1.0.0-SNAPSHOT</version>

    <properties>
        <mule.version>3.4.2</mule.version>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <jdk.version>1.7</jdk.version>
        <junit.version>4.9</junit.version>
        <activemq.version>5.9.0</activemq.version>
    </properties>

    <dependencies>
        <!-- Mule Dependencies -->
        <dependency>
            <groupId>org.mule</groupId>
            <artifactId>mule-core</artifactId>
            <version>${mule.version}</version>
        </dependency>
        <!-- Mule Transports -->
        <dependency>
            <groupId>org.mule.transports</groupId>
            <artifactId>mule-transport-jms</artifactId>
            <version>${mule.version}</version>
        </dependency>
        <dependency>
            <groupId>org.mule.transports</groupId>
            <artifactId>mule-transport-vm</artifactId>
            <version>${mule.version}</version>
        </dependency>
        <!-- Mule Modules -->
        <dependency>
            <groupId>org.mule.modules</groupId>
            <artifactId>mule-module-client</artifactId>
            <version>${mule.version}</version>
        </dependency>
        <dependency>
            <groupId>org.mule.modules</groupId>
            <artifactId>mule-module-scripting</artifactId>
            <version>${mule.version}</version>
        </dependency>
        <!-- for testing -->
        <dependency>
            <groupId>org.mule.tests</groupId>
            <artifactId>mule-tests-functional</artifactId>
            <version>${mule.version}</version>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>${junit.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.activemq</groupId>
            <artifactId>activemq-client</artifactId>
            <version>${activemq.version}</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>2.3.2</version>
                <configuration>
                    <source>${jdk.version}</source>
                    <target>${jdk.version}</target>
                    <encoding>${project.build.sourceEncoding}</encoding>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-resources-plugin</artifactId>
                <version>2.5</version>
                <configuration>
                    <encoding>${project.build.sourceEncoding}</encoding>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.mule.tools</groupId>
                <artifactId>maven-mule-plugin</artifactId>
                <version>1.9</version>
                <extensions>true</extensions>
                <configuration>
                    <copyToAppsDirectory>false</copyToAppsDirectory>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
```

There is not much special here. Besides the necessary dependencies I have added the maven-mule-plugin so I can create a ‘mule’ packaging type and run Mule from my IDE. With this Maven pom in place we can create the following two Mule configurations.
One for the Mule flow to test our transaction:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:scripting="http://www.mulesoft.org/schema/mule/scripting"
      version="EE-3.4.1"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="
        http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
        http://www.mulesoft.org/schema/mule/scripting http://www.mulesoft.org/schema/mule/scripting/current/mule-scripting.xsd">

    <flow name="MainFlow">
        <inbound-endpoint ref="event-queue" />
        <logger category="net.pascalalma.demo.MainFlow" level="INFO" message="Received message from activeMQ" />
        <scripting:component>
            <scripting:script engine="Groovy">
                throw new Exception('Soap Fault Response detected')
            </scripting:script>
        </scripting:component>
        <outbound-endpoint ref="result-queue" />
    </flow>
</mule>
```

In this flow we receive a message from the inbound endpoint, log a message and throw an exception before the message is put on the next queue. As you can see, I didn’t add any exception handler. The configuration of the endpoints and connectors looks like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:jms="http://www.mulesoft.org/schema/mule/jms"
      xmlns:spring="http://www.springframework.org/schema/beans"
      version="EE-3.4.1"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="
        http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
        http://www.mulesoft.org/schema/mule/jms http://www.mulesoft.org/schema/mule/jms/current/mule-jms.xsd">

    <spring:bean id="redeliveryPolicy" class="org.apache.activemq.RedeliveryPolicy">
        <spring:property name="maximumRedeliveries" value="5"/>
        <spring:property name="initialRedeliveryDelay" value="500"/>
        <spring:property name="maximumRedeliveryDelay" value="10000"/>
        <spring:property name="useExponentialBackOff" value="false"/>
        <spring:property name="backOffMultiplier" value="3"/>
    </spring:bean>

    <!-- ActiveMQ Connection factory -->
    <spring:bean id="amqFactory" class="org.apache.activemq.ActiveMQConnectionFactory" lazy-init="true">
        <spring:property name="brokerURL" value="tcp://localhost:61616" />
        <spring:property name="redeliveryPolicy" ref="redeliveryPolicy" />
    </spring:bean>

    <jms:activemq-connector name="activeMqConnector"
                            connectionFactory-ref="amqFactory"
                            persistentDelivery="true"
                            numberOfConcurrentTransactedReceivers="2"
                            specification="1.1" />

    <jms:endpoint name="event-queue" connector-ref="activeMqConnector" queue="event-queue">
        <jms:transaction action="ALWAYS_BEGIN" />
    </jms:endpoint>

    <jms:endpoint name="result-queue" connector-ref="activeMqConnector" queue="result-queue">
        <jms:transaction action="ALWAYS_JOIN" />
    </jms:endpoint>
</mule>
```

I defined a Spring bean for an ActiveMQ connection factory and one for the redelivery policy of this factory. With this redelivery policy we can configure how often Mule should retry to process a message from the queue when the original attempt failed. A nice feature in the redelivery policy is the ‘backOffMultiplier’ and ‘useExponentialBackOff’ combination. With these options you can have the period between two redelivery attempts increase exponentially until ‘maximumRedeliveryDelay’ is reached; for example, with an initial delay of 500 ms and a backOffMultiplier of 3, the delays grow roughly as 0.5 s, 1.5 s, 4.5 s and so on. Once that cap is hit, Mule will wait the ‘maximumRedeliveryDelay’ before the next attempt. So with these configurations in place we can create a Mule test class and run it.
The test class would look something like this:
package net.pascalalma.demo;

import org.junit.Test;
import org.mule.DefaultMuleMessage;
import org.mule.api.MuleMessage;
import org.mule.module.client.MuleClient;
import org.mule.tck.junit4.FunctionalTestCase;

public class TransactionFlowTest extends FunctionalTestCase {

    @Override
    protected String getConfigResources() {
        return "app/test-flow.xml, app/test-endpoints.xml";
    }

    @Test
    public void testError() throws Exception {
        MuleClient client = new MuleClient(muleContext);
        MuleMessage inMsg = new DefaultMuleMessage("<txt>Some message</txt>", muleContext);
        client.dispatch("event-queue", inMsg);
        // Give Mule the chance to redeliver the message
        Thread.sleep(4000);
    }
}
If we run this test we will see messages in the log like:
Exception stack is: 1. "Message with id "ID:Pascals-MacBook-Pro-2.local-59158-1406440948059-1:1:3:1:1" has been redelivered 3 times on endpoint "jms://event-queue", which exceeds the maxRedelivery setting of 0 on the connector "activeMqConnector". Message payload is of type: ActiveMQTextMessage (org.mule.transport.jms.redelivery.MessageRedeliveredException) org.mule.transport.jms.redelivery.JmsXRedeliveryHandler:87 (http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/transport/jms/redelivery/MessageRedeliveredException.html)
If we now switch to the ActiveMQ console, which can be reached at http://localhost:8161 for a default local installation, we can see the queues that have been created. As expected there are two of them: the event-queue, which is empty, and the default ActiveMQ.DLQ, which contains our message. As you can imagine it might be handy to have a specific DLQ for each queue instead of one DLQ which will contain all kinds of undeliverable messages. Luckily this is easy to configure in ActiveMQ. Just put the following in the ‘activemq.xml’ file that can be found in the ‘$ACTIVEMQ_HOME/conf’ folder:
<!-- Set the following policy on all queues using the '>' wildcard --> <policyEntry queue=">"> <deadLetterStrategy> <individualDeadLetterStrategy queuePrefix="DLQ." useQueueForQueueMessages="true" /> </deadLetterStrategy> </policyEntry>
If we now restart ActiveMQ, remove the existing queues and rerun our test, we see the following result: each queue has its own DLQ. For more options regarding these ActiveMQ settings see here. With the Mule flow created in this post it is easy to test and play with these settings. Reference: Mule ESB, ActiveMQ and the DLQ from our JCG partner Pascal Alma at The Pragmatic Integrator blog....
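To peek into the per-queue DLQ from code rather than from the web console, a plain JMS client along the following lines can be used. This is a minimal sketch that assumes the default broker URL and the ‘DLQ.’ prefix configured above; it relies only on the activemq-client dependency that is already in the pom:
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

public class DlqReader {

    public static void main(String[] args) throws Exception {
        // connect to the local broker, the same one used by the Mule flow
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // with the individualDeadLetterStrategy above, dead letters of 'event-queue' end up here
            MessageConsumer consumer = session.createConsumer(session.createQueue("DLQ.event-queue"));
            Message message = consumer.receive(5000); // wait at most 5 seconds
            if (message instanceof TextMessage) {
                System.out.println("Dead letter payload: " + ((TextMessage) message).getText());
            } else {
                System.out.println("No text message found on the DLQ: " + message);
            }
        } finally {
            connection.close();
        }
    }
}
Note that receiving the message consumes it from the DLQ; use a JMS QueueBrowser instead if you only want to inspect the dead letters and leave them in place.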

Developing Eclipse plugins

Recently I started working with a team on an Eclipse plugin. The team had developed an awesome plugin that serves its intended purpose, so I checked out the source and tried building it. The project source contained all the required libraries and it could only be built in Eclipse. In today’s world of continuous delivery this is a major impediment, as such a project cannot be built on Jenkins. The project not only contained the required libraries, but the complete Eclipse settings were also kept as part of the source, so I thought of improving this first. I created a pom.xml in the project and deleted the settings and libs. The build worked fine, but as soon as I opened the project in Eclipse it was a mess. Nothing worked there! It took some time to realize that Eclipse and Maven are two different worlds that do not converge easily. Even the smallest of things, like the artifact version and the bundle version, do not align easily. In Maven almost anything can be a version, e.g. 21-snapshot, but in Eclipse there are standards: it has to be of the form [number].[number].[number].qualifier, e.g. 1.1.21.qualifier.
Eclipse-Tycho
In order to bridge the gap between the two worlds, Sonatype has contributed Tycho to the Eclipse ecosystem. Add the plugin along with the Eclipse repository:
<repository> <id>juno</id> <layout>p2</layout> <url>http://download.eclipse.org/releases/juno</url> </repository><plugin> <groupId>org.eclipse.tycho</groupId> <artifactId>tycho-versions-plugin</artifactId> <version>0.18.1</version> </plugin><plugin> <groupId>org.eclipse.tycho</groupId> <artifactId>target-platform-configuration</artifactId> <version>0.18.1</version> <configuration> <pomDependencies>consider</pomDependencies> <environments> <environment> <os>linux</os> <ws>gtk</ws> <arch>x86_64</arch> </environment> </environments> </configuration> </plugin>
There are a few points to note here: if the plugin is for a specific Eclipse platform, the repository of that platform should be added; the plugin can use dependencies from the POM or from the MANIFEST.MF, and if the dependencies are used from the POM, pomDependencies has to be set (to consider, as in the configuration above). The Tycho plugin also brings along a set of plugins for version updates, surefire tests etc. The plugins can be invoked individually to perform different goals, e.g. the versions plugin can be used in the following manner to set versions:
mvn tycho-versions:set-version -DnewVersion=1.1.1-SNAPSHOT
This will set the 1.1.1-SNAPSHOT version in the POM and 1.1.1.qualifier in the MANIFEST.MF. While the plugins offer a lot, there are a few limitations as well. The plugin cannot generate proper Eclipse settings for PDE, so if we do not keep these settings we need to generate them again. A few other limitations are listed on the plugin page. After this we were able to bridge the two worlds in some sense: Maven builds which generate an Eclipse plugin were possible.
Plugin Classloaders
In Eclipse PDE there are plugins and fragments. Plugins are complete modules that offer some functionality, while a fragment is a module which attaches itself to a parent plugin, enhancing its capability. A plugin can thus attach any number of fragments, which enhance it at runtime. We had a base plugin which offered some basic features, and a fragment was built on top of it to use Hadoop 1.x in the plugin. After some time the requirement came to support Hadoop 2.x as well, but the two libraries are not compatible with each other.
Thus some workaround was required to enable this. Fortunately Eclipse, being OSGi based, has a different mechanism of loading classes compared to other Java applications. Usually there is a single classloader (or a hierarchy of classloaders) which loads the complete application; in such a case, if two incompatible jars are bundled, only one of them will be loaded. In Eclipse, however, each plugin has its own classloader which can load its own classes. This offers a couple of opportunities, like supporting different versions of the same library. This feature is extended to plugins only and not to fragments: fragments do not have their own classloaders and use the parent plugin’s classloader. We could have used the plugin classloader support, but the Hadoop libs were loaded by the fragment instead of the plugin. So we converted the fragment into a plugin, which required a complete refactoring of the existing codebase. After the Hadoop 1.x based plugin was formed, we could make more plugins for Hadoop 2.x. Each plugin loads its own set of classes. Now the only requirement is to have more PermGen space, as the complete plugin cannot be loaded into the default PermGen space. Reference: Developing Eclipse plugins from our JCG partner Rahul Sharma at The road so far… blog....
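To illustrate the classloading point, loading a class through a specific plugin looks roughly like the sketch below. The bundle id and the adapter class are made-up names standing in for the real Hadoop 1.x plugin:
import org.eclipse.core.runtime.Platform;
import org.osgi.framework.Bundle;

public class HadoopClientFactory {

    public static Object createHadoop1Client() throws Exception {
        // resolve the Hadoop 1.x plugin; the bundle id is a placeholder
        Bundle bundle = Platform.getBundle("com.example.hadoop1.support");
        // the class is resolved by that plugin's own classloader, so it can use
        // Hadoop 1.x jars even if another plugin ships incompatible Hadoop 2.x jars
        Class<?> adapterClass = bundle.loadClass("com.example.hadoop1.support.Hadoop1Adapter");
        return adapterClass.newInstance();
    }
}
Since each plugin resolves its Hadoop classes through its own classloader, the two library versions never meet in the same namespace, which is exactly what makes the plugin-per-Hadoop-version approach work.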

Smart Auto-PPR Change Event Policy

There is a common belief among ADF developers that setting the iterator binding change event policy to ppr is not a good thing in terms of performance, because this policy forces the framework to refresh all attribute bindings that are bound to this iterator on each request. That’s not true! The framework refreshes only attributes that have been changed during the request and attributes that depend on the changed attributes. Let’s consider a simple use case. There is a form whose iterator’s change event policy is set to ppr, which is the default in JDeveloper 11gR2 and 12c. The “First Name” and the “Last Name” fields are auto-submitted. The “Full Name” field is going to be calculated by concatenating the first and last names. So, in the setters of the first and last names we have a corresponding method call:
public void setLastname(String value) {
  setAttributeInternal(LASTNAME, value);
  setFullname(getFirstname() + " " + getLastname());
}
(The first name setter is analogous; a short sketch of it is included at the end of this post.) Let’s have a look at the response content generated by the framework once the “Last Name” has been entered. In response to the modified last name the framework is going to partially refresh only two input components – the last name and the full name. The full name is going to be refreshed because its value has been changed during the request. The rest of the components on the form don’t participate in the partial request. Let’s consider a slightly more complicated use case. We are going to show the value of the “Title” field as the label of the “Full Name” field on the form:
<af:inputText label="#{bindings.Title.inputValue}"
              value="#{bindings.Fullname.inputValue}"
              required="#{bindings.Fullname.hints.mandatory}"
              columns="#{bindings.Fullname.hints.displayWidth}"
              maximumLength="#{bindings.Fullname.hints.precision}"
              shortDesc="#{bindings.Fullname.hints.tooltip}" id="itFullName">
</af:inputText>
So, the label of the “Full Name” should be updated every time we select a title. The “Title” field is, of course, auto-submitted. Let’s have a look at the response content again: even though the value of the “Full Name” has not been changed during the request, the input component is going to be refreshed because its label property points to the value of a changed field. And again only these two fields are going to be refreshed during the partial request. That’s it! Reference: Smart Auto-PPR Change Event Policy from our JCG partner Eugene Fedorenko at the ADF Practice blog....
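For completeness, the first name setter mirrors the last name setter shown above (a sketch following the same pattern, assuming the usual generated getters and setters and a FIRSTNAME attribute constant):
public void setFirstname(String value) {
  setAttributeInternal(FIRSTNAME, value);
  // recalculate the full name whenever the first name changes as well
  setFullname(getFirstname() + " " + getLastname());
}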

Test your Dependencies with Degraph

I wrote before about (anti)patterns in package dependencies. And of course the regular reader of my blog knows about Degraph, my private project to provide a visualization of package dependencies, which can help a lot when you try to identify and fix such antipatterns. But instead of fixing a problem we all probably prefer preventing the problem in the first place. Therefore in the latest version Degraph got a new feature: a DSL for testing dependencies. You can write tests either in Scala or in Java, whatever fits better into your project. A typical test written with ScalaTest looks like this:
classpath // analyze everything found in the current classpath
  .including("de.schauderhaft.**") // only the classes that start with "de.schauderhaft."
  .withSlicing("module", "de.schauderhaft.(*).**") // use the third part of the package name as the module name, and make sure the modules don't have cycles
  .withSlicing("layer",
    ("persistence","de.schauderhaft.legacy.db.**"), // consider everything in the package de.schauderhaft.legacy.db and subpackages as part of the layer "persistence"
    "de.schauderhaft.*.(*).**") // for everything else use the fourth part of the package name as the name of the layer
  ) should be(violationFree) // check for violations (i.e. dependency circles)
The equivalent test code in Java and JUnit looks like this:
assertThat(
  classpath() // analyze everything found in the current classpath
    .including("de.schauderhaft.**") // only the classes that start with "de.schauderhaft."
    .withSlicing("module", "de.schauderhaft.(*).**") // use the third part of the package name as the module name, and make sure the modules don't have cycles
    .withSlicing("layer",
      new NamedPattern("persistence","de.schauderhaft.legacy.db.**"), // consider everything in the package de.schauderhaft.legacy.db and subpackages as part of the layer "persistence"
      "de.schauderhaft.*.(*).**") // for everything else use the fourth part of the package name as the name of the layer
  ),
  is(violationFree())
);
You can also constrain the ways different slices depend on each other. For example:
… .withSlicing("module", "de.schauderhaft.(*).**").allow(oneOf("order", "reporting"), "customer", "core") …
This means:
- stuff in de.schauderhaft.order may depend on de.schauderhaft.customer and de.schauderhaft.core
- the same is true for de.schauderhaft.reporting
- de.schauderhaft.customer may depend on de.schauderhaft.core
- all other dependencies between those packages are disallowed
- dependencies from and to other packages are allowed
If you also want to allow dependencies between the order slice and the reporting slice, replace oneOf with anyOf. If you want to disallow dependencies from reporting or order to core, you can replace allow with allowDirect. See the official documentation for more details, especially all the options the DSL offers, the imports needed and how to set up Degraph for testing. I’m trying to get Degraph into Maven Central to make usage inside projects easier. I also have some changes to the testing DSL on my to-do list. And finally I’m working on an HTML5 based front end. So stay tuned. Reference: Test your Dependencies with Degraph from our JCG partner Jens Schauder at the Schauderhaft blog....
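To spell out the last two variations with the same module slicing as above (a rough sketch based on the description, using only DSL calls already shown in this post):
// order and reporting may also depend on each other:
… .withSlicing("module", "de.schauderhaft.(*).**").allow(anyOf("order", "reporting"), "customer", "core") …

// dependencies from order or reporting straight to core are disallowed:
… .withSlicing("module", "de.schauderhaft.(*).**").allowDirect(oneOf("order", "reporting"), "customer", "core") …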

Replica Set Members in Mongodb

In the previous articles we have discussed many aspects of replica sets in MongoDB, and in those articles we have said a lot about members. So, what are these members? What is their purpose? Let us discuss these things in this article. What are members in MongoDB? In short, the members in MongoDB are the mongod processes that run as part of a replica set. In general there are only two kinds of members: Primary: As per mongodb.org, the primary member receives all write operations. Secondary: Secondaries replicate operations from the primary to maintain an identical data set. Secondaries may have additional configurations for special usage profiles. We can use a maximum of 12 members in a replica set, of which only 7 can vote. So now the question arises: why do members need to vote? Selection of the primary member: Whenever a replica set is initiated or the primary member is unreachable – or, in simple terms, whenever there is no primary member present – an election is held to choose a primary member from the secondary members. Although there are a few more member types than the two above, we will talk about them later. Primary member: The primary member is the only member in the replica set that receives write operations. MongoDB applies write operations on the primary and then records the operations in the primary’s oplog. All members of the replica set can accept read operations, but by default an application directs its read operations to the primary member. The replica set can have at most one primary. In the following three-member replica set, the primary accepts all write operations and the secondaries replicate the oplog to apply to their data sets. Secondary member: A secondary member maintains a copy of the primary’s data set. To replicate data, a secondary applies operations from the primary’s oplog to its own data set in an asynchronous process. A replica set can have one or more secondary members. Data can’t be written to a secondary, but data can be read from secondary members (a short Java driver sketch of reading from secondaries is included at the end of this post). In case of the primary member’s absence a secondary member can become primary through an election. In the following three-member replica set, the secondary members replicate the primary’s data set. Hidden members: Besides the two member types above, there are other kinds of members in a replica set. One of them is the hidden member. Hidden members cannot become primary and are invisible to client applications, but they do vote in elections. Hidden members are good for workloads with different usage patterns from the other members in the replica set. They must always be priority 0 members and so they cannot become primary. The most common use of hidden nodes is to support delayed members. To configure a secondary member as hidden, set its priority value to 0 and set its hidden value to true in its member configuration. To do so, use the following sequence in a mongo shell connected to the primary, specifying the member to configure by its array index in the members array:
c.members[0].priority = 0
c.members[0].hidden = true
Delayed member: Another member type is the delayed member. Delayed members also copy data from the primary’s data set, but, as the name suggests, the copy lags behind the actual data by a configured amount of time. For example, if the current time is 09:00 and a member has a delay of an hour, the delayed member has no operation more recent than 08:00.
Because delayed members are a “rolling backup” or a running “historical” snapshot of the data set, they may help to recover from various kinds of human error. Delayed members apply operations from the oplog on a delay. The delay must be equal to or greater than your maintenance window, and it must be smaller than the capacity of the oplog (see the MongoDB documentation for more information on oplog size). To configure a delayed secondary member, set its priority value to 0, its hidden value to true, and its slaveDelay value to the number of seconds to delay:
c.members[0].priority = 0
c.members[0].hidden = true
c.members[0].slaveDelay = 1200
Non-voting members: There is a lot of talk about elections in replica sets. In an election only some of the members participate and cast votes to determine the primary member, but there are also a few members who do not participate in voting. These members are called non-voting members. Non-voting members allow you to add additional members for read distribution beyond the maximum of seven voting members. To configure a member as non-voting, set its votes value to 0.
c.members[5].votes = 0
c.members[4].votes = 0
Priority in members: These are the basic member types that we have to keep in mind for replica sets. Along the way we have already seen the priority setting several times, and priority is indeed a very important thing to discuss. The priority settings of replica set members affect the outcomes of elections for primary. The value of the member’s priority setting determines the member’s priority in elections: the higher the number, the higher the priority. Configuring member priority: To modify priorities, we have to update the members array in the replica configuration object. The value of priority can be any floating point number between 0 and 1000, and the default value for the priority field is 1. Adjust priority during a scheduled maintenance window, because reconfiguring priority can force the current primary to step down, leading to an election. To block a member from seeking election as primary, assign that member a priority of 0. We can configure priority in three simple steps:
1. Copy the replica set configuration to a variable. In the mongo shell, use rs.conf() to retrieve the replica set configuration and assign it to a variable. For example: c = rs.conf()
2. Change each member’s priority value, as configured in the members array: c.members[0].priority = 0.5 c.members[1].priority = 2 This sequence of operations modifies the value of c to set the priority for the first two members defined in the members array.
3. Assign the replica set the new configuration. Use rs.reconfig() to apply the new configuration: rs.reconfig(c)
In this article we have discussed the members of a replica set. We can now say that members are the very basis of replica sets: the more we get to know about the members, the better control we will have when handling data in MongoDB. Reference: Replica Set Members in Mongodb from our JCG partner Biswadeep Ghosh at the Phlox Blog....
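As mentioned above, secondaries can serve read operations while all writes go to the primary. With the MongoDB Java driver this looks roughly like the following sketch (2.x driver style; the host names and the database and collection names are placeholders):
import java.util.Arrays;

import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.MongoClient;
import com.mongodb.ReadPreference;
import com.mongodb.ServerAddress;

public class ReplicaSetReadDemo {

    public static void main(String[] args) throws Exception {
        // seed list with several replica set members; the driver discovers the rest
        MongoClient client = new MongoClient(Arrays.asList(
                new ServerAddress("mongo1.example.com", 27017),
                new ServerAddress("mongo2.example.com", 27017),
                new ServerAddress("mongo3.example.com", 27017)));
        try {
            DB db = client.getDB("test");
            DBCollection people = db.getCollection("people");
            // route reads of this collection to secondary members; writes still go to the primary
            people.setReadPreference(ReadPreference.secondary());
            System.out.println("documents: " + people.count());
        } finally {
            client.close();
        }
    }
}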