
What's New Here?


Apache OpenOffice just graduated from the Incubator

Apache OpenOffice has just made it out of the Incubator and is now an official Apache Software Foundation project. "What?", some people might ask now, "wasn't it official a year or so ago?". No, it wasn't! When Oracle decided to donate OpenOffice.org to the Apache Software Foundation, it first entered the so-called Incubator. That was back in June 2011. And as an incubating project it was not yet official. Actually, it was hard work to make it an official ASF project. Let me explain what happened.

What happens when a project incubates? When a project wants to join the Apache Software Foundation, there are many open questions. Who wrote the code? Does the project really own all of its intellectual property? Which license does the code use? Is there a working community? Usually a couple of long-term Apache activists join the project as mentors. In the case of OpenOffice, a number of well-known and respected community members were involved, for example Jim Jagielski (ASF President), Sam Ruby (who has so many roles at the ASF that it is said "Sam Ruby" does not refer to a person but to a whole team), Ross Gardler (also on the ASF board), Shane Curcuru (ASF trademark expert), Joe Schaefer (one of the ASF infrastructure gurus), Danese Cooper (better read her Wikipedia entry) and Noirin Plunkett, who is also an officer of the ASF. Oh, and me – the only one without a Wikipedia entry. You can imagine how excited I was to see so many experienced people joining as mentors. Of course you can learn a lot from them, and that is what I did. As a mentor you not only have the chance to look at the gory details of an incubation – you have the duty to do so. Only when the project is "running" like an Apache project – often referred to as the Apache Way, which describes core values like "being open" – will it graduate from the Incubator and become an official top-level project. You can then be assured that the licensing problems are gone and the project has clean IP.

OpenOffice and some of its issues

The mentors look at all these questions and advise the project on how to solve them. Mentors usually say things like: "you cannot use dependency $a, because it uses license $x; these are not compatible." They say it because the Apache Software Foundation only releases code licensed under the Apache License. Oracle's OpenOffice.org had a lot of dependencies, and some were GPL'ed. The GPL follows a different philosophy, and unfortunately the two licenses are not fully compatible. One of the first hurdles was to make sure that everything published by the OpenOffice project is compatible with the Apache License. If you have ever worked on huge projects, you know how painful it can be to check every single dependency you might use. Mentors also look at the community. In the case of OpenOffice, there was a totally different style of "project management". It was – more or less – leadership based. But at the ASF there are no "real" leaders; there is no leader role at all. There are people who do stuff, and when they do stuff, they somehow lead it. In the end the project agrees or disagrees with votes. We call that a do-ocracy (or so). But there is never, ever one person who can decide what will happen and when. The Apache style is not for everybody, but I am glad to say that many, many people on this project changed their way of working without much pain. The community of OpenOffice is huge. It was overwhelmingly huge.
There were parts of OpenOffice that required some special thought, like the official OpenOffice forums. These forums once ran more or less independently, but now they were about to become part of the project. In other words: the people who were moderating and administrating the forums needed to become Apache committers, even if they would never write a single line of code. It is often misunderstood that you need to write code to join a project as a committer. This is not true. Apache projects are usually glad about every contribution and will respect you for it. If you write docs, you can join. If you are active as a supporter on the mailing lists, you can also join. We had to do a lot of work to integrate the forum people into the OpenOffice community, and this community into the Apache community. There were language barriers and concerns. I mean: some folks just wanted to keep posting in the forums as always. Why did they need to sign a CLA? Well, because we care about the IP, and because we want them to join our community – fully. Besides, we had never run forums at the ASF before. How should we operate them? But there were some great volunteers who succeeded with this job. This is how it is at Apache: we are one community. "Community over code", it is often said. With this incubation we had to bring a fully fledged community into ours. We needed to mentor without being arrogant. I hope it worked out that way (I doubt everybody will agree), but it was difficult. The folks of OpenOffice needed to bend more than we did. We more or less changed some infrastructure things, like running the forums on our servers, but the OpenOffice community needed to change the way they operate. For that, all the people involved have my deepest respect. When two communities grow together and one community cannot move as far as the other, there are often misunderstandings and, of course, hurt feelings. But in this short time (since June 2011!) it worked out. Here is a great quote from the official announcement: "The OpenOffice graduation is the official recognition that the project is now able to self-manage not only in technical matters, but also in community issues," said Andrea Pescetti, Vice President of Apache OpenOffice. "The 'Apache Way' and its methods, such as taking every decision in public with total transparency, have allowed the project to attract and successfully engage new volunteers, and to elect an active and diverse Project Management Committee that will be able to guarantee a stable future to Apache OpenOffice." Yup, that's it.

The first release

It was not only impressive to see the community grow. One of the most impressive things I have ever seen was that the OpenOffice people – surrounded by naysayers and other destructive elements – simply did what they liked: they made a new release. With a completely new infrastructure. With brand new requirements. With mentors at their backs. And with a growing and successful LibreOffice community on the other side. But they kept going and finally they made it. For a project of this size and with these restrictions, I can only say: "wow guys, that was incredible." Check out their releases here: openoffice.apache.org. 20 million other people have done so since the first release came out in May 2012!

And what next?

Incubation is over. My role in this project is done. OpenOffice is now self-governing, and they totally deserved it.
Now they can say they are an official project, and users can use software that is guaranteed to ship under the permissive Apache License 2.0. This makes it possible to use it in your own products. There will be some tasks to be done after graduation, but these are just small steps. Graduation is important from a psychological point of view; from a technical point of view it means some redirections and then heading on to the next release. In any case, I was glad to get such great insight, even though it took a huge amount of my energy. Somehow I am glad to unsubscribe, but somehow I will miss this exciting project. Anyway, thanks guys for letting me learn so much, and I wish you all the best for the future. I now think it is a bright one.

At our conference: did you know there are a couple of great OpenOffice talks at ApacheCon EU? Reference: Apache OpenOffice just graduated from the Incubator from our JCG partner Christian Grobmeier at the PHP und Java Entwickler blog....

Spring MVC for Atom Feeds

How do you add feeds (Atom) to your web application with just two classes? How about Spring MVC? Here are my assumptions: you are using the Spring framework; you have some entity, say "News", that you want to publish in your feeds; your News entity has creationDate, title, and shortDescription; you have some repository/DAO, say NewsRepository, that will return the news from your database; you want to write as little as possible; and you don't want to format Atom (XML) by hand. You actually do NOT need to be using Spring MVC in your application already. If you do, skip to step 3.

Step 1: add the Spring MVC dependency to your application. With Maven that is:

<dependency> <groupId>org.springframework</groupId> <artifactId>spring-webmvc</artifactId> <version>3.1.0.RELEASE</version> </dependency>

Step 2: add the Spring MVC DispatcherServlet. With web.xml that would be:

<servlet> <servlet-name>dispatcher</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> <init-param> <param-name>contextConfigLocation</param-name> <param-value>classpath:spring-mvc.xml</param-value> </init-param> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>dispatcher</servlet-name> <url-pattern>/feed</url-pattern> </servlet-mapping>

Notice that I set the url-pattern to "/feed", which means I don't want Spring MVC to handle any other URLs in my app (I'm using a different web framework for the rest of the app). I also give it a brand new contextConfigLocation, where only the MVC configuration is kept. Remember that when you add a DispatcherServlet to an app that already has Spring (from a ContextLoaderListener, for example), your context is inherited from the global one, so you should not create beans that exist there again, or include XML that defines them. Watch out for the Spring context being started twice, and refer to the Spring or Servlet documentation to understand what's happening.

Step 3: add ROME, a library to handle the Atom format. With Maven that is:

<dependency> <groupId>net.java.dev.rome</groupId> <artifactId>rome</artifactId> <version>1.0.0</version> </dependency>

Step 4.
write your very simple controller @Controller public class FeedController { static final String LAST_UPDATE_VIEW_KEY = 'lastUpdate'; static final String NEWS_VIEW_KEY = 'news'; private NewsRepository newsRepository; private String viewName;protected FeedController() {} //required by cglibpublic FeedController(NewsRepository newsRepository, String viewName) { notNull(newsRepository); hasText(viewName); this.newsRepository = newsRepository; this.viewName = viewName; }@RequestMapping(value = '/feed', method = RequestMethod.GET) @Transactional public ModelAndView feed() { ModelAndView modelAndView = new ModelAndView(); modelAndView.setViewName(viewName); List<News> news = newsRepository.fetchPublished(); modelAndView.addObject(NEWS_VIEW_KEY, news); modelAndView.addObject(LAST_UPDATE_VIEW_KEY, getCreationDateOfTheLast(news)); return modelAndView; }private Date getCreationDateOfTheLast(List<News> news) { if(news.size() > 0) { return news.get(0).getCreationDate(); } return new Date(0); } } And here’s a test for it, in case you want to copy&paste (who doesn’t?): @RunWith(MockitoJUnitRunner.class) public class FeedControllerShould { @Mock private NewsRepository newsRepository; private Date FORMER_ENTRY_CREATION_DATE = new Date(1); private Date LATTER_ENTRY_CREATION_DATE = new Date(2); private ArrayList<News> newsList; private FeedController feedController;@Before public void prepareNewsList() { News news1 = new News().title('title1').creationDate(FORMER_ENTRY_CREATION_DATE); News news2 = new News().title('title2').creationDate(LATTER_ENTRY_CREATION_DATE); newsList = newArrayList(news2, news1); }@Before public void prepareFeedController() { feedController = new FeedController(newsRepository, 'viewName'); }@Test public void returnViewWithNews() { //given given(newsRepository.fetchPublished()).willReturn(newsList); //when ModelAndView modelAndView = feedController.feed(); //then assertThat(modelAndView.getModel()) .includes(entry(FeedController.NEWS_VIEW_KEY, newsList)); }@Test public void returnViewWithLastUpdateTime() { //given given(newsRepository.fetchPublished()).willReturn(newsList);//when ModelAndView modelAndView = feedController.feed();//then assertThat(modelAndView.getModel()) .includes(entry(FeedController.LAST_UPDATE_VIEW_KEY, LATTER_ENTRY_CREATION_DATE)); }@Test public void returnTheBeginningOfTimeAsLastUpdateInViewWhenListIsEmpty() { //given given(newsRepository.fetchPublished()).willReturn(new ArrayList<News>());//when ModelAndView modelAndView = feedController.feed();//then assertThat(modelAndView.getModel()) .includes(entry(FeedController.LAST_UPDATE_VIEW_KEY, new Date(0))); } }Notice: here, I’m using fest-assert and mockito. The dependencies are: <dependency> <groupId>org.easytesting</groupId> <artifactId>fest-assert</artifactId> <version>1.4</version> <scope>test</scope> </dependency> <dependency> <groupId>org.mockito</groupId> <artifactId>mockito-all</artifactId> <version>1.8.5</version> <scope>test</scope> </dependency>Step 5. write your very simple view Here’s where all the magic formatting happens. Be sure to take a look at all the methods of Entry class, as there is quite a lot you may want to use/fill. 
import org.springframework.web.servlet.view.feed.AbstractAtomFeedView; [...]public class AtomFeedView extends AbstractAtomFeedView { private String feedId = 'tag:yourFantastiSiteName'; private String title = 'yourFantastiSiteName: news'; private String newsAbsoluteUrl = 'http://yourfanstasticsiteUrl.com/news/';@Override protected void buildFeedMetadata(Map<String, Object> model, Feed feed, HttpServletRequest request) { feed.setId(feedId); feed.setTitle(title); setUpdatedIfNeeded(model, feed); }private void setUpdatedIfNeeded(Map<String, Object> model, Feed feed) { @SuppressWarnings('unchecked') Date lastUpdate = (Date)model.get(FeedController.LAST_UPDATE_VIEW_KEY); if (feed.getUpdated() == null || lastUpdate != null || lastUpdate.compareTo(feed.getUpdated()) > 0) { feed.setUpdated(lastUpdate); } }@Override protected List<Entry> buildFeedEntries(Map<String, Object> model, HttpServletRequest request, HttpServletResponse response) throws Exception { @SuppressWarnings('unchecked') List<News> newsList = (List<News>)model.get(FeedController.NEWS_VIEW_KEY); List<Entry> entries = new ArrayList<Entry>(); for (News news : newsList) { addEntry(entries, news); } return entries; }private void addEntry(List<Entry> entries, News news) { Entry entry = new Entry(); entry.setId(feedId + ', ' + news.getId()); entry.setTitle(news.getTitle()); entry.setUpdated(news.getCreationDate()); entry = setSummary(news, entry); entry = setLink(news, entry); entries.add(entry); }private Entry setSummary(News news, Entry entry) { Content summary = new Content(); summary.setValue(news.getShortDescription()); entry.setSummary(summary); return entry; }private Entry setLink(News news, Entry entry) { Link link = new Link(); link.setType('text/html'); link.setHref(newsAbsoluteUrl + news.getId()); //because I have a different controller to show news at http://yourfanstasticsiteUrl.com/news/ID entry.setAlternateLinks(newArrayList(link)); return entry; }}Step 6. add your classes to your Spring context I’m using xml approach. because I’m old and I love xml. No, seriously, I use xml because I may want to declare FeedController a few times with different views (RSS 1.0, RSS 2.0, etc.). So this is the forementioned spring-mvc.xml <?xml version='1.0' encoding='UTF-8'?> <beans xmlns='http://www.springframework.org/schema/beans' xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xsi:schemaLocation='http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd'><bean class='org.springframework.web.servlet.view.ContentNegotiatingViewResolver'> <property name='mediaTypes'> <map> <entry key='atom' value='application/atom+xml'/> <entry key='html' value='text/html'/> </map> </property> <property name='viewResolvers'> <list> <bean class='org.springframework.web.servlet.view.BeanNameViewResolver'/> </list> </property> </bean><bean class='eu.margiel.pages.confitura.feed.FeedController'> <constructor-arg index='0' ref='newsRepository'/> <constructor-arg index='1' value='atomFeedView'/> </bean><bean id='atomFeedView' class='eu.margiel.pages.confitura.feed.AtomFeedView'/> </beans> And you are done. I’ve been asked a few times before to put all the working code in some public repo, so this time it’s the other way around. I’ve describe things that I had already published, and you can grab the commit from the bitbucket. Reference: Atom Feeds with Spring MVC from our JCG partner Jakub Nabrdalik at the Solid Craft blog....
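One thing the article assumes but never shows is the News entity itself (a title, a creationDate, a shortDescription and the fluent setters used in the test). For completeness, here is a minimal sketch of what such a class might look like; this is my own guess at its shape, not the author's actual code, so treat the field list and method names as assumptions.

import java.util.Date;

// Hypothetical News entity, shaped after how the controller, view and test use it.
public class News {

    private Long id;
    private String title;
    private String shortDescription;
    private Date creationDate;

    // fluent setters, as used in the test: new News().title("title1").creationDate(date)
    public News title(String title) { this.title = title; return this; }
    public News creationDate(Date creationDate) { this.creationDate = creationDate; return this; }

    public Long getId() { return id; }
    public String getTitle() { return title; }
    public String getShortDescription() { return shortDescription; }
    public Date getCreationDate() { return creationDate; }
}

In a real application this would typically also be a JPA entity with an identifier mapping, but that depends on the persistence setup behind NewsRepository.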

Generate QR Code image from Java Program

If you are tech and gadget savvy, then you must be aware of QR codes. You will find it everywhere these days – in blogs, websites and even in some public places. This is very popular in mobile apps, where you scan the QR code using a QR Code scanner app and it will show you the text or redirect you to the web page if it’s URL. I came across this recently and found it very interesting. If you want to know about QR Code, you can find a lot of useful information at Wikipedia QR Code Page.When I found these kind of images in so many websites then I started looking how to generate it using Java Code. I looked into some APIs available as open source in the market and found zxing to be the simplest and best to use. Here is the program you can use to create QR Code image with zxing API. package com.adly.generator;import java.awt.Color; import java.awt.Graphics2D; import java.awt.image.BufferedImage; import java.io.File; import java.io.IOException; import java.util.Hashtable;import javax.imageio.ImageIO;import com.google.zxing.BarcodeFormat; import com.google.zxing.EncodeHintType; import com.google.zxing.WriterException; import com.google.zxing.common.BitMatrix; import com.google.zxing.qrcode.QRCodeWriter; import com.google.zxing.qrcode.decoder.ErrorCorrectionLevel;public class GenerateQRCode {/** * @param args * @throws WriterException * @throws IOException */ public static void main(String[] args) throws WriterException, IOException { String qrCodeText = 'http://www.journaldev.com'; String filePath = 'D:\\Pankaj\\JD.png'; int size = 125; String fileType = 'png'; File qrFile = new File(filePath); createQRImage(qrFile, qrCodeText, size, fileType); System.out.println('DONE'); }private static void createQRImage(File qrFile, String qrCodeText, int size, String fileType) throws WriterException, IOException { // Create the ByteMatrix for the QR-Code that encodes the given String Hashtable hintMap = new Hashtable(); hintMap.put(EncodeHintType.ERROR_CORRECTION, ErrorCorrectionLevel.L); QRCodeWriter qrCodeWriter = new QRCodeWriter(); BitMatrix byteMatrix = qrCodeWriter.encode(qrCodeText, BarcodeFormat.QR_CODE, size, size, hintMap); // Make the BufferedImage that are to hold the QRCode int matrixWidth = byteMatrix.getWidth(); BufferedImage image = new BufferedImage(matrixWidth, matrixWidth, BufferedImage.TYPE_INT_RGB); image.createGraphics();Graphics2D graphics = (Graphics2D) image.getGraphics(); graphics.setColor(Color.WHITE); graphics.fillRect(0, 0, matrixWidth, matrixWidth); // Paint and save the image using the ByteMatrix graphics.setColor(Color.BLACK);for (int i = 0; i < matrixWidth; i++) { for (int j = 0; j < matrixWidth; j++) { if (byteMatrix.get(i, j)) { graphics.fillRect(i, j, 1, 1); } } } ImageIO.write(image, fileType, qrFile); }} Here is the QR Code image file created by this program. You can use your mobile QR Code scanner app to test it. It should point to JournalDev Home URL.If you don’t have a mobile app to test it, don’t worry. You can test it with zxing API through command line too. I am on Windows OS and here is the command to test it. If you are on Unix/Linux/Mac OS then change it accordingly. D:\Pankaj\zxing>java -cp javase\javase.jar;core\core.jar com.google.zxing.client.j2se.CommandLineRunner D:\Pankaj\JD.png file:/D:/Pankaj/JD.png (format: QR_CODE, type: URI): Raw result:http://www.journaldev.comParsed result:http://www.journaldev.comFound 4 result points. 
Point 0: (35.5,89.5) Point 1: (35.5,35.5) Point 2: (89.5,35.5) Point 3: (80.5,80.5)

Tip for dynamic QR code generation: if you want to generate a QR code dynamically, you can do it using the Google Chart Tools. For the scenario above, the URL would be https://chart.googleapis.com/chart?chs=125x125&cht=qr&chl=http://www.journaldev.com Happy coding and don't forget to share! Reference: Generate QR Code image from Java Program from our JCG partner Pankaj Kumar at the Developer Recipes blog....
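If you prefer to build that chart URL from Java rather than hard-coding it, a small helper could look like the sketch below. This is illustrative code of my own (the class and method names are hypothetical and not part of the original program); it only uses java.net.URLEncoder from the standard library to encode the payload.

import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class QRChartUrlBuilder {

    // chs = chart size, cht=qr = chart type, chl = the data to encode
    public static String buildQrUrl(String content, int size) throws UnsupportedEncodingException {
        return "https://chart.googleapis.com/chart?chs=" + size + "x" + size
                + "&cht=qr&chl=" + URLEncoder.encode(content, "UTF-8");
    }

    public static void main(String[] args) throws UnsupportedEncodingException {
        System.out.println(buildQrUrl("http://www.journaldev.com", 125));
    }
}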

Top 7 tips for succeeding in a technical interview for software engineers

In this post I would like to write about how to succeed in a technical interview, based on my experience as an interviewer. Most interviews follow certain patterns. If you understand them and frame your responses accordingly, you can clear any interview. If you don't know the material this might not help you, but if you are prepared, this article will help you show your full potential. If you are skillful, the only reason you can lose an interview is lack of preparation. You may know all the material, but you still need to prepare by reading books, articles and so on. These may not teach you anything new, but they will help you organize the things you already know, and once you have organized information it is really easy to access it. You should read not only for interviews; make it a practice and get better at your job. Most of the time the interviewer is looking for a candidate who can work with him. The vacancy may be in another team, but they use this parameter to judge. This article mostly contains general tips, targeted at candidates with 2 to 6 years of experience.

1. Be honest and don't bluff. Answer what you know, confidently. If you have been asked a question that you don't know, start by saying "I am not sure, but I think it is…". Never give a wrong answer confidently; that will make them doubt your correct answers as well, or feel that they were guesses. You can't use this technique for every question, but I would say 25% is a reasonable amount. Most importantly, this shows your ability to think and a never-give-up attitude. No one wants to work with people who say "I can't do this". Try to do something with every question.

2. Be ready to write code. If you are asked to write some code, be careful and follow some basic standards. I have heard people tell me "I forgot the syntax…", and that for the syntax of a for loop. No one expects you to remember everything, but basics like loops, if conditions, the main method and exceptions are never to be forgotten. If you did forget them, brush them up. Always write code with good indentation and plenty of white space. That might make up for your bad handwriting!

3. Get ready to explain your project. As engineers, we have to understand the business before we start to code it, so you should be able to explain what is being done in your project. Write down 3-4 lines that explain the project at a high level. By hearing them, someone outside your team should get an idea of what it is about. Because we always work inwards on features, it is often difficult to frame these sentences. Check how your client markets the product in their internal communications and get some clues from that. Practice what you are going to say with friends and make sure you get to the point. Once you have explained the business needs, you will be asked about the technical architecture of the project. Be prepared with an architecture diagram that shows how the components in your project interact. It doesn't have to be in any specific UML format, but make sure you can explain things in relation to the diagram you have drawn. For example, if you are working on a web application, show how the data flows from the UI to the DB. You can show the different layers involved, the technologies used, and so on. The most important part is that you should be clear in your mind about what you are currently working on.

4. Convert arguments to conversation. Even if you know the other person is wrong, do not argue; keep the conversation going by saying something like "OK, but I am not so sure if that is correct, I will check it out".
This keeps the conversation on good terms. Be an active listener during the interview and refer to your own experience when you are answering.

5. Be prepared for the WHY question. Good interviewers focus on the question "Why?". It might start with "What?" but it will end in "Why?". For example, in Java a typical question would be "What is the difference between String and StringBuffer?". A follow-up why question would be "Why does String behave so-and-so?" or "How is it done?". Be ready to give inside information by answering the "How?" and "Why?" parts of the question.

6. Tell them about your best achievement. During your work there might be something that you consider your best achievement. It is important to describe it in such a way that the interviewer feels you did something extraordinary there. So prepare a believable story on how your abilities helped you complete that task. It is important to prepare this in advance, because it takes time to dig through your memory and find such situations.

7. "Do you have any questions for me?" This question gets repeated in every single interview. Here you don't actually care about the answers; rather, you should make yourself look good by asking "smart" questions. This article will help you with that. Reference: Top 7 tips for succeeding in a technical interview for software engineers from our JCG partner Manu PK at the The Object Oriented Life blog....

Clean code with aspects

In my previous post I described the alphabet conversion, and I mentioned that we used AspectJ to solve that task, but I did not explain how AspectJ works or what aspects are in general. So in the next few lines I will explain: what Aspect Oriented Programming is and why we need it; what AspectJ is; how to use AspectJ with Spring (configuring AspectJ and Spring to work together); and I will explain aspects using the example from the previous post.

What is Aspect Oriented Programming and why do we need it? During software development we can use different programming paradigms such as OOP (object oriented programming) or POP (procedural oriented programming). Today most of us use object-oriented methodologies to solve real-life problems during the software development process. But during our work we constantly meet code that cuts across our code base, breaking its modularity and making it dirty. This kind of code usually has no business value, but we need it to solve our problems. Take database transactions as an example. Transactions are very important for our software because they take care of data consistency. The code that starts and handles a transaction is very important for our application, but it deals with purely technical concerns (starting, committing and rolling back transactions). These things make it difficult to understand what the code really means (to see the real business value of the code). Of course, I will not give an example of how to handle transactions using aspects, because there are plenty of frameworks that take care of transactions for us. I only mentioned transactions because you probably know how to insert data into a database using the plain JDBC API. To make our code cleaner we use design patterns, which is a good approach to problem solving. But sometimes the use of design patterns does not lead to an easy solution, and most of us resort to the easier way out, which produces "dirty" code. In such situations we should give the aspect-oriented approach a chance. We should not think of AOP as something totally new; we should think of it as a complement to OOP. AOP is there to make code modularization easier, to make code cleaner, and to let us understand more quickly what some part of the application does. AOP introduces a few new concepts that allow easier code modularization. If we want to use aspects efficiently we need to know their basic principles and terminology. When we start with AOP we meet some new terms: a crosscutting concern is code that should be moved into a separate module (e.g. code for handling transactions); an aspect is the module that contains such concerns; a pointcut can be seen as a pointer that tells when the corresponding code should be run; an advice contains the code that runs when a join point is reached; an inter-type declaration allows modification of a class's structure; and aspect weaving is the mechanism that coordinates the integration of aspects with the rest of the system. I will show at the end what they are and how to use them in an example.

What is AspectJ? AspectJ is an extension of the Java programming language that allows the use of AOP concepts within Java. When you use AspectJ you do not need to make any changes to your existing code. AspectJ extends Java with a new construct called an aspect, and since AspectJ 5 you can use an annotation-based development style.
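Before getting to the full conversion aspect further down, here is a tiny, self-contained sketch (my own illustrative code, not from the project described in this article) that simply maps the terms above onto AspectJ 5 annotations: the annotated class is the aspect, the @Pointcut names the join points, and the @Before method is the advice. The com.example.service package is a made-up placeholder.

import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.annotation.Pointcut;

@Aspect // the module that bundles one crosscutting concern
public class LoggingAspect {

    // pointcut: "where/when" – every public method in a hypothetical service package
    @Pointcut("execution(public * com.example.service..*.*(..))")
    public void serviceMethods() {}

    // advice: "what" – the crosscutting code that runs at those join points
    @Before("serviceMethods()")
    public void logEntry() {
        System.out.println("Entering a service method");
    }
}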
AspectJ and Spring

The Spring framework already provides its own implementation of AOP. Spring AOP is a simpler solution than AspectJ, but it is not as robust. So if you want to use aspects in your Spring application, you should be familiar with the capabilities of Spring AOP before choosing AspectJ to do the work. Before we look at the example of using aspects, I will show you how to integrate AspectJ with Spring and how to configure Tomcat to be able to run an AspectJ application with Spring. In this example I used LTW (load-time weaving) of aspects, so I will start by explaining how to enable it from Spring. It is easy, just add the next line to your application configuration file:

<context:load-time-weaver aspectj-weaving="autodetect"/>

That is all that needs to be done in the Spring configuration. The next step is the configuration of Tomcat. We need to define a new class loader for the application. This class loader needs to be able to do load-time weaving, so we use:

<Loader loaderClass="org.springframework.instrument.classloading.tomcat.TomcatInstrumentableClassLoader" />

The loader needs to be on the Tomcat classpath before you can use it. Of course, in order to make this work, we also need to create the aop.xml file. This file contains the instructions used by the class loader during the class transformation process. Here is the aop.xml file which I used for the alphabet conversion:

<aspectj> <weaver options="-Xset:weaveJavaxPackages=true"> <!-- only weave classes in our application-specific packages --> <include within="ba.codecentric.medica.model.*" /> <include within="ba.codecentric.medica..*.service.*" /> <include within="ba.codecentric.medica.controller..*" /> <include within="ba.codecentric.medica.utils.ModelMapper" /> <include within="ba.codecentric.medica.utils.RedirectHelper" /> <include within="ba.codecentric.medica.aop.aspect.CharacterConvertionAspect" /> <include within="ba.codecentric.medica.security.UserAuthenticationProvider" /> <include within="ba.codecentric.medica.wraper.MedicaRequestWrapper"/> </weaver> <aspects> <!-- weave in just this aspect --> <aspect name="ba.codecentric.medica.aop.aspect.CharacterConvertionAspect" /> </aspects> </aspectj>

This last XML file is the most interesting one for those of you who are willing to try AspectJ. It drives the AspectJ weaving process. The weaver section contains information about what should be woven, so this file includes all classes inside: ba.codecentric.medica.model.*, ba.codecentric.medica..*.service.*, ba.codecentric.medica.controller..*, ba.codecentric.medica.utils.ModelMapper, ba.codecentric.medica.utils.RedirectHelper, ba.codecentric.medica.aop.aspect.CharacterConvertionAspect, ba.codecentric.medica.security.UserAuthenticationProvider and ba.codecentric.medica.wraper.MedicaRequestWrapper. The first line includes all classes inside the model package. The second one includes all classes that are part of service sub-packages inside the ba.codecentric.medica package (e.g. ba.codecentric.medica.hospitalisation.service). The third one includes everything below the controller package, and the remaining lines include the specified classes. The options attribute defines additional options used during the weaving process; in this example -Xset:weaveJavaxPackages=true instructs AspectJ to also weave the javax packages. The aspects section contains the list of aspects that will be used during the weaving process. For more information about configuration with XML, see the AspectJ documentation.
Example of usage AspectJI prefer usage of Annotation so the next example will show you how to use AspectJ with annotation. Annotation driven programming with AspectJ is possible from version AspectJ 5. Here is some code of a complete aspect which contains concerns used for alphabet conversion. package ba.codecentric.medica.aop.aspect; import java.util.List; import java.util.Map; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.aspectj.lang.ProceedingJoinPoint; import org.aspectj.lang.Signature; import org.aspectj.lang.annotation.Around; import org.aspectj.lang.annotation.Aspect; import ba.codecentric.medica.utils.CharacterConverter; import ba.codecentric.medica.utils.ContextHelper; import ba.codecentric.medica.utils.LanguageHelper; /** * Aspect used for transformation characters from one alphabet to another. * * @author igor * */ @Aspect public class CharacterConvertionAspect { private static Log LOG = LogFactory.getLog(CharacterConvertionAspect.class); public int getConvertTo() { return getLanguageHelper().getConvertTo(); } protected LanguageHelper getLanguageHelper() { return ContextHelper.getBean("languageHelper"); } public CharacterConvertionAspect() { LOG.info("Character converter aspect created"); } @SuppressWarnings("rawtypes") @Around("execution(public java.lang.String ba.codecentric.medica.model..*.get*(..)) && !cflow(execution(* ba.codecentric.medica.controller..*.*(..))) && !cflow(execution(public void ba.codecentric.medica..*.service..*.*(..))) && !cflow(execution(* ba.codecentric.medica.security.UserAuthenticationProvider.*(..)))") public Object convertCharacters(ProceedingJoinPoint pjp) throws Throwable { LOG.info("Character conversion trigered"); Object value = pjp.proceed(); if (value instanceof String) { LOG.info("Convert:" + value); Signature signature = pjp.getSignature(); Class type = signature.getDeclaringType(); String methodName = signature.getName(); Map<Class, List<string&lgt;&lgt; skipConvertionMap = getBlackList(); if(skipConvertionMap.containsKey(type)){ List<string&lgt; list = skipConvertionMap.get(type); if(list == null || list.contains(methodName)){ LOG.info("Value will not be converted because it is on blacklist"); return value; } } return getConverter().convertCharacters((String) value, getConvertTo()); } LOG.info("Convertion will not be performed (" + value + ")"); return value; } @Around("execution(public void ba.codecentric.medica.model..*.set*(java.lang.String))") public Object convertCharactersToLat(ProceedingJoinPoint pjp) throws Throwable { Object value = pjp.getArgs()[0]; LOG.info("Converting value:" + value + ", before persisting"); if (value instanceof String){ value= getConverter().convertCharacters((String)value, CharacterConverter.TO_LAT); } return pjp.proceed(new Object[]{value}); } /** * Convert parameter to Latin alphabet * * @param pjp * @return * @throws Throwable */ @Around("execution(public * ba.codecentric.medica.wraper.MedicaRequestWrapper.getParameter*(..))") public Object convertParametersToLat(ProceedingJoinPoint pjp) throws Throwable { Object value = pjp.proceed(); return getConverter().convert(value, CharacterConverter.TO_LAT); } /** * If result of the invocation is String, it should be converted to chosen alphabet. 
* * @param jp * @return converted value * @throws Throwable */ @Around("execution(* ba.codecentric.medica.controller..*.*(..))") public Object procedWithControllerInvocation(ProceedingJoinPoint jp) throws Throwable { Object value = jp.proceed(); return getConverter().convert(value, getConvertTo()); } public CharacterConverter getConverter() { return ContextHelper.getBean("characterConverter"); } @SuppressWarnings("rawtypes") public Map<Class,List<string&lgt;&lgt; getBlackList(){ return ContextHelper.getBean("blackList"); } }First of all we can see that class is annotated with @Aspect annotation. This indicate that this class is actually an aspect. Aspect is a construction which contains similar cross-cutting concerns. So we can look at it as a module which contains cross cutting code and define when which code will be used and how. @Around("execution(public void ba.codecentric.medica.model..*.set*(java.lang.String))") public Object convertCharactersToLat(ProceedingJoinPoint pjp) throws Throwable { Object value = pjp.getArgs()[0]; LOG.debug("Converting value:" + value + ", before persisting"); if (value instanceof String) { value = getConverter().convertCharacters((String) value, CharacterConverter.TO_LAT); } return pjp.proceed(new Object[] { value }); }This is a method which is annotated with @Around annotation. The around annotation is used to represent around advice. I have already mentioned that, advice is the place which contains a cross-cutting code. In this example I’ve only used “around” advice, but except that there is also before,after,after returning and after throwing advice. All advice except around should not have a return value. The content inside of around annotation define when code from advice will be weaved. This also can be done when we define pointcuts. In this example I did’t use pointcuts for defining join points because it’s simple aspect. With pointcut annotations you can define real robust join points. In this case advice will be executed during setting values of entity beans which have only one parameter of type String. ProcidingJoinPoint pjp, in the example above, present the join point, so for this example it is setter method of entity bean. Value of object send to the entity setter method will be first converted and then the setter method will be called with a converted value. If I didn’t use aspects, my code could look like: public void setJmbg(String jmbg) { this.jmbg = getConverter().convertCharacters(jmbg, CharacterConverter.TO_LAT); }I’ve already said that for this example I use LTW. So in the next few lines I will try to explain weaving process briefly. Weaving is process in which the class is transformed with defined aspect. In the next picture you can see illustration of weaving process.For better understanding of weaving, you can consider it as code injection around the calling method, in this case. ConclusionSo in this example I’ve just covered some basic principles of aspect programming with AspectJ. This aspect helped me to keep the code clean. The result of using aspect is clean separation of crossing-cut code and the code of real business value. The controllers, services and entity beans stayed clean and technical code is extracted in separate module which allow you to easier understand and maintain your code more easily. For more details information about defining pointcuts and general about AspectJ project you can see on the project page. Happy coding and don’t forget to share! 
Reference: Clean code with aspects from our JCG partner Igor Madjeric at the Igor Madjeric blog....
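To round off the AspectJ article above: it mentions that pointcut annotations allow more robust join point definitions than the inline expressions used in CharacterConvertionAspect. As a rough illustration (my own sketch, reusing the package names from the article, not code from the original project), one of the inline @Around expressions could be factored out into a named @Pointcut like this:

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;

@Aspect
public class NamedPointcutSketch {

    // named pointcut: every setter taking a single String on the entity beans
    @Pointcut("execution(public void ba.codecentric.medica.model..*.set*(java.lang.String))")
    public void entityStringSetters() {}

    // the advice refers to the named pointcut instead of repeating the expression
    @Around("entityStringSetters()")
    public Object aroundSetter(ProceedingJoinPoint pjp) throws Throwable {
        Object value = pjp.getArgs()[0];
        // the character conversion logic of the original aspect would go here
        return pjp.proceed(new Object[]{value});
    }
}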

Factory Design Pattern Case Study

I had a job to check our project code quality. And have to report it back to my team leader for any obstacle that i found in the project. I found a lot of leaks and i think would be good to be discussed on the blog. Not to mock the author, but to learn and improve ourselves together. Like this code, this is the part that i found in our code. public ContactInfoBean(final Reseller resellerInfo) {switch(resellerInfo.getType()) {case PROGRAM_CONTACT:readExecutiveInfo(resellerInfo);break;case FILE_CONTACT:readOperationalInfo(resellerInfo);break;default:break;}}The code works fine, and do its job pretty well. But some problem will appear by using this code-style. This class will grow tailing the biz changes, as usual, the bigger one class, the “merrier” to maintain it is. And most likely this class, will be having more than one purpose, can be called low-cohesion. Better OOP Approach Well the better approach for the case above would be using the Factory Design Pattern. We can let the factory of READER to generate every single instance according to their type. It would be easier to grow the instance type, since we just need to create a new class and do a little modification in the Factory class. The caller class, wont grow and will stand still at its current shape. public interface InfoReader {public void readInfo();} public class ExecutiveReader implements InfoReader {public void readInfo() {// override}} public class OperationalReader implements InfoReader {public void readInfo() {// override}}And The Factory public class InfoReaderFactory {private static final int PROGRAM_CONTACT = 1;private static final int FILE_CONTACT = 2;public static InfoReader getInstance(Reseller resellerInfo) {InfoReader instance = null;switch (resellerInfo.getType()) {case PROGRAM_CONTACT:instance = new ExecutiveReader();break;case FILE_CONTACT:instance = new OperationalReader();break;default:throw new IllegalArgumentException('Unknown Reseller');}return instance;}}And now The Caller InfoReader reader = InfoReaderFactory.getInstance(resellerInfo);reader.readInfo();The Benefits With the Factory Design Pattern to handle this case, we can achieve some benefits,Specifying a class for one task, means, easier to maintain since one class is for one purpose only (modularity/High Cohesion). i.e: Operational Reader is only to read data for Operational only, no other purpose. Just in case, one day in the future we need another Reader (say: NonOperationalReader). We just need create a new Class that extends (or implements) the InfoReader class and then we can override our own readInfo() function. This Caller class will have no impact. We just need to do some modification in the Factory code.public class InfoReaderFactory {private static final int PROGRAM_CONTACT = 1;private static final int FILE_CONTACT = 2;private static final int NEW_READER = 3;public static InfoReader getInstance(ResellerInfo resellerInfo) {InfoReader instance = null;switch (resellerInfo.getType()) {case PROGRAM_CONTACT:instance = new ExecutiveReader();break;case FILE_CONTACT:instance = new OperationalReader();break;case NEW_READER:instance = new NonOperationalReader();break;default:throw new IllegalArgumentException('Unknown Reseller');}return instance;}}Higher Reusability of Parent’s Component (Inheritance): Since we have parent class (InfoReader), we can put common functions and thingies inside this InfoReader class, and later all of the derivative classes (ExecutiveReader and OperationalReader) can reuse the common components from InfoReader . 
Avoiding code redundancy, which can also reduce coding time, even though this depends on how you write the code and cannot be guaranteed.

But it runs perfectly, should we change it? Obviously the answer is a big NO. This is only a case study, for your further experience and knowledge. OOP is good, apply it wherever it is applicable. But the most important thing is: if it's running, don't change it. It would be ridiculous to ruin entirely working code just to pursue some OOP approach. Don't be naive either, no one can achieve perfect code. What matters most is that we know what the better approach is. Reference: Case Study: Factory Design Pattern from our JCG partner Ronald Djunaedi at the Naming Exception blog....
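As a side note to the factory above: another common way to get rid of the switch statement entirely is to register each reader against its reseller type in a map. The sketch below is my own illustration, not part of the original code base; it reuses the InfoReader, ExecutiveReader, OperationalReader and Reseller names from the article and assumes getType() returns the same int codes used in InfoReaderFactory.

import java.util.HashMap;
import java.util.Map;

public class InfoReaderRegistry {

    private static final Map<Integer, InfoReader> READERS = new HashMap<Integer, InfoReader>();

    static {
        READERS.put(1, new ExecutiveReader());   // PROGRAM_CONTACT
        READERS.put(2, new OperationalReader()); // FILE_CONTACT
    }

    public static InfoReader getInstance(Reseller resellerInfo) {
        InfoReader reader = READERS.get(resellerInfo.getType());
        if (reader == null) {
            throw new IllegalArgumentException("Unknown Reseller");
        }
        return reader;
    }
}

Unlike the switch-based factory, this variant hands out shared instances, which is only safe if the readers are stateless; if they are not, store a small creator object in the map and build a fresh reader per call.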

Setting up and playing with Apache Solr on Tomcat

A while back a had a little time to play with Solr, and was instantly blown away by the performance we could achieve on some of our bigger datasets. Here is some of my initial setup and configuration learnings to maybe help someone get it up and running a little faster. Starting with setting both up on windows. Download and extract Apache Tomcat and Solr and copy into your working folders. Tomcat Setup If you want tomcat as a service install it using the following: bin\service.bat install Edit the tomcat users under conf.: <role rolename="admin"/> <role rolename="manager-gui"/> <user username="tomcat" password="tomcat" roles="admin,manager-gui"/>If you are going to query Solr using international characters (>127) using HTTP-GET, you must configure Tomcat to conform to the URI standard by accepting percent-encoded UTF-8. Add: URIEncoding=’UTF-8′ <connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" URIEncoding="UTF-8" />to the conf/server.xml Copy the contents of the example\solr your solr home directory D:\Java\apache-solr-3.6.0\home create the code fragment on $CATALINA_HOME/conf/Catalina/localhost/solr.xml pointing to your solr home. <?xml version="1.0" encoding="UTF-8"?> <context docBase="D:\Java\apache-tomcat-7.0.27\webapps\solr.war" debug="0" crossContext="true" > <environment name="solr/home" type="java.lang.String" value="D:\Java\apache-solr-3.6.0\home" override="true" /> </Context>Startup tomcat, login, deploy the solr.war. Solr Setup It should be available at http://localhost:8080/solr/admin/ To create a quick test using SolrJ the creates and reads data: Grab the following Maven Libs: <dependency> <groupid>org.apache.solr</groupId> <artifactid>apache-solr-solrj</artifactId> <version>3.6.0</version> <type>jar</type> <scope>compile</scope> </dependency> <dependency> <groupid>org.apache.httpcomponents</groupId> <artifactid>httpclient</artifactId> <version>4.1</version> <scope>compile</scope> </dependency> <dependency> <groupid>org.apache.httpcomponents</groupId> <artifactid>httpcore</artifactId> <version>4.1</version> <scope>compile</scope> </dependency> <dependency> <groupid>org.apache.james</groupId> <artifactid>apache-mime4j</artifactId> <version>0.6.1</version> <scope>compile</scope> </dependency> <dependency> <groupid>org.apache.httpcomponents</groupId> <artifactid>httpmime</artifactId> <version>4.1</version> <scope>compile</scope> </dependency> <dependency> <groupid>org.slf4j</groupId> <artifactid>slf4j-api</artifactId> <version>1.6.1</version> <scope>compile</scope> </dependency> <dependency> <groupid>commons-logging</groupId> <artifactid>commons-logging</artifactId> <version>1.1.1</version> <scope>compile</scope> </dependency> <dependency> <groupid>junit</groupId> <artifactid>junit</artifactId> <version>4.9</version> <scope>test</scope> </dependency>JUnit test: package za.co.discovery.ecs.solr.test; import java.io.File; import java.io.FileReader; import java.io.IOException; import java.net.MalformedURLException; import java.net.URISyntaxException; import java.util.ArrayList; import java.util.Collection; import org.apache.solr.client.solrj.SolrQuery; import org.apache.solr.client.solrj.SolrServer; import org.apache.solr.client.solrj.SolrServerException; import org.apache.solr.client.solrj.impl.HttpSolrServer; import org.apache.solr.client.solrj.response.QueryResponse; import org.apache.solr.common.SolrDocument; import org.apache.solr.common.SolrDocumentList; import org.apache.solr.common.SolrInputDocument; import org.junit.Assert; import 
org.junit.Before; import org.junit.Test; import org.junit.runner.RunWith; import org.junit.runners.JUnit4; @RunWith(JUnit4.class) public class TestSolr { private SolrServer server; /** * setup. */ @Before public void setup() { server = new HttpSolrServer("http://localhost:8080/solr/"); try { server.deleteByQuery("*:*"); } catch (SolrServerException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } } /** * Test Adding. * * @throws MalformedURLException error */ @Test public void testAdding() throws MalformedURLException { try { final SolrInputDocument doc1 = new SolrInputDocument(); doc1.addField("id", "id1", 1.0f); doc1.addField("name", "doc1", 1.0f); doc1.addField("price", 10); final SolrInputDocument doc2 = new SolrInputDocument(); doc2.addField("id", "id2", 1.0f); doc2.addField("name", "doc2", 1.0f); doc2.addField("price", 20); final Collection<solrinputdocument> docs = new ArrayList<solrinputdocument>(); docs.add(doc1); docs.add(doc2); server.add(docs); server.commit(); final SolrQuery query = new SolrQuery(); query.setQuery("*:*"); query.addSortField("price", SolrQuery.ORDER.asc); final QueryResponse rsp = server.query(query); final SolrDocumentList solrDocumentList = rsp.getResults(); for (final SolrDocument doc : solrDocumentList) { final String name = (String) doc.getFieldValue("name"); final String id = (String) doc.getFieldValue("id"); //id is the uniqueKey field System.out.println("Name:" + name + " id:" + id); } } catch (SolrServerException e) { e.printStackTrace(); Assert.fail(e.getMessage()); } catch (IOException e) { e.printStackTrace(); Assert.fail(e.getMessage()); } } }Adding data directly from the DB Firstly you need to add the relevant DB libs to the add classpath. Then create data-config.xml as below, if you require custom fields, those can be specified under the fieldstag in the schema.xml shown below the dataconfig.xml <dataconfig> <datasource name="jdbc" driver="oracle.jdbc.driver.OracleDriver" url="jdbc:oracle:thin:@localhost:1525:DB" user="user" password="pass"/> <document name="products"> <entity name="item" query="select * from demo"> <field column="ID" name="id" /> <field column="DEMO" name="demo" /> <entity name="feature" query="select description from feature where item_id='${item.ID}'"> <field name="features" column="description" /> </entity> <entity name="item_category" query="select CATEGORY_ID from item_category where item_id='${item.ID}'"> <entity name="category" query="select description from category where id = '${item_category.CATEGORY_ID}'"> <field column="description" name="cat" /> </entity> </entity> </entity> </document> </dataConfig>A custom field in the schema.xml: <fields> <field name="DEMO" type="string" indexed="true" stored="true" required="true" /> </fieldsAdd in the solrconfig.xml make sure to point the the data-config.xml, the handler has to be registered in the solrconfig.xml as follows <requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler"> <lst name="defaults"> <str name="config">data-config.xml</str> </lst> </requestHandler>Once that is all setup a full import can be done with the following: http://localhost:8080/solr/admin/dataimport?command=full-import Then you should be good to go with some lightning fast data retrieval. Reference: Setting up and playing with Apache Solr on Tomcat from our JCG partner Brian Du Preez at the Zen in the art of IT blog....
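Once the DataImportHandler is registered as above, the full-import can also be triggered from Java instead of pasting the URL into a browser. The snippet below is only an illustrative sketch using standard java.net classes (the host, port and path are the ones assumed throughout the article); it issues the same HTTP GET and prints Solr's XML status response.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class SolrFullImport {

    public static void main(String[] args) throws Exception {
        // same endpoint as in the article, with command=full-import
        URL url = new URL("http://localhost:8080/solr/admin/dataimport?command=full-import");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");

        BufferedReader reader = new BufferedReader(new InputStreamReader(connection.getInputStream()));
        String line;
        while ((line = reader.readLine()) != null) {
            System.out.println(line);
        }
        reader.close();
    }
}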

Turbo-charge your Android emulator for faster development

I came across an article, which claims to boost the Android emulator’s performance using Intel’s Hardware Accelerated Execution Manager (HAXM) driver. It got me excited and I decided to verify this claim. This blog entry is my story. My tools:Android SDK r20.0.3 Intellij Community Edition 11.1.3Basically, the special ‘enhancement’ provided by Intel is a special x86 Atom system image which utilizes the HAXM driver that enables better emulator performance. I’ll not repeat the technical details here, you can access the links below for more info. Caveat: This trick only works on Intel hardware and with the Virtualization Technology for Directed I/O (VT-d) enabled (usually via BIOS). Also, Intel x86 system images are currently (as of this blog posting) available for Android versions 2.3.3 (Gingerbread), 4.0.3 (ICD), and 4.1 (Jelly Bean) only. To avoid headaches, set the environment variable ANDROID_SDK_HOME to point to your Android SDK root folder before proceeding. High-level steps:  1. Download & install relevant packages via Android SDK Manager 2. Create Android Virtual Devices (AVD) 3. Create an Android Module project in IntelliJ CE 4. Test launching the Android application using the AVDs 1. Download relevant packages via Android SDK Manager Launch the SDK Manager and ensure the following is installed:Intel x86 Atom System Images (shown below is for Android 2.3.3) Intel x86 Emulator Accelerator (HAXM)Next, you’ll need to install the HAXM driver manually. Go to the Android SDK root folder and navigate to extras\intel\Hardware_Accelerated_Execution_Manager. Execute file IntelHaxm.exe to install. 2. Create Android Virtual Devices (AVD) Launch the AVD Manager and create 2 AVDs with the same options but different Target:DefaultAVD233 – Android 2.3.3 – API Level 10 IntelAVD233 – Intel Atom x86 System Image (Intel Corporation) – API Level 103. Create an Android Module project in IntelliJ CE In IntelliJ, create a new project of type ‘Android Module’, as shown:Under ‘Android SDK’, select the appropriate Android platform. You’ll need to point to your Android SDK root folder in order to choose the appropriate build target. As shown below, ‘Android 2.3.3′ is chosen:Ensure that the ‘Target Device’ option is set to Emulator, then click ‘Finish’ to complete the project creation. 4. Test launching the Android application using the AVDs Ok, we’ll test using the default Android 2.3.3 AVD first. At the IntelliJ menubar, select ‘Run’ > ‘Edit Configurations…’. Go to the ‘Target Device’ section. At the ‘Prefer Android Virtual Device’ option, select ‘DefaultAVD233′. Then Run the Android application. After a while, you should see the emulator window with the ‘Hello World’ message. To run with the Intel AVD, choose the ‘IntelAVD233′ instead. What’s most exciting is the speed of the emulator launch (timed from clicking ‘Run’ in IntelliJ up to the ‘Hello World’ message is shown in the emulator). The rough timings recorded using my notebook (Intel i3 380M, 3GB RAM):DefaultAVD233 – 1m 7s IntelAVD233 – 35sWow, that’s fast (~50% faster), without tuning other parameters to speed things up even further. Reference: Turbo-charge your Android emulator for faster development from our JCG partner Allen Julia at the YK’s Workshop blog....

Java 7: Meet the Fork/Join Framework

JSR-166(y) is the official name of this new feature which is included in Java 7. If you notice there is a ‘y’ in the name, this is because JSR-166 (Concurrency Utilities) is being added since Java 5, but it wont stop here as there are already plans to add new classes in Java 8 under the JSR-166(e). Check this page maintained by Doug Lea, the creator of JSR-166, for more information. According to Wikipedia, Parallelism is the ‘simultaneous execution of some combination of multiple instances of programmed instructions and data on multiple processors’ and Java has classes and interfaces to achieve this (sort of…) since DAY 1. You may know them as: java.lang.Thread, java.lang.Runnable, etc… What Concurrency Utilities ( java.util.concurrent package) does is simplify the way we code concurrent tasks, so our code is much simpler and cleaner. As developers we haven’t had to do anything when running our applications in machines with higher processing resources, obviously, the performance of our applications will improve, but are we really using the processing resources to the maximum? The answer is big NO. This post will show you how the Fork/Join framework will help us in using the processing resources to the maximum when dealing with problems that can be divided into small problems and all the solutions to each one of those small problems produce the solution of the big problem (like recursion, divide and conquer). What you need NetBeans 7+ or any other IDE that supports Java 7 JDK 7+ Blur on an image, example from OracleThe Basics The Fork/Join framework focuses on using all the processing resources available in the machine to improve the performance of the applications. It was designed to simplify parallelism in Divide and Conquer algorithms. The magic behind the Fork/Join framework is its work-stealing algorithm in which work threads that are free steal tasks from other busy threads, so all threads are working at all times. Following are the basics you should know in order to start using the framework:Fork means splitting the task into subtasks and work on them. Join means merging the solution of every subtask into one general solution. java.lang.Runtime use this class in order to obtain the number of processors available to the Java virtual machine. Use the method +availableProcessors():int in order to do so. java.util.concurrent.ForkJoinPool Main class of the framework, is the one that implements the work-stealing algorithm and is responsible for running the tasks. java.util.concurrent.ForkJoinTask Abstract class for the tasks that run in a java.util.concurrent.ForkJoinPool. Understand a task as a portion of the whole work, for example, if you need to to do something on an array, one task can work on positions 0 to n/2 and another task can work on positions (n/2) +1 to n-1, where n is the length of the array.java.util.concurrent.RecursiveAction Subclass of the abstract task class, use it when you don’t need the task to return a result, for example, when the task works on positions of an array, it doesn’t return anything because it worked on the array. The method you should implement in order to do the job is compute():void, notice the void return. java.util.concurrent.RecursiveTask Subclass of the abstract task class, use it when your tasks return a result. For example, when computing Fibonacci numbers, each task must return the number it computed in order to join them and obtain the general solution. 
When using the framework, you should define a threshold that indicates whether it is necessary to fork/join the tasks or whether you should compute the work directly. For example, when working on an array, you may specify that if the length of the array is bigger than 500_000_000 you should fork/join the tasks; otherwise, the array is small enough to compute directly. In essence, the algorithm you should follow is shown next:

if (the job is small enough) {
  compute directly
} else {
  split the work in two pieces (fork)
  invoke the pieces and join the results (join)
}

OK, too much theory for now, let’s review an example.
The Example
Blurring an image requires working on every pixel of the image. If the image is big enough, we are going to have a big array of pixels to work on, so we can use fork/join to process them and use the processing resources to the maximum. You can download the source code from the Java™ Tutorials site. Once you download the source code, open NetBeans IDE 7.x and create a new project. Then select Java Project with Existing Sources from the Java category in the displayed pop-up window, select a name and a project folder and click Next >. Now, select the folder where you downloaded the source code for the Blur on an image example, select the file ForkBlur.java and click Finish. The source code will be imported and a new project will be created. Notice that the new project is shown with errors; this is because Java 7 is not enabled by default. To fix this, right-click on the project name and select the option Properties. On the pop-up dialog, go to Libraries and select JDK 1.7 from the Java Platform combo box. Now, go to the option Sources and select JDK 7 from the Source/Binary Format combo box. Last but not least, increase the memory assigned to the virtual machine when running this application, as we’ll be accessing an array with 5 million positions (or more): go to the option Run and insert -Xms1024m -Xmx1024m in the VM Options text box. Click OK and your project should compile with no errors. Now, we need to find an image big enough so we can have a large array to work on. After a while, I found some great images (around 150 MB) from planet Mars, thanks to the Curiosity rover; you can download yours from here. Once you download the image, paste it into the project’s folder. Before we run the example, we need to modify the source code in order to control when to run it using the Fork/Join framework. In the ForkBlur.java file, go to line 104 in order to change the name of the image that we are going to use:

//Change for the name of the image you pasted
//on the project's folder.
String filename = "red-tulips.jpg";

Then, replace lines 130 to 136 with the following piece of code:

ForkBlur fb = new ForkBlur(src, 0, src.length, dst);
boolean computeDirectly = true;

long startTime = System.currentTimeMillis();
if (computeDirectly) {
    fb.computeDirectly();
} else {
    ForkJoinPool pool = new ForkJoinPool();
    pool.invoke(fb);
}
long endTime = System.currentTimeMillis();

Notice the computeDirectly flag. When true, we will NOT be using the Fork/Join framework; instead, we will compute the task directly. When false, the Fork/Join framework will be used. The compute():void method in the ForkBlur class implements the fork/join algorithm.
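If you don’t want to open the full ForkBlur source just to see that shape, here is a hedged sketch of the same threshold pattern applied to a plain int array. It is not the actual ForkBlur code (the class name and the trivial “work” are mine), only the general structure its compute() method follows:

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

// Illustrative sketch of the fork/join threshold pattern, not the real ForkBlur class.
public class ArrayWorker extends RecursiveAction {

    private static final int THRESHOLD = 10_000; // fork only above this slice size

    private final int[] data;
    private final int start;
    private final int length;

    ArrayWorker(int[] data, int start, int length) {
        this.data = data;
        this.start = start;
        this.length = length;
    }

    // The "real work" on this slice; ForkBlur does its pixel averaging here instead.
    private void computeDirectly() {
        for (int i = start; i < start + length; i++) {
            data[i] = data[i] * 2;
        }
    }

    @Override
    protected void compute() {
        if (length < THRESHOLD) {
            computeDirectly();            // small enough: no forking
            return;
        }
        int split = length / 2;           // otherwise fork into two halves and join them
        invokeAll(new ArrayWorker(data, start, split),
                  new ArrayWorker(data, start + split, length - split));
    }

    public static void main(String[] args) {
        int[] data = new int[5_000_000];
        new ForkJoinPool().invoke(new ArrayWorker(data, 0, data.length));
    }
}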
ForkBlur’s version is based on the length of the array: when the length of the (sub)array is bigger than 10,000, the task is forked; otherwise, it is computed directly. Below you can see my 2 processors when executing the Blur on an image example without using the Fork/Join framework (computeDirectly = true); it took about 14s to finish the work. You can see that the processors are working, but not to the maximum. When using the Fork/Join framework (computeDirectly = false), you can see them working at 100%, and it took almost 50% less time to finish the work. This video shows the complete process. I hope you can see how useful this framework is. Of course, you cannot use it all around your code, but whenever you have a task that can be divided into smaller tasks, you know who to call. Reference: Java 7: Meet the Fork/Join Framework from our JCG partner Alexis Lopez at the Java and ME blog....
apache-flume-logo

Distributed Apache Flume Setup With an HDFS Sink

I have recently spent a few days getting up to speed with Flume, Cloudera‘s distributed log offering. If you haven’t seen it and you deal with lots of logs, you are definitely missing out on a fantastic project. I’m not going to spend time talking about it, because you can read more in the users guide or in the Quora Flume topic in ways that are better than I can describe it. What I will tell you about is my experience setting up Flume in a distributed environment to sync logs to an HDFS sink.
Context
I have 3 kinds of servers, all running Ubuntu 10.04 locally:
hadoop-agent-1: the agent which is producing all the logs
hadoop-collector-1: the collector which is aggregating all the logs (from hadoop-agent-1, agent-2, agent-3, etc.)
hadoop-master-1: the Flume master node which is sending out all the commands
To add the CDH3 repository, create a new file /etc/apt/sources.list.d/cloudera.list with the following contents:

deb http://archive.cloudera.com/debian <RELEASE>-cdh3 contrib
deb-src http://archive.cloudera.com/debian <RELEASE>-cdh3 contrib

where <RELEASE> is the name of your distribution, which you can find by running lsb_release -c. For example, to install CDH3 for Ubuntu Lucid, use lucid-cdh3 in the lines above. (To install a different version of CDH on a Debian system, specify the version number you want in the -cdh3 section of the deb line. For example, to install CDH3 Update 0 for Ubuntu Maverick, use maverick-cdh3u0.) Optionally, add a repository key. Add the Cloudera Public GPG Key to your repository by executing the following command:

$ curl -s http://archive.cloudera.com/debian/archive.key | sudo apt-key add -

This key enables you to verify that you are downloading genuine packages.
Initial Setup
On both hadoop-agent-1 and hadoop-collector-1, you’ll have to install flume-node (flume-node contains the files necessary to run the agent or the collector):

sudo apt-get update
sudo apt-get install flume-node

On hadoop-master-1:

sudo apt-get update
sudo apt-get install flume-master

First let’s jump onto the agent and set that up. Tune the hadoop-master-1 and hadoop-collector-1 values appropriately, and change your /etc/flume/conf/flume-site.xml to look like:

<configuration>
  <property>
    <name>flume.master.servers</name>
    <value>hadoop-master-1</value>
    <description>This is the address for the config servers status server (http)</description>
  </property>
  <property>
    <name>flume.collector.event.host</name>
    <value>hadoop-collector-1</value>
    <description>This is the host name of the default 'remote' collector.</description>
  </property>
  <property>
    <name>flume.collector.port</name>
    <value>35853</value>
    <description>This default tcp port that the collector listens to in order to receive events it is collecting.</description>
  </property>
  <property>
    <name>flume.agent.logdir</name>
    <value>/tmp/flume-${user.name}/agent</value>
    <description>This is the directory that write-ahead logging data or disk-failover data gets written to. The agent watches this directory.</description>
  </property>
</configuration>

Now on to the collector. Same file, different config.
<configuration>
  <property>
    <name>flume.master.servers</name>
    <value>hadoop-master-1</value>
    <description>This is the address for the config servers status server (http)</description>
  </property>
  <property>
    <name>flume.collector.event.host</name>
    <value>hadoop-collector-1</value>
    <description>This is the host name of the default 'remote' collector.</description>
  </property>
  <property>
    <name>flume.collector.port</name>
    <value>35853</value>
    <description>This default tcp port that the collector listens to in order to receive events it is collecting.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoop-master-1:8020</value>
  </property>
  <property>
    <name>flume.agent.logdir</name>
    <value>/tmp/flume-${user.name}/agent</value>
    <description>This is the directory that write-ahead logging data or disk-failover data gets written to. The agent watches this directory.</description>
  </property>
  <property>
    <name>flume.collector.dfs.dir</name>
    <value>file:///tmp/flume-${user.name}/collected</value>
    <description>This is a dfs directory that is the final resting place for logs to be stored in. This defaults to a local dir in /tmp but can be a hadoop URI path such as hdfs://namenode/path/</description>
  </property>
  <property>
    <name>flume.collector.dfs.compress.gzip</name>
    <value>true</value>
    <description>Writes compressed output in gzip format to dfs. Value is boolean type, i.e. true/false</description>
  </property>
  <property>
    <name>flume.collector.roll.millis</name>
    <value>60000</value>
    <description>The time (in milliseconds) between when hdfs files are closed and a new file is opened (rolled).</description>
  </property>
</configuration>

Web Based Setup
I chose to do the individual machine setup via the master web interface. You can get to it by pointing your web browser at http://hadoop-master-1:35871/ (replace hadoop-master-1 with the public/private DNS or IP of your flume master, or set up /etc/hosts with a hostname). Ensure that the port is accessible from the outside through your security settings. At this point, it was easiest for me to let all hosts running flume talk to all ports on all other hosts running flume; you can certainly lock this down to the individual ports for security once everything is up and running. Now go to hadoop-agent-1 and hadoop-collector-1 and run /etc/init.d/flume-node start. If everything goes well, the master (whose address is specified in their configs) should be notified of their existence. Now you can configure them from the web. Click on the config link and then fill in the text lines as follows:

Agent Node: hadoop-agent-1
Source: tailDir("/var/logs/apache2/",".*.log")
Sink: agentBESink("hadoop-collector-1",35853)

Note: I chose to use tailDir since I will control rotating the logs on my own. I am also using agentBESink because I am OK with losing log lines if the case arises. Now click Submit Query and go back to the config page to set up the collector:

Agent Node: hadoop-collector-1
Source: collectorSource(35853)
Sink: collectorSink("hdfs://hadoop-master-1:8020/flume/logs/%Y/%m/%d/%H00","server")

This tells the collector that we are sinking to HDFS with an initial folder of ‘flume’. It will then log to sub-folders of the form flume/logs/YYYY/MM/DD/HH00 (e.g. flume/logs/2011/02/03/1300/server-.log). Now click Submit Query and go to the ‘master’ page, and you should see 2 commands listed as “SUCCEEDED” in the command history.
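Once both commands show up as SUCCEEDED, a quick way to confirm that events are really flowing end to end is to append a line to one of the tailed logs and then look at the sink directory on HDFS once the roll interval has passed. This is a hedged sketch, assuming the paths configured above and that the hadoop client is available on the box you run it from (the exact file names under the date folders will differ):

# append a test line to a file the agent is tailing
echo "hello flume" >> /var/logs/apache2/test.log

# after flume.collector.roll.millis (60000 ms above) a rolled file should appear under /flume/logs
hadoop fs -lsr /flume/logs | tail

# hadoop fs -text decompresses the gzipped output for a quick sanity check
# (substitute the actual date-based path printed by the listing above)
hadoop fs -text /flume/logs/2011/02/03/1300/server-* | head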
If they have not succeeded, ensure a few things have been done (there are probably more, but this is a handy start):
Always use double quotes (“) since single quotes (‘) aren’t interpreted correctly. UPDATE: single quotes are interpreted correctly, they are just not accepted intentionally (thanks jmhsieh).
In your regex, use something like “.*\\.log” since the ‘.’ is part of the regex.
In your regex, ensure that your backslashes are properly escaped: “foo\\bar” is the correct version of trying to match “foo\bar”.
Additionally, there are also tables of Node Status and Node Configuration; these should match up with what you think you configured. At this point everything should work. Admittedly, I had a lot of trouble getting to this point, but with the help of the Cloudera folks and the users on irc.freenode.net in #flume, I was able to get things going. The logs sadly aren’t too helpful here in most cases (but look anyway, because they might provide you with more info than they provided for me). If I missed anything in this post or there is something else I am unaware of, then let me know. Reference: Distributed Apache Flume Setup With an HDFS Sink from our JCG partner Evan Conkle at Evan Conkle’s blog....