

Class diagram generation from Java source

UMLGraph allows the declarative specification and drawing of UML class and sequence diagrams. The specification is done in text, which is then transformed into the appropriate graphical representation. UMLGraph is implemented as a javadoc doclet (a program satisfying the doclet API, which specifies the content and format of the output generated by the javadoc tool). The output of UMLGraph must then be post-processed with the Graphviz dot program, so to draw class diagrams with UMLGraph you need both javadoc and Graphviz installed on your computer.

Maven plugin details

UMLGraph can be easily integrated with an existing Maven-based application. Below are the plugin details that need to be configured:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <configuration>
    <doclet>org.umlgraph.doclet.UmlGraphDoc</doclet>
    <docletArtifact>
      <groupId>org.umlgraph</groupId>
      <artifactId>doclet</artifactId>
      <version>5.1</version>
    </docletArtifact>
    <additionalparam>-horizontal -attributes -enumconstants -enumerations -operations -types -visibility -inferrel -inferdep -hide java.* -collpackages java.util.*</additionalparam>
    <show>public</show>
  </configuration>
</plugin>

UMLGraph depends on Graphviz, which must already be installed on the machine. Also, with the above Maven settings, if you configure the GRAPHVIZ_HOME environment variable you need not specify the docletpath in the plugin details.

Steps to configure UMLGraph:
1. Download and install Graphviz.
2. Set the GRAPHVIZ_HOME environment variable.
3. Add the above plugin details to your pom.xml and configure additionalparam to your needs.
4. Execute 'mvn javadoc:javadoc'.

Sample

Below is the sample generated using the above configuration over our Pizza entity: pizza_class_diagram.png

More configuration

You can tune this diagram to your needs. Please refer to the UMLGraph class diagram options for more configuration.
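The Pizza entity used for the sample diagram is not shown in the post; below is a minimal sketch of what such a class might look like. The class name comes from the post, but the fields and methods are assumptions; any simple domain class will do for trying out the doclet.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical Pizza entity to run the UMLGraph doclet against.
// Field and method names are assumptions; the post only mentions the class name.
public class Pizza {

    private String name;   // e.g. "Margherita"
    private double price;
    private List<String> toppings = new ArrayList<String>();

    public Pizza(String name, double price) {
        this.name = name;
        this.price = price;
    }

    public void addTopping(String topping) {
        toppings.add(topping);
    }

    public String getName() { return name; }
    public double getPrice() { return price; }
    public List<String> getToppings() { return toppings; }
}
```

Running 'mvn javadoc:javadoc' over a module containing a class like this should produce a diagram showing the class, its attributes and operations, and (via -collpackages java.util.*) the collection relationship for the toppings field.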
Reference: Class diagram generation from Java source from our JCG partner Abhishek Jain at the NS.Infra blog....

Eclipse Community Survey 2012

Each year we survey the Eclipse community to gather some insight into what developers are doing with Eclipse and open source. We have published the results, and the detailed data is available [xls] [ods]. An embedded version of the report is at the end of this post. Each year there are always some key trends shown in the results [2011 results]. Here are some insights that stood out for me:

1. Git Momentum Continues to Grow

Git definitely has the momentum in the source code management market. Git/GitHub usage increased from 13% (2011) to 27% (2012). Subversion continues to decline but is still the most popular. For the first time this year we broke out Git and GitHub. I was surprised to see the vast majority of people specify Git (23%) and only 4.5% specify GitHub. This seems to show a lot of internal Git usage. Potentially a great opportunity for tool providers.

2. Maven Usage Accelerating

Maven usage increased from 31% (2011) to 42% (2012). This might be a reflection of the better integration between Eclipse and Maven. If so, kudos to the m2eclipse project team and Tycho.

3. Spring and EJBs Continue to be Popular Server Frameworks; Equinox and OSGi Increasing Too

Both Spring and EJBs continue to be the most popular frameworks for people doing server-side development. Spring continues to be the most popular, but EJBs gained some ground in 2012. It was great to see Equinox and OSGi runtimes almost double their usage, from 6.8% (2011) to 12.3% (2012).

4. Mobile Computing = Android + iOS

Not surprisingly, mobile computing is dominated by Android and iOS. More people have deployed mobile applications: 43% have developed internal or external applications, compared to 35% in 2011. Android and Apple iOS continue to dominate as the key platforms. It is a bit surprising that more developers are not using cross-platform frameworks; 60% claim to use only the mobile OS SDK. jQuery Mobile (28.6%) and PhoneGap (17.9%) are the most popular mobile frameworks.

5. What Motivates a Developer?
This year we asked some questions to explore what motivates a developer to participate in open source and spend their free time building applications. Motivation to participate in open source projects seems to be driven by: 1) a sense of responsibility – 54% stated they participate to 'give back and support' and 36% due to their belief in the FOSS ethos; 2) learning – 36% claim it is a great way to learn new technologies; and 3) improving the project – 33% claim they participate due to a needed feature or bug fix. Somewhat surprisingly, only 11% claimed it was due to being paid to contribute, and 6% said it was an effective way to promote a consulting business.

We also asked how many developers build software/applications in their free time, outside of work. I was a bit surprised that 84% claimed to spend some amount of personal time developing software. The key reason is to learn new technologies: 74% answered they 'enjoy programming and learning new technologies' and 71% 'keep my skills sharp'. An important lesson for anyone in the software industry that is targeting developers: make it easy for developers to learn your technology.

6. Corporate Policies Towards Open Source Becoming More Positive

Each year we ask what the corporate policy towards open source participation is. It is nice to see movement towards more positive policies on contributions and participation. 61% reported their corporate policies allowed them to actively participate in open source projects, compared to 58% in 2011. We definitely need to get more companies to allow active participation, but at least we are moving in the right direction.

Thank you to everyone that participated in the survey. I always enjoy seeing the results. Please feel free to leave a comment on what you find interesting in the results. Eclipse survey 2012 report [final] Reference: Eclipse Community Survey Result for 2012 from our JCG partner Ian Skerrett at Ian Skerrett's blog....

Two Years of Experience Doesn’t make you “Senior”

Two years of experience doesn't make you "senior". Except maybe in high school. I don't mean this in a negative sort of way. I mean it in a trying-to-help-you-out sort of way.

I've worked for a relatively small number of companies in my twenty-plus years of professional life. Small by the software industry's standards, anyway. I've been involved in the hiring process in every job I've had. In most cases, I've been involved in the full process: assembling the job description, pruning through cover letters and resumes, interviewing, and making recommendations to hire. In my opinion, pruning through cover letters and resumes is the hardest part. There have been times when I've literally received more than a thousand applications for a single job posting. In general, the first step is to prune that list down to a manageable number (say a dozen or so) of people that you can talk to on the phone. From that list, you hope to narrow it down to a short list (e.g. four or five) of people that you can bring in for a face-to-face interview. You can't really get to know somebody from a resume and cover letter; they're used by an employer to sort out who they want to get to know.

Winnowing a thousand applications into a dozen or so requires some tricks. I tend to look for two things in an applicant: do they have the skills, and do they pay attention to detail. I don't care if a resume is printed on cobalt blue paper. I don't care if it uses a fancy font (though I do care if the selected font makes it difficult to read). I don't care if it's presented in some neat-o origami. I don't care if you won an Olympic gold medal. Actually, I do care about the Olympic gold medal: that's pretty cool, but it's still not enough to get you to the next round. The cover letter and resume must highlight relevant skills. I expect that an application for a job lists at least most of the skills required to do that job.
The cover letter and resume should be grammatically correct and all words should be spelled correctly. I can read in both correct and American English. Pick one. On the topic of detail, let me return to the title of this post: Two Years of Experience Doesn’t make you “Senior”. Do not tell me that you graduated from college or university two years ago and have been working as a “senior” anything in the field. With two years of experience and a little luck, you may wind up as a “lead” developer; but you’re not senior. You need a few more years of real industry experience before you can call yourself senior. If you’re a young person just starting out in this business, I give you this advice: don’t oversell yourself, represent yourself honestly, pay attention to the details, and do a little research on the companies you’re applying to. The software industry values potential. Reference: Two Years of Experience Doesn’t make you “Senior” from our JCG partner Wayne Beaton at the Eclipse Hints, Tips, and Random Musings blog....

Are Agile plans Better because they are Feature-Based?

In Agile Estimating and Planning, Mike Cohn quotes Jim Highsmith on why Agile projects are better: "One of the things I keep telling people is that agile planning is "better" planning because it utilizes features (stories, etc.) rather than tasks. It is easy to plan an entire project using standard tasks without really understanding the product being built. When planning by feature, the team has a much better understanding of the product." In the original post on a Yahoo mailing group, Highsmith also says: "Sometimes key people in the agile movement have exaggerated at times to get noticed, I've done it myself at times – gone a little overboard to make a point. People then take this up and push it too far." This is clearly one of those times.

Activity-based Planning vs. Feature-based Planning

The argument runs like this. Activity-based plans described in a WBS and Gantt charts are built up from "standardized tasks". These tasks or activities are not directly tied to the features that the customer wants or needs – they just describe the technical work that the development team needs to do, work that doesn't make sense to the other stakeholders. According to Highsmith, you can build a plan like this without understanding what the software that you are building is actually supposed to do, and without understanding what is important to the customer. An Agile plan, working from a feature backlog, is better because it "forces the team to think about the product at the right level – the features". I don't think that I have worked on a software development project, planned any which way, where we didn't think about and plan out the features that the customer wanted.
Where we didn't track and manage the activities needed to design, build and deliver software with these features, including the behind-the-scenes engineering work and heavy lifting: defining the architecture; setting up the development, build and test environments and tools; evaluating and implementing or building frameworks, libraries and other plumbing; defining APIs and taking care of integration with other systems and databases; security work, performance work, operations work, and system and integration testing; and especially dealing with outside dependencies. Some of this work won't make sense to the customer. It's not the kind of work that is captured in a feature list. But that doesn't mean that you should pretend that it doesn't need to be done, and it doesn't mean that you shouldn't track it in your plans. Good project planning makes explicit both the features that the customer cares about and when they will be worked on, and the important technical work that needs to get done. It has to reflect how the team thinks and works.

Activity-Based Planning is so Wrong in so very many Ways

In one of the first chapters, "Why Planning Fails", Cohn enumerates the weaknesses of activity-based planning. First, most activity-based planners don't bother to prioritize the work that needs to be done by what the customer wants or needs, because they assume that everything in the scope needs to be, and will be, done. So activities are scheduled in a way that is convenient for the development team. Which means that when the team inevitably realizes that they are over budget and won't hit their schedule, they'll have to cut features that are important to the customer – more important than the work that they've already wasted time on. Maybe.
But there's nothing stopping teams using activity-based planning from sequencing the work by customer priority and by technical dependencies and technical risk – which is what all teams, including Agile teams, have to do. This is how teams work when they follow a Spiral lifecycle, and how teams work that deliver in incremental releases using Phased/Staged Delivery, or that Design and Build to Schedule, making sure that they get the high-priority work done early in order to hit a hard deadline. All of these are well-known software project planning and delivery approaches described in Steve McConnell's Rapid Development and other books. Everyone that I know who delivers projects in a "traditional, plan-driven" way follows one of these methods, because they know that a pure, naïve, plan-everything-upfront serial Waterfall model doesn't work in the real world. So we can stop pretending otherwise.

Another criticism of activity-based planning is that it isn't possible to accurately and efficiently define all of the work and all of the detailed dependencies for a software development project far in advance. Of course it isn't. This is what Rolling Wave planning is for – lay out the major project milestones, phases and dependencies, and plan the next release or next few months/weeks/whatever in detail as you move forward. Although Cohn does a good job of explaining Rolling Wave planning in the context of Agile projects, it has been a generally-recognized good planning practice for any kind of project for a long time now.

Agile plans aren't better because they are Feature-Based

These, and the other arguments against activity-based planning in this book, are examples of a tired rhetorical technique that Glen Alleman describes perfectly as: "Tell a story of someone doing dumb things on purpose and then give an example of how to correct the outcome using an agile method". Sure, a lot of Waterfall projects are badly run.
And, yeah, sure, an Agile project has a better chance of succeeding than a poorly-planned, badly-managed, serial Waterfall project. But it's not because Agile planning is feature-based or because activity-based planning is wrong. People can do dumb things no matter what approach they follow. The real power in Agile planning is in explicitly recognizing change and continuously managing uncertainty and risk through short iterations. Fortunately, that's what the rest of Cohn's book is about. Reference: Are Agile plans Better because they are Feature-Based? from our JCG partner Jim Bird at the Building Real Software blog....

Testing Abstract Classes and Template Method Pattern

From Wikipedia: "A template method defines the program skeleton of an algorithm. One or more of the algorithm steps can be overridden by subclasses to allow differing behaviors while ensuring that the overarching algorithm is still followed." Typically this pattern is composed of two or more classes: an abstract class providing template methods (non-abstract) that call abstract methods implemented by one or more concrete subclasses. Often the abstract template class and the concrete implementations reside in the same project, but depending on the scope of the project, the concrete classes may be implemented in another project. In this post we are going to see how to test the template method pattern when the concrete classes are implemented in an external project, or more generally, how to test abstract classes.

Let's see a simple example of the template method pattern. Consider a class responsible for receiving a vector of integers and calculating its Euclidean norm. The integers could come from multiple sources, and it is left to each project to provide a way to obtain them. The template class looks like:

public abstract class AbstractCalculator {

    public double euclideanNorm() {
        int[] vector = this.read();
        int total = 0;
        for (int element : vector) {
            total += (element * element);
        }
        return Math.sqrt(total);
    }

    public abstract int[] read();
}

Now another project could extend the previous class and provide an implementation of the abstract calculator by implementing the read() method:

public class ConsoleCalculator extends AbstractCalculator {

    public int[] read() {
        int[] data = new int[0];
        Scanner scanner = new Scanner(System.in);
        // data = read required data from console
        return data;
    }
}

The developer who writes a concrete implementation will test only the read() method; he can "trust" that the developer of the abstract class has tested the non-abstract methods.
But how are we going to write unit tests over the euclideanNorm() method if the class is abstract and an implementation of read() is required? The first approach could be creating a fake implementation:

public class FakeCalculator extends AbstractCalculator {

    private int[] data;

    public FakeCalculator(int[] data) {
        this.data = data;
    }

    public int[] read() {
        return this.data;
    }
}

This is not a bad approach, but it has some disadvantages:

- Tests will be less readable: readers must know about the existence of these fake classes and what exactly they do.
- As a test writer you will spend time implementing fake classes. In this case it is simple, but your project could have more than one abstract class without an implementation, or even with more than one abstract method.
- The behaviour of fake classes is "hard-coded".

A better way is to use Mockito to mock only the abstract method while the implementations of the non-abstract methods are called:

public class WhenCalculatingEuclideanNorm {

    @Test
    public void should_calculate_correctly() {
        AbstractCalculator abstractCalculator = mock(AbstractCalculator.class, Mockito.CALLS_REAL_METHODS);
        doReturn(new int[]{2, 2}).when(abstractCalculator).read();
        assertThat(abstractCalculator.euclideanNorm(), is(2.8284271247461903));
    }

    @Test
    public void should_calculate_correctly_with_negative_values() {
        AbstractCalculator abstractCalculator = mock(AbstractCalculator.class, Mockito.CALLS_REAL_METHODS);
        doReturn(new int[]{-2, -2}).when(abstractCalculator).read();
        assertThat(abstractCalculator.euclideanNorm(), is(2.8284271247461903));
    }
}

Mockito simplifies the testing of abstract classes by calling real methods and only stubbing the abstract ones. Note that because real methods are called by default, the doReturn() schema must be used instead of the typical when() ... thenReturn() structure.
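If you prefer not to pull in Mockito for this, the same effect can be achieved with an anonymous subclass defined right inside the test. Below is a minimal self-contained sketch (the abstract class is repeated as a nested class so the example compiles on its own; the norm of {2, 2} is the same 2.8284271247461903 as in the Mockito tests above):

```java
// Sketch: testing a template method via an anonymous subclass instead of a mock.
public class AnonymousSubclassExample {

    // Reproduced here so the example is self-contained.
    static abstract class AbstractCalculator {
        public double euclideanNorm() {
            int[] vector = read();
            int total = 0;
            for (int element : vector) {
                total += element * element;
            }
            return Math.sqrt(total);
        }
        public abstract int[] read();
    }

    public static void main(String[] args) {
        // The anonymous subclass plays the role of the fake/mock:
        // it stubs read() while euclideanNorm() runs the real template logic.
        AbstractCalculator calculator = new AbstractCalculator() {
            @Override
            public int[] read() {
                return new int[] {2, 2};
            }
        };
        System.out.println(calculator.euclideanNorm()); // prints 2.8284271247461903
    }
}
```

The trade-off is the same as with the fake class: the stubbed data is written inline, but because the subclass lives inside the test it stays visible to the reader, which addresses the readability concern above.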
Of course this approach can only be used if your project does not contain a concrete implementation of the algorithm, or if your project will be part of a third-party library used by another project. In the other cases, the best way of attacking the problem is by testing the implemented class. Download sourcecode Reference: Testing Abstract Classes (and Template Method Pattern in Particular) from our JCG partner Alex Soto at the One Jar To Rule Them All blog....

Software Principles are like some Life Principles

Software principles are useful tools for design and implementation, and they help us produce quality products. However, software principles can be compromised at times. They don't always have to be followed, as there are exceptions to the rule. In some ways, they are similar to some life principles, and this blog explores that idea.

Software and Life

In life we have ethics and morals that we live by. Ethics and morals manifest themselves as life principles. They give us a framework to become better people, respect one another, and ultimately improve our quality of life.

In the software industry we have software design principles. They are rules we operate under in order to make the products we develop elegant, easy to understand, and maintainable. Software products run our economy or make our day-to-day lives easier, and software principles play a large role in allowing that to happen.

However, software design principles are not meant to be dogmatic. They are not meant to be strictly adhered to. The use of software principles should be evaluated within the prism of trade-offs. Software principles are essentially rules of thumb and can be broken if it's the most pragmatic thing to do.

Software principles are like some life principles . . . but unlike others. To illustrate my point, let's consider some life principles that can be considered absolute, i.e., that should never be broken no matter what the circumstances.

Don't Cheat, and Be Nice

Take the rule "Don't cheat." Under no circumstances would I teach my son that it is permissible to cheat. It's not OK to cheat on a test at school. It's not OK to cheat on your taxes, and it's not OK to cheat at a board game at home. No matter what the context, big or small, cheating is not beneficial. It only hurts others and ultimately yourself.
Software principles aren't like the cheating principle. Another example would be "Never treat a person as a means to an end." It is unethical to use a person strictly as a means to an end with disregard for their humanity. People should be treated as human beings, not as objects. Under no circumstances would I teach my child to "use" someone just for the sake of personal gain and devoid of respect. Software principles aren't like ethical principles . . . we can break them if needed.

So what are software principles like then, and what software principles am I talking about? Most life principles we live by are general rules of thumb; they are not absolutes. Software principles are like that. Here are a few examples of what I mean:

DRY

We live by the principle of "Always tell the truth", but this rule doesn't always apply. Take for example white lies. If your wife asks you, "Do I look fat in this dress?", you would be asinine to say yes. Most of us would say "No honey, you look great!", even though your beautiful wife may be slightly overweight (no big deal in my eyes, personally).

In software, we have the DRY principle: Don't Repeat Yourself. This is something you should mostly do, and it can greatly contribute to clean code. But would you really want to create a full-fledged Template Method or Strategy pattern to save 1 or 2 lines of code? Sometimes violating the DRY principle can avoid excessive pattern usage, which can cripple a project and make code unintelligible. Evaluate the trade-offs for DRY and make the best decision.

Law of Demeter

How about the life principle of "Always eat healthy"? Yes, we should eat healthy and watch our diet, in general, so we can live a quality life. But we are allowed to break the rule on holidays and eat the fried turkey and pecan pie. It is permissible to go out with the guys and down some beers and wings once in a while. It's not going to kill you.
If eating unhealthy is the exception, it is fine. The Law of Demeter is a software principle that enables loose coupling and limits the knowledge one component has of another. Following this rule keeps your code understandable and limits dependencies. But even though it's called a "law", it should be viewed more as a guideline. If you are dealing with an anemic object and just need to get some data, it is permissible for a client to dig through objects to get what it needs. The alternative would be to blow up the API with several needless methods, which is a documented disadvantage of the Law of Demeter.

Conclusion

When designing software, we should understand that software principles should be followed in order to produce quality code. However, use them in a pragmatic manner and don't pursue a software principle so hard that it makes your life, and code, miserable. Evaluate your design in terms of trade-offs. After all, we certainly have life principles that we don't always follow, and software principles are the same way. Do what's best for value-added effort and leave the dogma behind. Reference: Software Principles are like some Life Principles from our JCG partner Nirav Assar at the Assar Java Consulting blog....

A revolution with Business Activity Monitor (BAM) 2.0

Producing middleware that is both lean and enterprise-worthy is a difficult job. It's either non-existent or requires innovative thinking (a lot of it) and a lot of going back and forth with your implementations. Very risky business, but if you get it right, it puts you far ahead of anyone else. That's why we thought of re-writing WSO2 BAM from scratch and taking a leap, rather than chugging away slowly with iterative fixes. If you prefer to hear me rather than reading this, please catch a webinar on this at http://bit.ly/xKxm8R.

Diagram courtesy of http://softwarecreation.org/2008/ideas-in-software-development-revolution-vs-evolution-part-1/

When you try to monitor your business activities, you need to plug in to your servers and capture events. It sounds easy enough, so what's the big deal? you may ask. Here are a few road blocks we hit with our initial BAM 1.x version:

Performance – We plugged in to our ESBs and App Servers and all metrics were perfect. It nicely showed request counts, response times, etc. It was perfect as long as the load was low. If one server started sending 1000 events/sec, things started getting ugly. Even worse, if we plugged in to a few servers and started getting 1 billion events/day, that would have been a nightmare from the word go. We couldn't even fathom what would happen at that type of scale.

Scalability – We need to store events and process them. Sadly, we discovered the hard way that this means we need to scale in many different ways:

Event load – We need to scale in terms of handling large amounts of events. We didn't have a high-performance server, but no matter how good our performance could be, there is still a breaking point. After that, you need to scale.

Storage – If you store 1000 events a day, your data will grow. And all of us hate deleting old email to get more inbox space, so naturally everyone wants to keep their events.
Processing power – When you want to analyze the events that you collect, a single server can only give you so much processing power. You need to scale out your analytics. Another 'oh, so obvious' thing that we learnt eventually.

Customizability – We provided a lovely set of dashboards that showed all you wanted to know about your server and API metrics. But no one is ever satisfied with what they have. They want more. They want to monitor their own metrics, analyze their own data, and put up their own graphs. And, of course, they want to do it now, not in 2 months.

In May 2011, we decided to start a whole new initiative to re-write WSO2 BAM from scratch. We analyzed the problem and made a few decisions. Here are a few of them:

Divide and conquer – We divided the problem. We have to aggregate, analyze and present data, so we built separate components for each, keeping in mind that we need to scale each individually. We mapped these into the event receiver, the analyzer framework and a presentation layer. Data agents are the link between anyone who wants to send events and the BAM server. The WSO2 Carbon platform allows us to easily uninstall a component from any server. This means we can take the BAM distro and uninstall other components to make an Event Receiver BAM server, or an Analyzer BAM server. It's just a click of a button.

The 3 main components of BAM 2.0

Scalable and fast storage – We chose Apache Cassandra as our storage solution. I do not want to argue that it's the best data store ever, but it works well for us. It allows us to do fast writes to store a large amount of data, quickly. It's also built to scale: scaling up Cassandra takes minutes, not weeks, and scaling up doesn't mean it's going to cost you. And it's written in Java; being a Java house, that allows us to hack around the code.

Fast protocol – We chose Apache Thrift as our default protocol. There are many arguments against it, but it holds up well for us.
It's fast and it does its job. It allows us to maintain sessions and supports a bunch of languages. One key point is that Cassandra uses Thrift as well, allowing us to gain more performance by streaming data into Cassandra without deserializing.

Scalable analytics – We chose to write our own analytics language, but if it doesn't suit you, you can plug in your own Java code. Hadoop is unavoidable when it comes to scaling analytics, so we decided to have a Hadoop mode for large amounts of data and a non-Hadoop mode, so that anyone can just use BAM without worrying about a Hadoop cluster.

Gadget based dashboards/reports – Drag-and-drop visualizations are very attractive when you don't want to spend weeks writing visualization code. We developed a gadget generator so you can quickly and easily visualize your analyzed data.

After a couple of milestones, we were able to spin off an alpha. It's available here: http://dist.wso2.org/products/bam/2.0.0-Alpha/wso2bam-2.0.0-ALPHA.zip. It is not the silver bullet and documentation is still WIP. But if we haven't already reached our destination, it's within our reach now. Reference: A revolution with Business Activity Monitor (BAM) 2.0 from our JCG partner Mackie Mathew at the dev_religion blog....

MapReduce with MongoDB

MapReduce is a software framework introduced by Google in 2004 to support distributed computing on large data sets on clusters of computers. You can read about MapReduce here. MongoDB is an open source document-oriented NoSQL database system written in C++. You can read more about MongoDB here.

1. Installing MongoDB

Follow the instructions from the MongoDB official documentation available here. In my case, I followed the instructions for OS X and it worked fine with no issues. I used sudo port install mongodb to install MongoDB, and one issue I faced was related to the Xcode version I had. Basically, I had installed Xcode while I was on OS X Leopard and didn't update it after moving to Lion. Once I updated Xcode, I could install MongoDB with MacPorts with no issue. Another hint – sometimes your Xcode installation doesn't work when you install it directly from the App Store. What you can do is get Xcode from the App Store, then go to the Launch Pad, find Install Xcode and install it from there.

2. Running MongoDB

Starting MongoDB is simple: just type mongod in the terminal or in your command console. By default this will start the MongoDB server on port 27017 and will use the /data/db/ directory to store data – yes, that is the directory that you created in step 1. In case you want to change those default settings, you can do so while starting the server:

mongod --port [your_port] --dbpath [your_db_file_path]

You need to make sure that your_db_file_path exists and is empty when you start the server for the first time.

3. Starting the MongoDB shell

We can start the MongoDB shell to connect to our MongoDB server and run commands from there. To start the MongoDB shell and connect to a MongoDB server running on the same machine with the default port, you only need to type mongo in the command line.
If you are running the MongoDB server on a different machine or with a different port, use the following:

mongo [ip_address]:[port]
e.g.: mongo localhost:4000

4. Let's create a database first

In the MongoDB shell, type the following:

> use library

The above is supposed to create a database called 'library'. Now, to see whether your database has been created, type the following, which is supposed to list all the databases:

> show dbs;

You will notice that the database you just created is not listed there. The reason is that MongoDB creates databases on demand: a database gets created only when we add something to it.

5. Inserting data into MongoDB

Let's first create two books with the following commands:

> book1 = {name : "Understanding JAVA", pages : 100}
> book2 = {name : "Understanding JSON", pages : 200}

Now, let's insert these two books into a collection called books:

> db.books.save(book1)
> db.books.save(book2)

The above two statements will create a collection called books under the database library. The following statement will list the two books which we just saved:

> db.books.find();
{ "_id" : ObjectId("4f365b1ed6d9d6de7c7ae4b1"), "name" : "Understanding JAVA", "pages" : 100 }
{ "_id" : ObjectId("4f365b28d6d9d6de7c7ae4b2"), "name" : "Understanding JSON", "pages" : 200 }

Let's add a few more records:

> book = {name : "Understanding XML", pages : 300}
> db.books.save(book)
> book = {name : "Understanding Web Services", pages : 400}
> db.books.save(book)
> book = {name : "Understanding Axis2", pages : 150}
> db.books.save(book)

6. Writing the map function

Let's process this books collection in such a way that we find the number of books with fewer than 250 pages and the number with 250 pages or more:

> var map = function() {
      var category;
      if ( this.pages >= 250 )
          category = 'Big Books';
      else
          category = 'Small Books';
      emit(category, {name: this.name});
  };

Here, the map function will group the books into the following members:
{"Big Books" : [{name: "Understanding XML"}, {name : "Understanding Web Services"}]}
{"Small Books" : [{name: "Understanding JAVA"}, {name : "Understanding JSON"}, {name : "Understanding Axis2"}]}

7. Writing the Reduce function.

> var reduce = function(key, values) {
      var sum = 0;
      values.forEach(function(doc) {
          sum += 1;
      });
      return {books: sum};
  };

8. Running MapReduce against the books collection.

> var count = db.books.mapReduce(map, reduce, {out: "book_results"});
> db[count.result].find()
{ "_id" : "Big Books", "value" : { "books" : 2 } }
{ "_id" : "Small Books", "value" : { "books" : 3 } }

The above tells us that we have 2 Big Books and 3 Small Books.

Everything done above using the MongoDB shell can be done with Java too. The following is a Java client for it. You can download the required dependent jar from here.

import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;
import com.mongodb.MapReduceCommand;
import com.mongodb.MapReduceOutput;
import com.mongodb.Mongo;

public class MongoClient {

    public static void main(String[] args) {
        try {
            Mongo mongo = new Mongo("localhost", 27017);
            DB db = mongo.getDB("library");
            DBCollection books = db.getCollection("books");

            BasicDBObject book = new BasicDBObject();
            book.put("name", "Understanding JAVA");
            book.put("pages", 100);
            books.insert(book);

            book = new BasicDBObject();
            book.put("name", "Understanding JSON");
            book.put("pages", 200);
            books.insert(book);

            book = new BasicDBObject();
            book.put("name", "Understanding XML");
            book.put("pages", 300);
            books.insert(book);

            book = new BasicDBObject();
            book.put("name", "Understanding Web Services");
            book.put("pages", 400);
            books.insert(book);

            book = new BasicDBObject();
            book.put("name", "Understanding Axis2");
            book.put("pages", 150);
            books.insert(book);

            String map = "function() { " +
                         "var category; " +
                         "if ( this.pages >= 250 ) " +
                         "category = 'Big Books'; " +
                         "else " +
                         "category = 'Small Books'; " +
                         "emit(category, {name: this.name});}";

            String reduce = "function(key, values) { " +
                            "var sum = 0; " +
                            "values.forEach(function(doc) { " +
                            "sum += 1; " +
                            "}); " +
                            "return {books: sum};}";

            MapReduceCommand cmd = new MapReduceCommand(books, map, reduce, null,
                    MapReduceCommand.OutputType.INLINE, null);
            MapReduceOutput out = books.mapReduce(cmd);

            for (DBObject o : out.results()) {
                System.out.println(o.toString());
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Reference: MapReduce with MongoDB from our JCG partner Prabath Siriwardena at the Facile Login blog....
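Both the shell and the Java client above implement the same grouping. As a self-contained illustration of that logic, here is a minimal plain-Java sketch (a hypothetical helper, no MongoDB driver required) that reproduces the map/reduce categorization in memory; the class and record names are made up for this example:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// In-memory sketch of the map/reduce above: the classifier plays the role of
// the "map" step (assign each book a category) and counting() plays the role
// of the "reduce" step (count the books emitted per category).
public class BookMapReduceSketch {

    record Book(String name, int pages) {}

    static Map<String, Long> countByCategory(List<Book> books) {
        return books.stream()
                .collect(Collectors.groupingBy(
                        b -> b.pages() >= 250 ? "Big Books" : "Small Books", // "map"
                        Collectors.counting()));                            // "reduce"
    }

    public static void main(String[] args) {
        List<Book> books = List.of(
                new Book("Understanding JAVA", 100),
                new Book("Understanding JSON", 200),
                new Book("Understanding XML", 300),
                new Book("Understanding Web Services", 400),
                new Book("Understanding Axis2", 150));
        // Same result as the shell run: 2 Big Books, 3 Small Books.
        System.out.println(countByCategory(books));
    }
}
```

Of course, the whole point of MongoDB's mapReduce is that the grouping runs inside the database rather than in application memory; this sketch only shows the shape of the computation.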

Spring & JSF integration: Select Items

With JSF, to use comboboxes, listboxes and checkboxes, you need to be aware of the javax.faces.model.SelectItem class. A SelectItem represents a single selectable option; it contains both the information needed for rendering and the value that should be bound if the item is selected. Most of the time SelectItems are constructed with a value and a label:

new SelectItem(Title.MISS, "Miss");

Working with SelectItems before JSF 2.0 was often tedious, as you needed to write code to adapt your domain objects into SelectItems. JSF 2.0 has improved things a lot: you can now dynamically construct SelectItems using EL expressions. For example:

<h:selectOneMenu>
  <f:selectItems value="#{customerRepository.all}" var="customer" label="#{customer.name}"/>
</h:selectOneMenu>

This certainly helps to reduce the amount of boiler-plate code. However, I still think there are things we can do to make SelectItems even easier to use, especially when working with Spring. With that in mind, I have been developing a <s:selectItems> component, intended as a drop-in replacement for <f:selectItems>.

The first thing we can do is reduce boiler-plate typing by removing the need to specify a var attribute. With <s:selectItems>, if the var attribute is not specified, it defaults to item. So the code above can be written:

<h:selectOneMenu>
  <s:selectItems value="#{customerRepository.all}" label="#{item.name}"/>
</h:selectOneMenu>

In the above example, the value is bound to a repository interface that returns a Collection of Customer entities. As with the standard <f:selectItems> component, you can also bind to an Array or DataModel. In addition, the new component supports any comma-separated String value:

<h:selectOneMenu>
  <s:selectItems value="Java, Spring, JavaServer Faces"/>
</h:selectOneMenu>

The next thing that <s:selectItems> can help with is null values. It is quite common to need a "Please Select" option in drop downs to represent null.
In vanilla JSF this can often mean additional mark-up for each component:

<h:selectOneMenu>
  <f:selectItem label="--- Please Select ---" noSelectionOption="true" itemValue=""/>
  <s:selectItems value="#{items}"/>
</h:selectOneMenu>

Instead of needing this additional mark-up for each element, our component will automatically insert a "Please Select" option whenever it is linked to a UISelectOne component. You can use the includeNoSelectionOption attribute to override this behavior. The label used for the "no selection option" defaults to "--- Please Select ---", but you can customize and internationalize this text easily by adding a org.springframework.context.MessageSource to your ApplicationContext that can resolve the code "spring.faces.noselectionoption".

On the subject of MessageSource, the <s:selectItems> component will, whenever possible, try to create the label of the SelectItem using a org.springframework.springfaces.message.ObjectMessageSource. I have blogged in the past about how to convert Objects to messages, and this component simply makes use of those ideas.

The new component has helped us when creating the SelectItems to display, but what about dealing with form submit? How do you convert the submitted String option back to a real Object? In the initial example above, we are binding to JPA Customer entities; the values will display just fine, but when you submit the form a "Conversion Error" is displayed, because JSF does not know how to get from the submitted String back to the Customer object. The usual answer here is to develop your own javax.faces.convert.Converter implementation, but this is often problematic: your select item value will frequently be some complex object that is hard to represent in its entirety as a String. There is an interesting technique that you can use when writing a Converter that will be used with a UISelectOne or UISelectMany component.
You actually only need to write code to convert from the Object to a String; conversion in the other direction can be accomplished by iterating the SelectItems and returning the single Object value that, when converted to a String, matches your submitted value. You can read more about the idea in this blog post by Arjan Tijms. Using this technique with the <s:selectItems> component is really easy: simply provide an itemConverterStringValue attribute that will be used to create the unique getAsString() value:

<h:selectOneMenu>
  <s:selectItems value="#{customerRepository.all}" label="#{item.name}" itemConverterStringValue="#{item.id}"/>
</h:selectOneMenu>

In actual fact, the itemConverterStringValue is optional. If you don't specify it, the toString() method of the object will be used or, in the case of a JPA @Entity, the @Id field will automatically be used. You are still free to write and attach your own Converter if you need to; in such cases the itemConverterStringValue is ignored.

Finally, there is one more trick that <s:selectItems> can perform. If your select component is bound to a Boolean or an Enum, then the value attribute can be omitted entirely. The select items will be built from all possible options that the binding supports ("Yes"/"No" for Booleans, or the complete set of Enum values). This also works with typed collections. For example, the following will display the options "Java", "Spring" and "JavaServer Faces" (assuming you have an appropriate ObjectMessageSource):

public enum Technology {
    JAVA, SPRING, JAVASERVER_FACES
}

public class Bean implements Serializable {
    private Set<Technology> technologies = new HashSet<Technology>();
    // ... getters and setters
}

<h:selectManyCheckbox value="#{bean.technologies}">
  <s:selectItems/>
</h:selectManyCheckbox>

If you want to check out any of this code, take a look at the org.springframework.springfaces.selectitems package from the GitHub Project.
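The iterate-the-select-items trick described above can be sketched in plain Java. This is a simplified, hypothetical helper (the class name, the Function parameter, and passing the candidate items as a List are all inventions for this illustration; a real JSF Converter implements javax.faces.convert.Converter and pulls the candidates from the component's select items):

```java
import java.util.List;
import java.util.function.Function;

// Sketch of the "iterate the select items" conversion trick: we only define
// Object -> String; the String -> Object direction is recovered by scanning
// the candidate items for the one whose string form matches the submission.
public class SelectItemConverterSketch<T> {

    private final Function<T, String> asString; // e.g. item -> String.valueOf(item.getId())

    public SelectItemConverterSketch(Function<T, String> asString) {
        this.asString = asString;
    }

    public String getAsString(T value) {
        return asString.apply(value);
    }

    public T getAsObject(List<T> selectItems, String submitted) {
        // Reverse conversion: find the single item whose getAsString matches.
        for (T item : selectItems) {
            if (getAsString(item).equals(submitted)) {
                return item;
            }
        }
        throw new IllegalArgumentException("No select item matches: " + submitted);
    }
}
```

The key design point is that the string form only needs to be unique among the items currently offered, not a full serialization of the object, which is why an id (or even toString()) is usually enough.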
Reference: Integrating Spring & JavaServer Faces : Select Items from our JCG partner Phillip Webb at the Phil Webb’s Blog blog....

Give your developers prod access – it’s trust

This isn’t a new idea and plenty of companies already do this. I had a discussion with a co-worker about this last week and wanted to get my thoughts down here so I can laugh at them later on when I get burned, which I haven’t yet, but have been assured I will.

There is an idea that there are two organizations in a web development company that have different and apparently opposing roles: Development and Operations. This quickly turns into a discussion about who does what:

Roles

Developers:
- Add capabilities to the existing system (write code, evaluate new components, integrate 3rd party products, etc.)
- Optimize previously added capabilities of the existing system (improve performance, fix bugs, design new architectures)
- Help turn business needs into business value (codify a business requirement into a deliverable product)

Operations:
- Add capabilities to the existing system (implement monitoring, configuration management, evaluate new applications & products)
- Optimize previously added capabilities of the existing system (improve performance, fix problems, design new architecture)
- Help turn business needs into business value (reduce cost to deliver, improve availability, improve security)

These groups do the same things in different ways. You could just as well have the two groups be “API Developers” and “GUI Developers” – they have sufficiently different goals to create conflict and have different points of view. What are both groups doing? They are building and operating a service – period.

But But But….

“But developers with access to production could get access to customer data.”
“But developers think differently than Operations and they could cause outages.”
“But developers might go in and change something without telling anyone.”
“But developers might break something I have to fix and that would piss me off.”

All of the above have happened to me in one job or another – every one of them. In every single case, you know who did it? An Operations team member.
If I had $50 for every time a developer did it to me in environments where they had production access, I’d maybe have $50. This is my experience – yours may be completely different.

You trust them to write your code – that’s the product that your company runs on. Do you put layer upon layer in place to make sure they aren’t inserting malicious code? Do you prevent them from walking out of your building with your entire codebase? I know in larger organizations this may be true – if that’s going through your head, re-read the title of this blog. I don’t care about hamstrung behemoth companies. Do your ops folks have access to your code? You trust that they won’t go break something in there, but you don’t trust that developers won’t go break something in production?

The reality

Developers care about the products they build, just like Operations does. If they don’t care, then you have bigger problems, and giving them production access will only make those problems evident faster – which is good. Developers also write code with a certain understanding of how the production world works, and when they don’t have production access, that understanding is often wrong. Misunderstanding, lack of data, and lack of an ability to predict the outcome of code in a production environment are – in my opinion – more often fatal than any stupid or malicious act by a developer.

Yes, you can try to build a production-like environment for their testing, but there is nothing like the real thing – there never will be. It will always be simulated, it will always fall short in areas, and it will never be viewed as a perfectly accurate representation of production. I know there are lots of arguments out there on both sides of this, but I know where I fall. I also know this runs counter to many of the regulatory “requirements” out there. I’m not ignoring that, but I am choosing to challenge us to come up with a better way instead of giving ourselves a false sense of security by blocking access.
The benefit

So why would I want to give my developers access to production? Have you ever been given the master key to an office? How about being given your parents’ car keys for the first time? You may not have thought about it at the time, but there is tremendous pride and appreciation that comes from being trusted. All the silly team-building games folks play – they’re about building trust. Another word for trust is respect. When you give your developers access to production you are saying a few things quietly but clearly:

- I value you as a team member and I value your contribution – I want to maximize what you can do for us.
- I have an expectation that you will learn about our production environment and leverage this access to write better code.
- I trust you – please do not violate that trust.
- I think you are competent and professional and believe you’ll do the right thing.

You can tell developers these things without giving them production access – but it’s much less convincing.

The safety net

So what happens that first time a developer drops a database table in production thinking he was working on a development environment? You run a post-mortem, and the developers come to it (NOT just the one who caused the problem):

- Keep it blameless: you are trying to understand what information and decisions led up to the event – NOT who is responsible for it.
- Identify what the timeline was and what people’s understanding of the situation was that led to this decision.
- Identify gaps in communication – why did that developer think they were working on a development box?
- Identify gaps in your defense – you did have backups, right? You were able to recover the db table, right? You had a plan to communicate with customers during the outage, right?
- Create a set of corrective actions that will protect against this in the future. For example, make sure production machines all have a bright red prompt so you know you are working on production.
DO NOT use this as a reason to remove developer access to production. If the problem repeats itself, your post-mortem process continues. If the same offenders keep doing the same thing, you have to ask yourself whether they should work for you. This applies to Operations team members just as much as it does to Developers. If you can’t manage the responsibility of production access, then you don’t belong at a company that gives production access to the whole team.

Also, always keep in mind that your Operations teams can and will make the same mistakes you are worried about your Devs making. Except they’ll make them more often, because they have this implicit trust that they have a “right” to work in that environment, and there are no environments like production in which they can test their changes. It’s no different – it’s just how you frame it.

Reference: Give your developers prod access – it’s trust from our JCG partner Aaron Nichols at the Operation Bootstrap blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd