


Sending emails with Java

I started writing this post as a simple "how to send an email" using Java, but along the way I found I needed to briefly explain a few more things. So here is a kind of all-in-one summary about sending emails with Java. The JavaMail package, which sits outside the Java SE platform but is included in Java EE, provides a platform to build mail and messaging applications. Let's go with an example.

Sending a simple text message

```java
// Common variables
String host = "your_smtp_server";
String from = "from_address";
String to = "to_address";

// Set properties
Properties props = new Properties();
props.put("mail.smtp.host", host);
props.put("mail.debug", "true");

// Get session
Session session = Session.getInstance(props);

try {
    // Instantiate a message
    Message msg = new MimeMessage(session);

    // Set the FROM address
    msg.setFrom(new InternetAddress(from));

    // The recipients can be more than one, so we use an array, but you can
    // use 'new InternetAddress(to)' for a single address.
    InternetAddress[] address = {new InternetAddress(to)};
    msg.setRecipients(Message.RecipientType.TO, address);

    // Set the message subject and the date we sent it
    msg.setSubject("Email from JavaMail test");
    msg.setSentDate(new Date());

    // Set message content
    msg.setText("This is the text for this simple demo using JavaMail.");

    // Send the message
    Transport.send(msg);
} catch (MessagingException mex) {
    mex.printStackTrace();
}
```

Alternatively, instead of using:

```java
msg.setText("This is the text for this simple demo using JavaMail.");
```

you can use the following to set the message content:

```java
msg.setContent("This is the text for this simple demo using JavaMail.", "text/plain");
```

Checking an email address

Here is a little trick to check, using a regular expression, whether an email address is well formed:

```java
Pattern rfc2822 = Pattern.compile(
    "^[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*@"
    + "(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?$");

if (rfc2822.matcher(EMAIL_ADDRESS).matches()) {
    // Well-formed email
}
```

Multipart messages

That's fine, but usually you don't send simple text messages. Instead you send nice HTML body messages with bold or italic text, images, and so on. NOTE: see the resources section below for the MIME format, which extends the data you can attach to an email to allow multiple parts, attachments, etc. When you write a multipart message, the content is composed of different parts; for example, one part is the message written as simple text and a second part is the same message written in an enhanced way using HTML. The client that reads the message is then responsible for rendering the appropriate part depending on its capabilities.

```java
...
// Here create two parts and set them as the message content

// Create and fill the first part
MimeBodyPart part1 = new MimeBodyPart();
part1.setText("This is part one of this multipart message.");

// Create and fill the second part
MimeBodyPart part2 = new MimeBodyPart();
part2.setText("This is part two of this multipart message.");

// Create the Multipart
Multipart mp = new MimeMultipart();
mp.addBodyPart(part1);
mp.addBodyPart(part2);

// Set the message's content
msg.setContent(mp);
...
```

Sending attachments

Terrific, we know how to send a plain text email and something more impressive like a multipart message with HTML content. The next step is to send an email with some files attached too. Creating an email with an attached file is similar to creating a multipart message, where one part can be the text of the message and another part is the attached file. The secret is in the next lines:

```java
...
// Create a new part for the attached file
MimeBodyPart part3 = new MimeBodyPart();

// Put a file in the new part
FileDataSource fds = new FileDataSource("THE_FILE_NAME");
part3.setDataHandler(new DataHandler(fds));
part3.setFileName(fds.getName());

// 'mp' is the previously created 'MimeMultipart' object
mp.addBodyPart(part3);

// 'msg' is the previously created 'Message' object
msg.setContent(mp);
...
```

HTML messages

Creating a message or multipart message with HTML content is really easy; simply specify the MIME type in the setContent method:

```java
...
MimeBodyPart htmlPart = new MimeBodyPart();
htmlPart.setContent("<h1>Sample</h1><p>This is a sample HTML part</p>", "text/html");
...
```

Attaching images within the HTML code

If you write a rich message using HTML you can, of course, add images using the 'img' tag. If the image is referenced from an external server there is no problem, but how do you attach an image to the message and render it within the HTML message body? The idea is as follows: first you attach the image file and set an identifier, and second you write your HTML code and reference the image identifier in the 'img' tag.

```java
...
// Create and fill the HTML part
MimeBodyPart htmlPart = new MimeBodyPart();
htmlPart.setContent("<h1>Sample</h1><p>This is a sample HTML part with an attached image</p>"
    + "<img src='cid:some_image_id'>", "text/html");

// Create a new part for the attached image and set the CID image identifier
MimeBodyPart imagePart = new MimeBodyPart();
FileDataSource fds = new FileDataSource("THE_IMAGE_FILE_NAME");
imagePart.setDataHandler(new DataHandler(fds));
imagePart.setHeader("Content-ID", "some_image_id");

mp.addBodyPart(htmlPart);
mp.addBodyPart(imagePart);
...
```

Anything more to say?

Having arrived at this point you are almost a master of sending emails. You know how to send simple emails, multipart emails with rich HTML content, and how to attach files and images to your message. What more can a programmer desire? Probably an easier-to-use API, and that is what the Apache Commons Email project offers. See the 'user guide' section at http://commons.apache.org/email/userguide.html to understand what I mean. It offers a more abstract API, closer to humans than to protocols.

Resources

- JavaMail – JavaMail project home page.
- Apache Commons Email – Apache Commons subproject that simplifies working with the JavaMail API. See the 'user guide' section.
- MIME (Multipurpose Internet Mail Extensions) – Description of the MIME format for multipart emails.

Reference: Sending emails with Java from our JCG partner Antonio Santiago at the "A Curious Animal" Blog.
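As a quick sanity check of the address pattern from the article, here is a self-contained version you can compile and run; the EmailCheck class name and the isWellFormed helper are mine, not part of the original post, and note that the pattern as written only accepts lower-case letters:

```java
import java.util.regex.Pattern;

public class EmailCheck {

    // The RFC 2822-style pattern from the article, split for readability.
    private static final Pattern RFC2822 = Pattern.compile(
        "^[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*@"
        + "(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?$");

    // Returns true if the address matches the well-formedness pattern.
    public static boolean isWellFormed(String address) {
        return RFC2822.matcher(address).matches();
    }

    public static void main(String[] args) {
        System.out.println(isWellFormed("john.doe@example.com"));  // true
        System.out.println(isWellFormed("not an email"));          // false
    }
}
```

For real-world input you might want to compile the pattern with Pattern.CASE_INSENSITIVE, since upper-case addresses are legal; and remember that a matching address is merely well formed, not necessarily deliverable.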

DI in Scala: Cake Pattern pros & cons

I've been looking at alternatives to Java-style DI and DI containers which would use pure Scala; a promising candidate is the Cake Pattern (see my earlier blog post for information on how the Cake Pattern works). FP enthusiasts also claim that they don't need any DI frameworks, as higher-order functions are enough. Recently Debasish Ghosh also blogged on a similar subject; I think his article is a very good introduction to the subject. Below are some problems I encountered with the Cake Pattern. (Higher-order functions are coming up in the next post.) If you have solutions to any of them, let me know!

Parametrizing the system with a component implementation

First of all, it is not possible to parametrize a system with a component implementation. Supposing I have three components: DatabaseComponent, UserRepositoryComponent, UserAuthenticatorComponent with implementations, the top-level environment/entry point of the system would be created as follows:

```scala
val env = new MysqlDatabaseComponentImpl
  with UserRepositoryComponent
  with UserAuthenticatorComponent
```

Now, to create a testing environment with a mock database, I would have to do:

```scala
val env = new MockDatabaseComponentImpl
  with UserRepositoryComponent
  with UserAuthenticatorComponent
```

Note how much of the code is the same. This isn't a problem with 3 components, but what if there are 20? All of them but one have to be repeated just to change the implementation of one component. This clearly leads to quite a lot of code duplication.

Component configuration

Quite often a component needs to be configured. Let's say I have a UserAuthenticatorComponent which depends on UserRepositoryComponent. However, the authenticator component has an abstract val encryptionMethod, used to configure the encryption algorithm. How can I configure the component? There are two ways.
The abstract val can be concretized when defining the env, e.g.:

```scala
val env = new MysqlDatabaseComponentImpl
  with UserRepositoryComponent
  with UserAuthenticatorComponent {
  val encryptionMethod = EncryptionMethods.MD5
}
```

But what if I want to re-use a configured component? An obvious answer is to extend the UserAuthenticatorComponent trait. However, if that component has any dependencies (which, in the Cake Pattern, are expressed using self-types), they need to be repeated, as self-types are not inherited. So a reusable, configured component could look like this:

```scala
trait UserAuthenticatorComponentWithMD5 extends UserAuthenticatorComponent {
  // dependency specification duplication!
  this: UserRepositoryComponent =>
  val encryptionMethod = EncryptionMethods.MD5
}
```

If we don't repeat the self-types, the compiler will complain about incorrect UserAuthenticatorComponent usage.

No control over initialization order

A problem also related to configuration is that there is no type-safe way to ensure that the components are initialized in the proper order. Suppose, as above, that the UserAuthenticatorComponent has an abstract encryptionMethod which must be specified when creating the component. If we have another component that depends on UserAuthenticatorComponent:

```scala
trait PasswordEncoderComponent {
  this: UserAuthenticatorComponent =>
  // encryptionMethod comes from UserAuthenticatorComponent
  val encryptionAlgorithm = Encryption.getAlgorithm(encryptionMethod)
}
```

and initialize our system as follows:

```scala
val env = new MysqlDatabaseComponentImpl
  with UserRepositoryComponent
  with UserAuthenticatorComponent
  with PasswordEncoderComponent {
  val encryptionMethod = EncryptionMethods.MD5
}
```

then at the moment of initialization of encryptionAlgorithm, encryptionMethod will be null! The only way to prevent this is to mix in UserAuthenticatorComponentWithMD5 before the PasswordEncoderComponent. But the type checker won't tell us that.
Pros

Don't get me wrong: it's not that I don't like the Cake Pattern – I think it offers a very nice way to structure your programs. For example, it eliminates the need for factories (of which I'm not a very big fan), and it nicely separates dependencies on components from dependencies on data (*). But still, it could be better ;).

(*) Here each code fragment has in fact two types of arguments: normal method arguments, which can be used to pass data, and component arguments, expressed as the self-type of the containing component. Whether these two types of arguments should be treated differently is a good question :).

What are your experiences with DI in Scala? Do you use a Java DI framework, one of the approaches described above, or some other way?

Reference: DI in Scala: Cake Pattern pros & cons from our JCG partner Adam Warski at Blog of Adam Warski.

Getters and Setters Are Not Evil

Every now and then some OOP purist comes along and tells us that getters and setters are evil, because they break encapsulation, and that you should never, ever use getters and setters because they are a sign of bad design and lead to maintainability nightmares. Well, don't worry, because those people are wrong. Not completely wrong, of course, because getters and setters can break encapsulation, but in the usual scenario for regular business projects they don't.

What is the purpose of encapsulation? First, to hide how exactly an object performs its job. And to protect the internal data of an object, so that no external object can violate its state space. In other words, only the object knows which combination of field values is valid and which isn't. Exposing fields to the outside world can leave the object in an inconsistent state. For example, what if you could change the backing array in an ArrayList without setting the size field? The ArrayList instance would be inconsistent and would be violating its contract. So: no getter and setter for the array list's internal array.

But the majority of objects for which people generate getters and setters are simple data holders. They don't have any rules to enforce on their state, their state space consists of all possible combinations of values, and furthermore there is nothing they can do with that data. And before you call me "anemic": it doesn't matter whether you are doing "real OOP" with domain-driven design, where you have business logic and state in the same object, or a fat service layer plus anemic objects. Why doesn't it matter? Because even in domain-driven projects you have DTOs, and DTOs are simply data holders, which need getters and setters.

Another thing is that in many cases your object state is public anyway.
Tools use reflection to make use of objects: view technologies use EL to access objects, ORMs use reflection to persist your entities, Jackson uses reflection to serialize your objects to JSON, JasperReports uses reflection to get details from its model, and so on. Virtually anything you do in a regular project requires data being passed outside of the application: to the user, to the database, to the printer, as the result of an API call. And you have to know what that data is. In EL you have ${foo.bar} – with or without a getter, you consume that field. In an ORM you need to know what database types to use. In the documentation of your JSON API you should specify the structure (another topic here is whether REST-like services need documentation).

The overall point here is that you win nothing by not having getters and setters on your data holder objects. Their internal state is public anyway, and it has to be. And any change in those fields means a change has to be made in other places. Change is something people fear: "you will have to change it everywhere in your project" .. well, yeah, you do, because it has changed. If you change the structure of an address from String to an Address class, it's likely that you should revisit all the places it is used and split it there as well. If you change a double to BigDecimal, you'd better go and fix all your calculations.

Another point: the examples above focused on reading the data. However, you must set that data somehow. You have roughly three options – constructor, builder, setters. A constructor with 15 arguments is obviously not an option. A builder for every object is just too verbose. So we use setters, because it is more practical and more readable. And that's the main point here – setters and getters are practical when used on data holder objects. I have supported quite big projects that had a lot of setters and getters, and I had absolutely no problem with that.
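The point that frameworks reach your object's state through its accessors can be seen with nothing more than the JDK's own bean introspection, which is the same getter/setter naming convention that EL, Jackson and ORMs rely on. The BeanDemo and User classes below are hypothetical stand-ins for the kind of data holder described above:

```java
import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.ArrayList;
import java.util.List;

public class BeanDemo {

    // A typical data holder: no invariants to protect, just state plus accessors.
    public static class User {
        private String name;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    // Discover readable/writable properties through the getter/setter
    // convention, not through the private fields.
    public static List<String> propertyNames(Class<?> beanClass) {
        List<String> names = new ArrayList<>();
        try {
            for (PropertyDescriptor pd
                    : Introspector.getBeanInfo(beanClass).getPropertyDescriptors()) {
                if (pd.getReadMethod() != null && pd.getWriteMethod() != null) {
                    names.add(pd.getName());
                }
            }
        } catch (IntrospectionException e) {
            throw new IllegalStateException(e);
        }
        return names;
    }

    public static void main(String[] args) {
        // Only 'name' has both a getter and a setter ('class' is read-only).
        System.out.println(propertyNames(User.class));
    }
}
```

Rename getName to fetchName and the property silently disappears from every such tool, which is exactly why the accessor pair is effectively part of the object's public state.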
In fact, tracing "who sets that data" is the same as "where did this object (that encapsulates its data) come from". And yes, in an ideal OO world you wouldn't need data holders / DTOs, and there would be no flow of data in the system. But in the real world there is. To conclude: be careful with setters and getters on non-data-only objects. Encapsulation is a real thing. When designing a library, a component or some base framework in your project, don't simply generate getters and setters. But for the regular data object – don't worry, there's no evil in that.

Reference: Getters and Setters Are Not Evil from our JCG partner Bozhidar Bozhanov at Bozho's tech blog.

Oracle WebLogic Java Cloud Service – Behind the scenes.

On the OpenWorld side of things, one big and probably unexpected announcement was that Oracle is finally embracing the cloud movement and offering its own public cloud service. Beyond the official announcements and some more or less content-free posts on The Aquarium (here and here), you don't find a lot of information about what exactly to expect from the offering. With this post I am trying to bring some light to it by interpreting the publicly disclosed information. As usual: I could be right or wrong. Watch out for some more posts from Reza Shafii; he recently started blogging about the "Java Cloud Service".

Larry: 'if you need a cloud, you need a cloud'

This famous quote from Larry Ellison's keynote simply expresses what the cloud move means for Oracle. Having been on a completely private "own-your-own-exa-cloud" strategy since the announcement of the Exadata machine, this shift is a huge one. Seeing Larry presenting with slides that have the word "Java" on them over and over leaves most of us wondering, and could even scare the hell out of the rest of us. One could think that the new Java EE 7 specification and its move toward PaaS and IaaS comes right in time for the new strategy. But before I fire at will, let's get back to the initial motivation of this post: what are they running underneath, and what could you, as a developer or customer, expect to find in the Oracle cloud?

Oracle software and hardware

The official specs of the Java Cloud Service give a very high-level overview of what to expect. WebLogic 11g is the Java EE container of choice, meaning that you will probably be able to deploy Java EE 5 applications only with the first version. Even the supported Java EE spec versions strongly point in that direction (EJB 2.1, 3.0; Servlet 2.5; JSP 2.1). The database, on the other hand, is an 11gR2.
If you look back at some other slides presented at OOW, it's not too brave a guess that Oracle is running this stuff on a combination of Exalogic and Exadata. I also guess that you will be able to monitor and administer your WebLogic domains with the help of the new Enterprise Manager Cloud Control. Given that Oracle is promising instant provisioning, I also assume that they are using the Virtual Assembly Builder in combination with some preconfigured templates to get your WebLogic domain up and running. It would be nice to see a dedicated OVM instance for every single cloud account. The fact that you have to target your application to a complete cluster indicates that you will not be able to select the managed servers explicitly during deployment.

Development for the Oracle Cloud

But what will development for the cloud be like? According to the public features, there will be tight integration with JDeveloper, Eclipse and NetBeans. Seeing the command line interface together with Ant, I believe that the first IDE integrations will have very limited capabilities and you will probably simply be able to deploy your stuff to the cloud. I assume that in any of the three IDEs you will have a new server configuration option which handles all the needed configuration (host, port, user, password) and that a simple "run on server" will start the deployment process. The spec also mentions a whitelist (a check for supported APIs) and an SDK, so it's fair to guess that the IDEs will also do some pre-flight checks on your applications before putting them onto your cloud. Your applications obviously don't need to implement proprietary Oracle APIs (as Google requires for authorization or DB access), but again, looking at the specs, it seems as if you will not be able to use the complete set of WebLogic and Java EE 5 APIs. Seeing EJB mentioned with explicitly "local interfaces" could indicate that RMI will not be in it.
What that means for failover and session replication is unclear as of today. It also seems as if you shouldn't think about deploying anything other than war and ear files; whether this will include the WebLogic library mechanism is unclear. It seems as if anything other than HTTP isn't allowed to hit your applications; not even inbound SOAP web services are possible according to the specs. A nice little Oracle add-on is that you can obviously take advantage of the full ADF stack (Faces, Business Components). Seeing the ADF Web Services Data Controls mentioned separately leads me to believe that there are other restrictions or version constraints on Data Controls in general.

Conclusion

This is the part of the post where I probably should be very excited and tell you that this is the most open and best cloud offering available today. Maybe I am a little too early for a general conclusion, but let's look at the plain facts as of the time of writing:

Contra:
- Only Java EE 5 (with restrictions) => That's a few years old now, right?
- Pricing => Unclear until now. They could screw up the whole thing instantly!
- Only WebLogic => What about GlassFish? We need an ExaFish!

Pro:
- WebLogic => That's fine. Especially as I expect the license to be included with the subscription?
- Running on Exa stuff => Probably the finest hardware available. Under full control of the manufacturer.
- Only Java EE => No additional, proprietary stuff needed. Portable. Standards based.

Let's lean back and relax a bit until the first official versions are available. I am very much looking forward to getting my hands on this stuff.

Reference: Oracle WebLogic Java Cloud Service – Behind the scenes from our JCG partner Markus Eisele at the "Enterprise Software Development with Java" Blog.

If I had more time I would have written less code

In a blatant rip-off of the quote usually attributed to Blaise Pascal, "if I had more time, I would have written a shorter letter", I had a thought the other day, perhaps:

If I had more time, I would have written less code

It seems to me that the more time I spend on a problem, the less code I usually end up with; I'll refactor and usually find a more elegant design, resulting in less code and a better solution. This is at the heart of the programmer's instinct that good code takes longer to write than crap. But as Uncle Bob tells us:

The only way to go fast is to go well

How then do we square this contradiction? The way to go fast is to go well; but if I sacrifice quality I can go faster, so I'm able to rush crap out quickly.

What makes us rush?

First off: why do developers try to rush through work? It's the same old story that I'm sure we've all heard a million times:

- There's an immovable deadline: advertising has already been bought, or there's some external event – could be Y2K or a big sports event
- We need to recognise the revenue for this change in this quarter so we make our numbers
- Somebody, somewhere has promised this and doesn't want to lose face by admitting they were wrong – so the dev team get to bend so those who over-committed don't look foolish

What happens when we rush?

The most common form of rushing is to skip refactoring. The diabolical manager looks at the Red, Green, Refactor cycle and can see a 33% improvement if those pesky developers will just stop their meddlesome refactoring. Of course, it doesn't take a diabolical manager; developers will sometimes skip refactoring in the name of getting something functional out the door quicker.

"It's ok though, we'll come back and fix this in phase 2"

How many times have we heard this shit? Why do we still believe it? Seriously, I think my epitaph should be "Now working on phase 2". Another common mistake is to dive straight in to writing code.
When you're growing your software guided by tests, this probably isn't the end of the world, as a good design should emerge. But sometimes 5 minutes' thought before you dive in can save you hours of headache later. And finally, the most egregious rush I've ever seen is attempting to start development from "high level requirements". As the "inconsequential little details" are added in, the requirements look completely different, and all that work spent getting a "head start" is now completely wasted. You now either bin it and do over, or waste more precious time re-working something you should never have written in the first place.

Does rushing work?

This is the heart of the contradiction: it feels like it does. That's the reason we do it, isn't it? When compelled by the customer to finish quicker, we rush; we cut corners to try and get things done quicker. It feels like I'm going quicker when I cut corners. If only it wasn't for all those unexpected delays that wasted my time, I would have been finished quicker. It's not my fault those delays cropped up – I couldn't have predicted that.

- It's the product manager's fault for not making the requirements clear
- It's the architect's fault for not telling me about the new guidelines
- It's just unlucky that a couple of bugs QA found were really hard to fix

When you cut corners, you pay for it later. Time not spent discussing the requirements in detail leads to misunderstandings; if you don't spot them later on, you might have to wait until the show and tell for the customer to spot them – and now you've wasted loads of time. Time not spent thinking about the design can lead to you going down an architectural blind alley that causes loads of rework. Finally, time not spent refactoring leaves you with a pile of crap that will be harder to change when you start work on the next story, or even when you have to make changes for this one.
Of course, you couldn't have seen these problems coming – you did your best; these things crop up in software, don't they? But what if you hadn't rushed? What if, rather than diving in, you'd spent another 15 minutes discussing the requirements in detail with the customer? You might have realised earlier that you had it all wrong and completely misunderstood what she wanted. What if you'd spent 30 minutes round a whiteboard with a colleague discussing the design? Maybe he would have pointed out the flaws in your ideas before you started coding. Finally, what if you'd spent a bit more time refactoring? That spaghetti mess you're going to swear about next week and spend days trying to unravel would still be fresh in your mind and easier to untangle. For the sake of an hour's work you could save yourself days.

How much is enough?

Of course, it's all very well to say we will not write crap any more; but it's not a binary distinction, is it? There's a sliding scale from highly polished, only-really-suitable-for-academics perfection to sheer, unmitigated, what-were-you-thinking-you-brain-dead-fuck crapitude. If spending more time discussing requirements could save time later, why not spend years going through a detailed requirements gathering exercise? If spending more time thinking before coding could save time later, why not design the whole thing down to the finest detail before any coding is done? Finally, if refactoring mercilessly will save time, why not spend time refactoring everything to be as simple as possible? Actually, wait, this last one probably is a good idea. Professional software development is always a balancing act. It's a judgement call to know how much time is enough.

How long does it take?

When working through a development task I choose to expend a certain amount of extra effort over and above how long it takes me to type in the code and test it; this extra time is spent discussing requirements, arguing about the design, refactoring the code, etc.
Ideally I want to expend just enough effort to get the job done as quickly as possible (the lazy and impatient developer). If I expend too little effort I risk being delayed later by unnecessary rework; if I expend too much effort I've wasted my time. However, I don't know in advance what that optimum amount of effort is – it's a guess based on experience. But I expect the amount of effort I put in to be within some range of the optimum – the lowest point on the graph.

All other things being equal, I'd expect the amount of time it actually takes to complete the task to be within some margin based on how close I got to the optimum amount of effort. Sometimes I do too little and waste time with rework. Other times I spend too long – e.g. too much detail in the design that didn't add any value – so the extra time I spent is wasted.

This is by no means all the uncertainty in an estimate; but this is the difference between me not doing quite enough and having to pay for it later (refactoring, debugging or plain rework), versus doing just enough at every stage so that no effort is wasted and there's no rework I could have avoided more cheaply. In reality the time taken is likely to be somewhere in the middle: not too great but not too shabby.

There's an interesting exercise here in considering the impact experience has on this. When I was first starting out I'd jump straight into coding; my effort range would be to the left of the graph: I'd spend most of my time re-writing what I'd done, so the actual time taken would be huge. Similarly, as I became more experienced I'd spend ages clarifying requirements, writing detailed designs and refactoring until the code was perfect; now my effort range would be to the right of the graph – I'd expend considerable upfront effort, much of which was unnecessary. Now, I'd like to think, I'm a bit more balanced and try to do just enough. I have no idea how you could measure this, though.
Reducing effort to save time

What happens when we rush? I think when we try to finish tasks quicker, we cut down on the extra effort: we're more likely to accept requirements as-is than challenge them; we're more likely to settle for the first design idea we think of; we're more likely to leave the code poorly refactored. This pulls the effort range on the graph to the left.

To try and get done more quickly, I've reduced the amount of effort by "shooting low": I cut corners and expend less effort than I would have done otherwise. The trouble is, this doesn't make the best case any better – I might still get the amount of effort bang on and spend the minimum amount of time possible on this task. However, because I'm shooting low, there's a danger now that I spend far too little extra effort – the result is that I spend even longer: I have to revisit requirements late in the day, make sweeping design changes to existing code, or waste time debugging or refactoring poorly written code. This is a classic symptom of a team rushing through work: simple mistakes are made that, in hindsight, are obvious – but because we skipped discussing requirements or skipped discussing the design, we never noticed them. When I reduce the amount of extra effort I put in, rather than getting things done quicker, rushing may actually increase the time taken. This is the counter-intuitive world we live in – where aggressive deadlines may actually make things go more slowly. Perhaps instead I should have called this article:

If I had more time I would have been done quicker

Reference: If I had more time I would have written less code from our JCG partner David at the Actively Lazy blog. See also: DesignStaminaHypothesis by Martin Fowler.

Spring MVC Interceptors Example

I thought that it was time to take a look at Spring’s MVC interceptor mechanism, which has been around for a good number of years and is a really useful tool. A Spring interceptor does what it says on the tin: it intercepts an incoming HTTP request before it reaches your Spring MVC controller class, or conversely, intercepts the outgoing HTTP response after it leaves your controller but before it’s fed back to the browser. You may ask what use this is to you? The answer is that it allows you to perform tasks that are common to every request, or set of requests, without the need to cut ‘n’ paste boilerplate code into every controller class. For example, you could perform user authentication of a request before it reaches your controller and, if successful, retrieve some additional user details from a database, adding them to the HttpServletRequest object before your controller is called. Your controller can then simply retrieve and use these values, or leave them for display by the JSP. On the other hand, if the authentication fails, you could redirect your user to a different page.

The demonstration code shows you how to modify the incoming HttpServletRequest object before it reaches your controller. It does nothing more than add a simple string to the request, but, as I said above, you could always make a database call to grab hold of some data that’s required by every request… you could even add some kind of optimization and do some caching at this point.

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.web.servlet.handler.HandlerInterceptorAdapter;

public class RequestInitializeInterceptor extends HandlerInterceptorAdapter {

  // Obtain a suitable logger.
  private static Log logger = LogFactory
      .getLog(RequestInitializeInterceptor.class);

  /**
   * In this case intercept the request BEFORE it reaches the controller
   */
  @Override
  public boolean preHandle(HttpServletRequest request,
      HttpServletResponse response, Object handler) throws Exception {
    try {
      logger.info("Intercepting: " + request.getRequestURI());

      // Do some changes to the incoming request object
      updateRequest(request);

      return true;
    } catch (SystemException e) {
      logger.info("request update failed");
      return false;
    }
  }

  /**
   * The data added to the request would most likely come from a database
   */
  private void updateRequest(HttpServletRequest request) {
    logger.info("Updating request object");
    request.setAttribute("commonData",
        "This string is required in every request");
  }

  /** This could be any exception */
  private class SystemException extends RuntimeException {
    private static final long serialVersionUID = 1L;
    // Blank
  }
}

In the code above, I’ve chosen the simplest implementation method: extending the HandlerInterceptorAdapter class and overriding its preHandle(...) method. My preHandle(...) method does the error handling, deciding what to do if an error occurs and returning false if one does. In returning false, the interceptor chain is broken and your controller class is not called. The actual business of messing with the request object is delegated to updateRequest(request). The HandlerInterceptorAdapter class has three methods, each of which is stubbed and, if desired, can be ignored. The methods are preHandle(...), postHandle(...) and afterCompletion(...), and more information on these can be found in the Spring API documentation. Be aware that this can be somewhat confusing, as the handler interceptor documentation still refers to MVC controller classes by their Spring 2 name of handlers. This point is easily demonstrated if you look at preHandle(...)’s third parameter, of type Object and called handler. 
If you examine this in your debugger, you’ll see that it is an instance of your controller class. If you’re new to this technique, just remember that controller == handler. The next step in implementing an interceptor is, as always, to add something to the Spring XML config file:

<!-- Configures Handler Interceptors -->
<mvc:interceptors>
  <!-- This bit of XML will intercept all URLs - which is what you want in a web app -->
  <bean class="marin.interceptor.RequestInitializeInterceptor" />
  <!-- This bit of XML will apply certain URLs to certain interceptors -->
  <!--
  <mvc:interceptor>
    <mvc:mapping path="/gb/shop/**"/>
    <bean class="marin.interceptor.RequestInitializeInterceptor" />
  </mvc:interceptor>
  -->
</mvc:interceptors>

The XML above demonstrates an either/or choice: adding an interceptor to all request URLs or, if you look at the commented-out section, adding an interceptor to specific request URLs, allowing you to choose which URLs are connected to your interceptor class.

Eagle-eyed readers may have noticed that the interceptor classes use inheritance and XML config as their method of implementation. In these days of convention over configuration, this pattern is beginning to look a little jaded and could probably do with a good overhaul. One suggestion would be to enhance the whole lot to use annotations, applying the same techniques that have already been added to the controller mechanism. This would add extra flexibility without the complication of using all the interfaces and abstract base classes. As a suggestion, a future interceptor implementation could look something like this:

@Intercept(value = "/gb/en/*", method = RequestMethod.POST)
public boolean myAuthenticationHandler(HttpServletRequest request, Model model) {
  // Put some code here
}

That concludes this look at Spring interceptors; it should be remembered that I’ve only demonstrated the most basic implementation. 
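Spring aside, the chain-breaking contract described above is easy to demonstrate with a framework-free sketch. The Interceptor interface and ChainDemo class below are my own stand-ins, not Spring types: each interceptor may enrich the "request" before the handler runs, and a single false return stops the chain so the handler is never called.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stand-in for the HandlerInterceptor contract.
interface Interceptor {
    boolean preHandle(Map<String, Object> request);
}

public class ChainDemo {

    // Runs each interceptor in order; a single 'false' breaks the chain
    // and the handler (controller) is never invoked.
    static boolean process(List<Interceptor> chain, Map<String, Object> request) {
        for (Interceptor interceptor : chain) {
            if (!interceptor.preHandle(request)) {
                return false;
            }
        }
        return true; // all interceptors passed; the handler would run now
    }

    public static void main(String[] args) {
        List<Interceptor> chain = new ArrayList<>();
        // Mirrors updateRequest(): add data every handler can rely on.
        chain.add(request -> {
            request.put("commonData", "added by interceptor");
            return true;
        });

        Map<String, Object> request = new HashMap<>();
        boolean handlerCalled = process(chain, request);
        System.out.println(handlerCalled + " -> " + request.get("commonData"));
        // prints: true -> added by interceptor
    }
}
```

An authentication interceptor in this model would simply return false for a request missing credentials, which is exactly how returning false from preHandle(...) short-circuits a real Spring controller.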
Reference: Using Spring Interceptors in your MVC Webapp from our JCG partner Roger Hughes at the Captain Debug’s blog. Related Articles :jqGrid, REST, AJAX and Spring MVC Integration SpringMVC 3 Tiles 2.2.2 Integration Tutorial Spring MVC3 Hibernate CRUD Sample Application Spring MVC Development – Quick Tutorial Spring, Quartz and JavaMail Integration Tutorial Spring Insight – Web Application Profiling  Java Tutorials and Android Tutorials list...

Java EE Past, Present, & Cloud 7

One prominent topic of the recent JavaOne 2011 was the next major Java EE 7 release. As stated in the keynotes, the work on it is well underway. It will contain the 28 specifications we already know from its forerunner plus a number of new specs. Nobody can tell you the exact number at the moment because EE 7 will only accept the new specifications if they finish “in time”. This means the planned release date for EE 7 (Q3 2012 Final Release) is an overriding goal which sets scope and pace. Candidates for inclusion include JCache 1.0 (JSR 107), Concurrency Utilities 1.0 (JSR 236), State Management 1.0, Batch Processing 1.0 and JSON 1.0. This ambitious goal is only one thing that makes me wonder. But let’s start at the beginning.

New Java EE 7 spec lead Linda DeMichiel detailed the general approach in her keynote part. The big focus with Java EE 7 is getting Java apps into the cloud. With the steps taken from J2EE to Java EE, the general service approach was integrated into the platform, meaning that developers are able to use services and have a declarative way of consuming them. Beginning with Java EE 7, the platform itself should become a service, in the sense of providing sufficient ways of enabling PaaS (Platform as a Service) with Java EE application servers – basically to enable customers and users of EE to leverage the complete range of clouds (public, private and hybrid). This should be reached by adding new platform roles, metadata and APIs which support the needed requirements like multi-tenancy, elasticity and scalability. Besides the new kids on the block, the proven specs also need a bunch of updates to support these requirements. Looking at the bullet-point topics that are already there for the 9 “work-in-progress” specifications should shed a bit more light on how the “cloud goal” is to be achieved.

JPA 2.1 (JSR 338)

The first spec to include new features is JPA 2.1. 
The new features can be described with the following short list:
- Multi-Tenancy (table discriminator)
- Stored Procedures
- Custom types and transformation methods
- Query by Example
- Dynamic PU Definition
- Schema Generation (additional mapping metadata to provide better standardization)

JMS 2.0 (JSR 343)

This could be considered the most mature spec in general. It had a long 9 years to go since its last maintenance release (April 2002).
- Modest scope
- Ease of development
- Pluggable JMS provider
- Extensions to support “Cloud”

EJB 3.2 (JSR 345)

The goal of Enterprise JavaBeans 3.2 is to consolidate these advances and to continue to simplify the EJB architecture, as well as to provide support for the Java EE platform-wide goal of further enabling cloud computing. The scope of EJB 3.2 is intended to be relatively constrained, focusing on these goals.
- Incremental factorization (Interceptors)
- Further use of annotations to simplify the EJB programming model
- Proposed optional: BMP/CMP
- Proposed optional: Web Services invocation using RPC

CDI 1.1 (JSR 346)

Since the final release of the CDI 1.0 specification a number of issues have been identified by the community, and an update to the specification will allow these to be addressed. A list of proposed updates is provided here; however, the EG will consider other issues raised as the JSR progresses.
- Embedded mode
- Lifecycle events
- Declarative package scanning
- Global ordering of interceptors and decorators
- Injection of static variables

Servlet 3.1 (JSR 340)

In developing the servlet specification 3.1, the EG will take into consideration any requirements from the platform to optimize the Platform as a Service (PaaS) model for web applications. Besides this, the following areas should be addressed. 
- Cloud support
- NIO.2 async I/O
- Leverage Java EE concurrency
- Security improvements
- Web Sockets support
- Ease of development

JSF 2.2 (JSR 344)

The new JSF JSR will be a significant feature update that builds on the advances of the previous JavaServer Faces versions.
- Ease of development
- HTML 5 support (forms, headings, metadata)
- New components
- Portlet integration

JAX-RS 2.0 (JSR 339)

JAX-RS addresses the most requested community features. To name a few:
- Client API
- Hypermedia
- The primary API utilized for validation will be the Bean Validation API
- Ease of development

Expression Language 3.0 (JSR 341)

The Expression Language (EL) has been part of the JSP specification since JSP 2.0. In Java EE 7 this will become a separate JSR.
- Standalone JSR
- Easier to use outside a container
- Criteria-based collection selection
- New operators
- CDI events for expression evaluation

Bean Validation 1.1 (JSR 349)

Being a version 1.0, Bean Validation stayed on the conservative side feature-wise. The community has expressed interest in additional features to enhance the work done in the first version of the specification.
- Integration with other JSRs (JAX-RS, JAXB, JPA, CDI, EJB, JSF)
- Method-level validation
- Constraint composition

Cloud? Is that Rain?

Looking at the proposals, it’s clear that some of them have room for cloud enabling. Some don’t care at all. Searching for the cloud stuff has turned up very little so far. Let’s look at the umbrella JSR 342. The official pages are public and can be found at http://java.net/projects/javaee-spec/. Very interesting is the Java EE 7 Platform and Support for the PaaS Model document (PDF), which describes the overall architecture for PaaS support in Java EE 7 and which, according to the comments, is largely agreed to by the expert group. 
It summarizes the needed roles (PaaS Product Vendor, PaaS Provider, PaaS Account Manager, PaaS Customer, Application Submitter, Application Administrator, End-user) and gives a couple of example scenarios in which they act in a PaaS environment. Further on you find some definitions and terms:

PaaS Application: “A discrete software artifact containing domain-specific code that can be uploaded to and deployed on the PaaS environment by a PaaS Customer. The artifact may consume PaaS resources and be distributed across multiple JVM instances according to QoS settings and/or an SLA. Depending on its terms of use, a PaaS application may subsequently be deployed on the PaaS environment by potentially any number of other PaaS Customers.”

Tenant: “Since in the model described here a PaaS Customer corresponds to an isolation domain, we will use the term “Tenant” to avoid misunderstandings with other uses of the word “customer” in the business context.”

Application Developer: “We will use term Application Developer to denote an application developer in the common sense. In the traditional Java EE terminology, this role is split between Application Component Provider and Application Assembler.”

Additionally you find the mandatory statement about “protecting” investments: It is a goal of Java EE 7 to add to the platform support for the PaaS model as well as a limited form of the SaaS model while preserving as much as possible the established Java EE programming model and the considerable investments made by customers, vendors, and system integrators into the Java EE ecosystem. (Source: The Java EE 7 Platform and Support for the PaaS Model)

Looking at the QCon London slides by Jerome Dochez, you quickly notice that there is a lot more stuff to take care of than what the expert group covers with the public documentation:
- Better packaging for the cloud (modular applications)
- Versioning
- Deployment model
- SLA monitoring
- Billing

And I am sure you could come up with even more. 
The GlassFish / Java EE Strategy & Roadmap (PDF) presented by Adam Leftik & John Clingan at Oracle OpenWorld paints a more detailed picture of the future by looking at:
- Dynamic service provisioning
- IaaS management
- Elasticity using auto-scaling
- Monitoring
- Hypervisor abstraction

Until now this seems like the most complete and concrete approach to Java EE in the cloud. Seeing the GlassFish team doing a presentation with the latest GF 4.0 release candidate, you can imagine how far the work has come. (Even if I assume that there is still plenty of work to do :))

No Rain but it will be cloudy for some time

A lot is evolving at the moment. This is what was expected with a change of direction in a mature specification. Looking at the new Oracle cloud offering and the continuing cutting-edge work the GlassFish team is doing, I believe that the ambitious goal could be met, because we have enough business value behind it. What I fear is that single specifications could deny the inclusion of “needed” cloud stuff in favor of bug fixes or community requests. This is the first time the umbrella specification is evolving in a completely different direction than its contained children. On the other side, as we know from the past, the umbrella itself is a comparatively small specification which specifies at a very general level of detail. This could open up opportunities for vendors in general. Let me add another point here: I strongly believe that Java EE 7 will be the biggest challenge for the spec lead in ages. Following the overall “cloud” theme without distracting or deprioritizing the single contained specifications will be a very political job in general. Even if Linda DeMichiel is a Java EE veteran, I believe that a lot of work is waiting here.

Summer 2013 vs Q3 2012 Final Release – missed opportunities

The really big issue I have with the timeline is the fact that we don’t have a chance to get a real modular approach for application packaging. 
Whatever will be designed in terms of packaging (and related stuff like versioning, SLAs, and more) for the cloud will not be able to take advantage of the new Project Jigsaw features coming with Java SE 8. I personally consider this a major requirement for cloud-enabled Java EE PaaS infrastructures. If the new cloud metadata is built upon the Java EE 6 packaging specification, it is a missed opportunity to adopt the latest and greatest in Java modularization. I am very curious to see how the EG will work around this issue without having to rework everything with Java EE 8 again.

Reference: Java EE Past, Present, & Cloud 7 from our JCG partner Markus Eisele at the “Enterprise Software Development with Java” Blog. Related Articles :Developing and Testing in the Cloud Configuration Management in Java EE Leaked: Oracle WebLogic Server 12g Java EE6 Decorators: Decorating classes at injection time Java Tutorials and Android Tutorials list...

Java Twitter client with Twitter4j

I got inspired by my friend, who built his own Twitter client for his company’s client. So I tried to build one using Java, and I’m surprised at how easy it is to make a Twitter client – of course, I still use a third-party API to make my job easier. This simple client will have only two purposes: reading the timeline and posting a status. Don’t worry, you can expand this application later; it’s simple and easy once your app has been authorized by Twitter.

First of all, you have to go to the official Twitter developer registration at https://dev.twitter.com/apps/new and register your application’s details there. For this blog, I will create a new application called “Namex Tweet for Demo“. It’s simple, just fill in some required data and voila, it’s done in seconds. After you pass this registration step, don’t forget that the most important things here are the Consumer and Consumer Secret keys. In short, they are a signature to let Twitter know your application. These things will be hardcoded in your application. Here, my Consumer key is DXjHgk9BHPmekJ2r7OnDg and my Consumer Secret key is u36Xuak99M9tf9Jfms8syFjf1k2LLH9XKJTrAbftE0. Don’t use these keys in your application; it’s useless because I will turn off the application as soon as this blog post is done.

After the registration step, don’t forget to visit the Settings page and adjust the settings for your application’s access. Choose “Read, Write and Access direct messages” to give your application the full functionality.

You can now download the additional Java API for Twitter; I’m using Twitter4J. Here, you have to download several jars:
- twitter4j-async-a.b.c
- twitter4j-core-a.b.c
- twitter4j-media-support-a.b.c
- twitter4j-stream-a.b.c

Note: Don’t use the twitter4j-appengine.jar; it will cause your application to throw an exception during the authorizing process. In my example a is 2, b is 2 and c is 4, so it would look like twitter4j-async-2.2.4 etc. 
After these jars are downloaded to your machine, our downloading job still isn’t done: we also have to download the Apache Commons Codec as a Twitter4J dependency. Once all of the jars are downloaded, we can start to code. Open your fave IDE and start with me.

package com.namex.tweet;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

import twitter4j.Twitter;
import twitter4j.TwitterException;
import twitter4j.TwitterFactory;
import twitter4j.auth.AccessToken;
import twitter4j.auth.RequestToken;

public class NamexTweet {
    private final static String CONSUMER_KEY = "DXjHgk9BHPmekJ2r7OnDg";
    private final static String CONSUMER_KEY_SECRET = "u36Xuak99M9tf9Jfms8syFjf1k2LLH9XKJTrAbftE0";

    public void start() throws TwitterException, IOException {
        Twitter twitter = new TwitterFactory().getInstance();
        twitter.setOAuthConsumer(CONSUMER_KEY, CONSUMER_KEY_SECRET);
        RequestToken requestToken = twitter.getOAuthRequestToken();
        System.out.println("Authorization URL: \n"
                + requestToken.getAuthorizationURL());

        AccessToken accessToken = null;

        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        while (null == accessToken) {
            try {
                System.out.print("Input PIN here: ");
                String pin = br.readLine();

                accessToken = twitter.getOAuthAccessToken(requestToken, pin);
            } catch (TwitterException te) {
                System.out.println("Failed to get access token, caused by: "
                        + te.getMessage());
                System.out.println("Retry input PIN");
            }
        }

        System.out.println("Access Token: " + accessToken.getToken());
        System.out.println("Access Token Secret: " + accessToken.getTokenSecret());

        twitter.updateStatus("hi.. im updating this using Namex Tweet for Demo");
    }

    public static void main(String[] args) throws Exception {
        new NamexTweet().start(); // run the Twitter client
    }
}

Compile and run the code; it will request permission for “Namex Tweet for Demo” to be linked with your Twitter account. 
Just open the “Authorization URL” shown on the screen and input the PIN shown by the website. Your application will send the PIN back to Twitter; if it matches, your account will be linked with this new application, and you can see you just posted a new status using “Namex Tweet for Demo“. Congratulations!

Note: The Authorization URL and PIN are generated differently each time the app is run.

Here you can see that we never give the application the username and password of the Twitter account, yet we can use our account within it. It’s possible because of OAuth, which “transforms” the password-input process into sending and receiving tokens. So don’t worry: a third-party Twitter client application can’t read or store your Twitter account’s password. It’s simple, it’s safer and it prevents password theft.

Now we still have a tiny problem: at this point, your program still needs to open Twitter’s website and give the PIN back to the application. So maybe you are asking: do I need this annoying authorization step in the future? Well, gladly the answer is NO. Once your app has been authorized by Twitter, you don’t need to re-authorize it again – with the simple note that you have to save the Access Token and Secret Access Token. What are they, and how could I get them? Well, you have them already: they were printed to the console after the authorization succeeded. These two tokens have to be saved somewhere, and you can choose your own method to save them: persistence, CSV, DBMS, etc. It’s all up to you.

So, I saved the tokens! How do I reuse them? It’s simple – see the code below, which shows how to use your tokens so you won’t have to go through the authorization process again. Try to post and read your timeline now. 
package com.namex.tweet;

import java.io.IOException;

import twitter4j.ResponseList;
import twitter4j.Status;
import twitter4j.Twitter;
import twitter4j.TwitterException;
import twitter4j.TwitterFactory;
import twitter4j.auth.AccessToken;

public class NamexTweet {
    private final static String CONSUMER_KEY = "DXjHgk9BHPmekJ2r7OnDg";
    private final static String CONSUMER_KEY_SECRET = "u36Xuak99M9tf9Jfms8syFjf1k2LLH9XKJTrAbftE0";

    public void start() throws TwitterException, IOException {
        Twitter twitter = new TwitterFactory().getInstance();
        twitter.setOAuthConsumer(CONSUMER_KEY, CONSUMER_KEY_SECRET);

        // here's the difference
        String accessToken = getSavedAccessToken();
        String accessTokenSecret = getSavedAccessTokenSecret();
        AccessToken oathAccessToken = new AccessToken(accessToken,
                accessTokenSecret);
        twitter.setOAuthAccessToken(oathAccessToken);
        // end of difference

        twitter.updateStatus("Hi, im updating status again from Namex Tweet for Demo");

        System.out.println("\nMy Timeline:");

        // I'm reading your timeline
        ResponseList<Status> list = twitter.getHomeTimeline();
        for (Status each : list) {
            System.out.println("Sent by: @" + each.getUser().getScreenName()
                    + " - " + each.getUser().getName() + "\n" + each.getText()
                    + "\n");
        }
    }

    private String getSavedAccessTokenSecret() {
        // consider this a method to get your previously saved Access Token
        // Secret
        return "oC8tImRFL6i8TuRkTEaIcWsF8oY4SL5iTGNkG9O0Q";
    }

    private String getSavedAccessToken() {
        // consider this a method to get your previously saved Access Token
        return "102333999-M4W1Jtp8y8QY8RH7OxGWbM5Len5xOeeTUuG7QfcY";
    }

    public static void main(String[] args) throws Exception {
        new NamexTweet().start();
    }
}

Now our simple Twitter application is (or could be) done; we can read from and post to Twitter. Of course, many things are still on the task list if you want to make it professionally and perhaps sell it: a nice UI, reading and sending direct messages, searching users, following and unfollowing. 
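The getSavedAccessToken()/getSavedAccessTokenSecret() stubs above just return hard-coded strings. As one minimal way to actually persist the token pair, a java.util.Properties file is enough; the TokenStore class, file name and property keys below are my own invention, not part of Twitter4J:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Properties;

// Hypothetical helper: saves and loads the OAuth token pair so the
// PIN-based authorization only ever has to happen once.
public class TokenStore {

    private final File file;

    public TokenStore(File file) {
        this.file = file;
    }

    public void save(String accessToken, String accessTokenSecret) throws IOException {
        Properties props = new Properties();
        props.setProperty("oauth.accessToken", accessToken);
        props.setProperty("oauth.accessTokenSecret", accessTokenSecret);
        try (OutputStream out = new FileOutputStream(file)) {
            props.store(out, "Twitter OAuth tokens - keep this file private");
        }
    }

    // Returns {accessToken, accessTokenSecret}, or null if nothing was saved yet.
    public String[] load() throws IOException {
        if (!file.exists()) {
            return null;
        }
        Properties props = new Properties();
        try (InputStream in = new FileInputStream(file)) {
            props.load(in);
        }
        return new String[] {
            props.getProperty("oauth.accessToken"),
            props.getProperty("oauth.accessTokenSecret")
        };
    }

    public static void main(String[] args) throws IOException {
        TokenStore store = new TokenStore(new File("twitter-tokens.properties"));
        store.save("my-access-token", "my-access-token-secret");
        String[] tokens = store.load();
        System.out.println(tokens[0] + " / " + tokens[1]);
        // prints: my-access-token / my-access-token-secret
    }
}
```

The getSavedAccessToken() method could then read from such a file instead of returning a literal; the same idea carries over to a database or any other store you prefer.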
I leave those jobs on your shoulders, because I just wanted to share that it’s easy to make a Twitter client, and I hope this short tutorial can help you in developing a Twitter client using Java. Reference: How Easy to Make Your Own Twitter Client Using Java from our JCG partner Ronald at Naming Exception Blog. Related Articles :Getting Started with YouTube Java API Xuggler Development Tutorials Simple Twitter: Play Framework, AJAX, CRUD on Heroku Build Twitter with Grails in 90 Minutes: The Gist of it Java Tutorials and Android Tutorials list...

Leaked: Oracle WebLogic Server 12g

JavaOne is nearly one week behind us already and I am still working on the detailed blog posts about it. One thing that really surprised me is that I didn’t see a single mention of an update to my favorite application server out there. Yes, I love the WebLogic product. Since the beginning. Even if Oracle has been making this a hard love for me since the acquisition of BEA. Especially the late adoption path for Java EE 6 was a point that forced me to look into other servers. Thank God we have GlassFish as the reference implementation. Sticking to WebLogic with all the little Java EE 6 preview stuff in it wouldn’t be satisfying at all.

WebLogic at OpenWorld and JavaOne

Have you seen it? You probably have. But the server versions running on the laptops in the DemoGrounds were some reasonably recent versions of the 10.3.x.x line, also referred to as 11g. And this is still the old Java EE 5 version of it. Even the latest bug-fix release was shipped way back in May. So I was really looking forward to seeing a 12g somewhere, even if it was under the table or in a dark room in the back. Nothing like this happened. At least not officially. But something else hit my inbox a few days back: a little demo video, or better, a screen-cast. Have a look and tell me what you think:

In 30 Seconds – WLS Web Project with NetBeans

The video starts with a normal Java EE 6 Web Profile based web project with CDI enabled.

Servlet without web.xml

After the project is created, you see (someone) creating a WebServlet called DemoServlet with mappings to different URL patterns. The package names indicate that this was thought of as an Oracle OpenWorld presentation. The WLS is obviously running on a 64-bit HotSpot Java SE build (1.6.0_26). Even if the WLS startup time seems comparable to what I see with 11g today, I wouldn’t bet on this being the case for a final GA release. Running a browser against the app shows the servlet responding accordingly. 
Changes to the servlet code seem to be hot-deployed to the server, so no surprises here.

Context and Dependency Injection

After 3 minutes and 20 seconds, a simple POJO is created with a simple getter returning a smiley. This little smiley is injected into the servlet. Here we go: CDI rocks! Further on, the Smiley interface is introduced and we get a couple of different flavours of smileys injected via custom qualifiers. An exception reveals that JBoss Weld is in use; no further details about the version. Around minute 7 a javax.enterprise.event.Event is introduced, which is a simple String event getting fired by servlet access. A corresponding EventCapcha class receives the events via an @Observes method, adds them to a list and prints them to out. The EventCapcha class is exposed as a session-scoped bean and gives access to the events list.

JSF 2.0 with Facelets

OK, this is not new. We have had JSF 2.0 with WLS for a while. Now we have the full power of CDI integration. Around minute 11 a simple JSF template with an h:dataTable component is created, which shows the fired events.

Conclusion

The video ends at exactly 13 minutes and doesn’t show any of the other Java EE 6 goodness. But it’s by far more than we had seen until now about Java EE 6 running on WLS, and quite impressive to see WLS moving again. Don’t ask me about anything beyond this post. I DON’T KNOW about timelines, I DON’T KNOW about versions (the ones stated here were taken from the screen-cast) and I especially DON’T KNOW when you will finally be able to test-drive this new WebLogic 12g on your own. What I do know is that I am surprised to see this “leaked” well after OpenWorld. It seems as if it simply doesn’t fit into whatever strategy decisions have been made. I loved watching this little screen-cast and I am badly looking forward to having this in my own hands and giving it a test-drive with all the stuff we have created with GlassFish until now. 
Reference: Leaked: Oracle WebLogic Server 12g from our JCG partner Markus Eisele at the “Enterprise Software Development with Java” Blog. Related Articles :GlassFish Response GZIP Compression in Production JBoss AS 7.0.2 “Arc” released – Playing with bind options JBoss 4.2.x Spring 3 JPA Hibernate Tutorial Java EE6 Decorators: Decorating classes at injection time Java Tutorials and Android Tutorials list...

Integrating Maven with Ivy

The problem: you have some resources in an Ivy repository (and only there) which you would like to use in a project based on Maven. Possible solutions:

- Migrate the repository to Maven (Nexus for example), since Ivy can easily use Maven-style repositories (so your Ivy clients can continue to use Ivy with some slight configuration changes and Maven clients will also work – but the push-to-repo process needs to be changed)
- Try JFrog Artifactory, since it reportedly can serve the same resources to both Ivy and Maven (disclaimer: I haven’t actually tried it and I don’t know if the Open Source version includes this feature or not)
- or read on…

My goals for the solution (as complex as it may be) were:

- It should be as simple and self-explanatory as possible
- It should respect the DRY principle (Don’t Repeat Yourself)
- It shouldn’t have other dependencies than Maven itself

The solution looks like the following (for the full source check out the code-repo): have two Maven profiles: ivy-dependencies activates when the dependencies have already been downloaded, and ivy-resolve when they are yet to be downloaded. This is based on checking the directory where the dependencies are ultimately to be copied:

...
<id>ivy-dependencies</id>
<activation>
  <activeByDefault>false</activeByDefault>
  <file>
    <exists>${basedir}/ivy-lib</exists>
  </file>
</activation>
...
<id>ivy-resolve</id>
<activation>
  <activeByDefault>false</activeByDefault>
  <file>
    <missing>${basedir}/ivy-lib</missing>
  </file>
</activation>
...

Unfortunately there is a small repetition here, since Maven doesn’t seem to expand user-defined properties like ${ivy.target.lib.dir} in the profile activation section. The profiles also serve another role: to avoid considering the dependencies until they are actually resolved. 
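The full wiring lives in the code-repo; as a rough, hypothetical sketch of how the hand-off to Ant might look (the plugin version, phase and file locations below are my assumptions, not taken from the original), the ivy-resolve profile could invoke the generated build.xml via the maven-antrun-plugin:

```xml
<profile>
  <id>ivy-resolve</id>
  <!-- activation block as shown above -->
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-antrun-plugin</artifactId>
        <executions>
          <execution>
            <id>run-ivy-resolve</id>
            <phase>generate-resources</phase>
            <goals>
              <goal>run</goal>
            </goals>
            <configuration>
              <target>
                <!-- run the Ivy resolve via the build.xml the profile wrote out -->
                <ant antfile="${ivy.target.lib.dir}/build.xml" target="resolve"/>
              </target>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>
```

Because the whole plugin block sits inside the ivy-resolve profile, it simply disappears from the effective POM once the ivy-lib directory exists and ivy-dependencies takes over.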
When the build is first run, it creates the target directory, writes the files needed for an Ivy build there (ivy.xml, ivysettings.xml and build.xml – for this example I’ve used some parts from the corresponding files of the Red5 repo), runs the build and tries to clean up after itself. It also creates a dependencies.txt file containing the chunk of text which needs to be added to the dependencies list. Finally, it bails out (fails), instructing the user to run the command again. On the second (third, fourth, etc.) run the dependencies will already be present, so the resolution process won’t be run repeatedly. This approach was chosen instead of running the resolution at every build because – even though the resolution process is quite quick – it can take tens of seconds in some more complicated cases and I didn’t want to slow the build down. Also, Ivy, the Apache BSF framework, etc. are fetched from the Maven central repository, so they need not be preinstalled for the build to complete successfully.

A couple of words about choosing ${ivy.target.lib.dir}: if you choose it inside your Maven tree (like it was chosen in the example), you will receive warnings from Maven that this might not be supported in the future. Also, be sure to add the directory to the ignore mechanism of your VCS (.gitignore, .hgignore, .cvsignore, svn:ignore, etc.), so as to avoid accidentally committing the libraries to the VCS. 
If you need to add a new (Ivy) dependency to the project, the steps are as follows:Delete the current ${ivy.target.lib.dir} directory Update the part of your pom.xml which writes out the ivy.xml file to include the new dependency Run a build and watch the new dependency being resolved Update the dependencies section of the ivy-dependencies profile to include the new dependency (possibly copying it from dependencies.txt)One drawback of this method is that advanced features of Maven-based tooling will not work with these dependencies (for example dependency analysis / graphing plugins, automated downloading of sources / javadocs, etc.). A possible workaround (and a good idea in general) is to use this method only for the minimal subset – just the jars which can’t be found in Maven central. All the rest (even if they are actually dependencies of the code fetched through Ivy) should be declared as normal dependencies, to be fetched from the Maven repository. Finally, I would like to say that this endeavour once again showed me how flexible both Maven and Ivy/Ant can be, and it clarified many corner cases (like how to escape ]] inside a CDATA section – you split it in two). The setup can also be further tweaked: for example, adding a clean target to the ivy-resolve profile so you can remove the directory with mvn clean -P ivy-resolve, or re-jar-ing all the downloaded jars into a single one (for example like this), thus avoiding the need to modify the pom file every time the list of Ivy dependencies changes – then again, signed JARs can’t be re-jarred, so it is not a universal solution either. Reference: Integrating Maven with Ivy from our JCG partners at Transylvania Java User Group.
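The mvn clean -P ivy-resolve tweak mentioned above could be sketched with the maven-clean-plugin; the plugin version is illustrative and the configuration assumes the jars live under ${basedir}/ivy-lib:

```xml
<!-- Inside the ivy-resolve profile: make `mvn clean -P ivy-resolve`
     delete the downloaded jars. Version number is illustrative. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-clean-plugin</artifactId>
  <version>2.5</version>
  <configuration>
    <filesets>
      <fileset>
        <directory>${basedir}/ivy-lib</directory>
      </fileset>
    </filesets>
  </configuration>
</plugin>
```

Explicitly selecting the profile with -P works here even though its file-based activation condition (the directory being missing) is not met.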
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy