IDE Project Files In Version Control – Yes or No? Of Course, Not!

Just recently I had some discussions with clients who were claiming that they keep IDE project files in version control so that nobody has to change those files. For reference, those are the Eclipse-generated .project and .classpath files. From my point of view this is a bad practice by all means, but I usually prefer to collect some information on a topic before I say so out loud. So I asked my G+ and Twitter followers: IDE project files in version control – yes or no? Surprisingly, I got quite a number of responses, so I decided to summarize them as a blog post.

Basically, the answer to this question isn't binary, yes or no; it could also be 'yes, but…' or 'no, if…'. So there might be some argument why someone prefers one way or the other. The vast majority of the answers were 'No, no, noooooooo!! Never!', and just a couple were 'yes', with some weak arguments for why someone should keep those files in version control.

Here are some of the 'yes' answers first. Bad stuff. It seems the team isn't really competent with its tools. The project should be easy to set up – why on earth should I press a zillion buttons to set up a project? If this is the case, the first thing a team lead should do is simplify the project structure, setup and build process. No excuses there. Yeah, yeah, I can already hear arguments like 'but we have a complex project' or 'this is the way our setup is done'. Bullsh*t!

OK, there is some optimization for the setup, IDE-specific though. If all the team members use the same setup, it might even make sense for those who want to keep their hands off the console. Here's another one: 'specific Eclipse plugins' – yeah! That is actually an argument for NOT keeping the files in version control! Eclipse plugins usually modify .project files as they add a 'nature' or other project-specific settings to the configuration.
And actually, IntelliJ does the same, but (let me bash the tools a bit) IntelliJ can suggest the settings as you open a project from scratch, while with Eclipse you have to do that manually. What if your colleague uses some awesome Eclipse plugin that modifies the .project file, you hate that plugin and do not want to install it, and the .project file is in version control? Here's just a simple example of what you could see while importing a project with existing project files: it is so annoying to resolve this kind of problem when all you want to do is open the project and proceed with your normal work.

Here's more: currently I use IntelliJ and the rest of the team uses Eclipse, and I absolutely do not care whether they put the project files into version control, because I will import the project into my IDE in two clicks anyway. So the assumption 'absolutely yes, given that the team is working with the same IDE' is wrong – no viable argument here either.

However, here we have some much more interesting ideas. Wow, this is a really interesting one – 'those that define formatting of source code'. Indeed! When you work on various projects for different clients, the requirements and code styles might be quite different, and it makes sense to keep this kind of file in version control, so you can share it with the team and restore the code style settings if you lose them. Good point!

However, you can probably see the direction of my post by now. The point is, IDEs support Maven quite well, so why on earth would I need to keep the IDE settings in version control if we already have what we need: pom.xml. IntelliJ and NetBeans cope with that quite well, and so does Eclipse if you use JBoss Tools. But what if I'm a Maven hater? (I really am.) Here's an interesting conversation: oh sure, Gradle it is! Well, the IDE support isn't there yet.
Luckily, STS provides a nice Gradle plugin for Eclipse, but the support in IntelliJ and NetBeans isn't quite there yet. However, my claim is that Maven or Gradle isn't a prerequisite for keeping project files out of version control. The real prerequisite is a simple project setup and a clean structure, so that importing the project takes just a couple of clicks. Then you can cope with any kind of project, even one that doesn't contain a pom.xml. And here's what most of the respondents say about the topic: I have tried both ways myself – keeping the project files in version control and not checking them in – and my take is that under no circumstances is it a good idea to check the project files in. Again, a much better solution is to keep your project structure clean and simple, which might be harder than checking the files into version control, but is much more beneficial in the long run. Thanks everyone for the input! Take care… Reference: IDE Project Files In Version Control – Yes or No? Of Course, Not! from our JCG partner Anton Arhipov at the Code Impossible blog....
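If you do keep IDE files out of version control, the usual mechanism is the VCS ignore list. A minimal sketch for Git follows – the exact set of entries depends on your IDE and plugins, so treat this as an illustration rather than a complete list:

```gitignore
# Eclipse project metadata
.project
.classpath
.settings/

# IntelliJ IDEA project metadata
.idea/
*.iml
```

Shared code-style files that the team deliberately wants in version control (as discussed above) can then be force-added or excluded from these patterns explicitly.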

Schema Creation Script With Hibernate 4, JPA And Maven

The scenario is trivial – you want to generate a database schema creation script while building your application (and then execute the script on the target database). This was relatively easy with Hibernate 3, as there was the hibernate3-maven-plugin, but that plugin is not compatible with Hibernate 4. And for every new project you should start with Hibernate 4, of course. So what to do? It's relatively simple, but takes some time to research and test. The idea is to use the SchemaExport tool. But it's a bit tricky, because it only supports native Hibernate configuration and not JPA. First, you create a command-line application that handles the export. Note that Ejb3Configuration is deprecated, but it is deprecated for external use – Hibernate uses it internally quite a lot. So it is a properly working class:

@SuppressWarnings("deprecation")
public class JpaSchemaExport {

    public static void main(String[] args) throws IOException {
        execute(args[0], args[1],
                Boolean.parseBoolean(args[2]), Boolean.parseBoolean(args[3]));
    }

    public static void execute(String persistenceUnitName, String destination,
            boolean create, boolean format) {
        System.out.println("Starting schema export");
        Ejb3Configuration cfg =
                new Ejb3Configuration().configure(persistenceUnitName, new Properties());
        Configuration hbmcfg = cfg.getHibernateConfiguration();
        SchemaExport schemaExport = new SchemaExport(hbmcfg);
        schemaExport.setOutputFile(destination);
        schemaExport.setFormat(format);
        schemaExport.execute(true, false, false, create);
        System.out.println("Schema exported to " + destination);
    }
}

Note that we are not directly deploying the script to the target database (the second argument to .execute is false). This is because we don't have our database connection properties in persistence.xml – they are external. Deploying the schema file is done later in the Maven build, but that is beyond the scope of this post. Then we just have to invoke this class from the Maven build.
I initially tried creating an Ant task and running it with the antrun plugin, but that has classpath and classloader problems (it doesn't find the entities and persistence.xml). That's why I used the exec-maven-plugin, which invokes the application in the same JVM the build is running in:

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>exec-maven-plugin</artifactId>
    <version>1.1</version>
    <executions>
        <execution>
            <phase>${sql.generation.phase}</phase> <!-- this is process-classes in our case currently -->
            <goals>
                <goal>java</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <mainClass>com.yourcompany.util.JpaSchemaExport</mainClass>
        <arguments>
            <argument>core</argument>
            <argument>${project.build.directory}/classes/schema.sql</argument>
            <argument>true</argument>
            <argument>true</argument>
        </arguments>
    </configuration>
</plugin>

Then you can use the sql-maven-plugin to deploy the schema.sql file to the target database (you will need to have the externalized DB properties loaded by Maven, which is done by the properties-maven-plugin). Reference: How To Generate A Schema Creation Script With Hibernate 4, JPA And Maven from our JCG partner Bozhidar Bozhanov at Bozho's tech blog....
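The post stops short of the deployment step. A hedged sketch of what the sql-maven-plugin configuration might look like – the plugin version, the chosen phase and the property names (${db.url} and friends) are my assumptions, not taken from the original post:

```xml
<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>sql-maven-plugin</artifactId>
    <version>1.5</version>
    <configuration>
        <driver>${db.driver}</driver>
        <url>${db.url}</url>
        <username>${db.username}</username>
        <password>${db.password}</password>
    </configuration>
    <executions>
        <execution>
            <!-- run after the schema.sql has been generated -->
            <phase>process-test-resources</phase>
            <goals>
                <goal>execute</goal>
            </goals>
            <configuration>
                <srcFiles>
                    <srcFile>${project.build.directory}/classes/schema.sql</srcFile>
                </srcFiles>
            </configuration>
        </execution>
    </executions>
</plugin>
```

The JDBC driver also has to be declared as a dependency of the plugin itself, so it ends up on the plugin's classpath.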

Fixing a bug is like catching a fish

Manager: So, how long will it take to fix this bug?
Inexperienced Programmer: An hour maybe? Two tops? I'll get right on it!
Experienced Programmer: Well, how long will it take to catch a fish?

It's hard to know how long it's going to take to fix a bug, especially if you don't know the code. James Shore points out in The Art of Agile that before you can fix something, you obviously have to figure out what's wrong. The problem is that you can't accurately estimate how long it will take to find out what's wrong. It's only after you know what's wrong that you can reasonably estimate how long it will take to fix it. But by then it's too late. According to Steve McConnell, "finding the defect – and understanding it – is usually 90 percent of the work." A lot of bug fixes are one-line changes. What takes the time is figuring out the right line to change – like knowing where to tap the hammer, or when and where the fish will be biting.

Some bugs are easy to find and easy to fix. Some bugs are hard to find, but easy to fix. Other bugs are easy to find and hard to fix. And some bugs can't be found at all, so they probably can't be fixed. Unless you wrote the code recently, you probably have no idea which kind of bug you're being asked to work on.

Finding and Fixing a Bug

Let's look at what's involved in finding and fixing a bug. In Debug It! Paul Butcher does a good job of describing the steps you need to go through, in a structured and disciplined way that will be familiar to experienced programmers:

Make sure that you know what you're looking for. Review the bug report, see if it makes sense, make sure it really is a bug and that you have enough information to figure the problem out and to reproduce it. Check whether it has already been reported as a duplicate, and if so, what the guy before you did about it, if anything.
Clear the decks – find and check out the right code, clean up your workspace.
Set up your test environment to match. This can be trivial – or impossible, if the customer is running a configuration that you don't have access to.
Make sure that you understand what the code is supposed to do, and that your existing test suite passes. Now it's time to go fishing: reproduce and diagnose the bug. If you can't reproduce it, you can't prove that you fixed it.
Write new (failing) developer tests, or fix existing tests, to catch the bug.
Make the fix – and make sure that you didn't break anything else. This may include some refactoring work to understand the code better before you make the fix, so that you can do it safely, and regression testing afterwards to make sure that you didn't introduce any new bugs.
Try to make the code safer and cleaner for the next guy with some more step-by-step refactoring. At least make sure that your fix doesn't make the code more brittle and harder to understand.
Get the fix reviewed by somebody else to make sure that you didn't do something stupid.
Check the fix in. Check whether this bug needs to be fixed in any other branches if you aren't working from the mainline: merge the change in, deal with differences in the code, and go through all of the same reviews and tests and other work again.
Stop and think. Do you understand what went wrong, and why? Do you understand why your fix worked? Where else should you look for this kind of bug?

In The Pragmatic Programmer, Andy Hunt and Dave Thomas also ask: "If it took a long time to fix this bug, ask yourself why" – what can you do to make debugging problems like this easier in the future? How can you improve the approach that you took, or the tools that you used? How deep you go depends on the impact and severity of the bug and how much time you have.

What takes longer, finding a bug, or fixing it?

The time needed to set up a test environment, reproduce the problem or test it may far outweigh the time it takes to find the problem in the code and fix it.
But for a small number of bugs, it's not how long it takes to find them – it's what's involved in fixing them. In Making Software, in the chapter "Where Do Most Software Flaws Come From?", Dewayne Perry analyzed how hard it was to find a bug (understand it and reproduce it) compared to how long it took to fix it. The study found that most bugs (almost 3/4) were easy to understand and find, and didn't take long to fix: 5 days or less (this was on a large-scale real-time system with a heavyweight SDLC, lots of reviews and testing). But there's a long tail of bugs that can take much longer to fix, even bugs that were trivial to find:

Find/Fix Effort                             <=5 Days to Fix    >5 Days to Fix
Problem can be reproduced                   72.5%              18.4%
Hard to reproduce or can't be reproduced    5.9%               3.2%

So you can bet, when you find a bug, that it's going to be easy to fix, and most of the time you'll be right. But when you're wrong, you can be a lot wrong. In subsequent posts, I am going to talk more about the issues and costs involved in reproducing, finding and fixing bugs, and how (or whether) to estimate bug fixes. Reference: Fixing a bug is like catching a fish from our JCG partner Jim Bird at the Building Real Software blog....

Characteristics of successful developers

Many blogs exist about the personal (soft) characteristics of successful developers. Here is a short listing of some interesting links:

50 characteristics of a great software developer
Top 10 Traits of a Rockstar Software Engineer
Five essential skills for software developers
Manifesto for Agile Software Development
Manifesto for Software Craftsmanship

This post is my personal view on that very topic. It is of course subjective, shaped by my own history and environment, and I don't claim that the list is complete. Also, I do not have the discipline to always show all those characteristics 100% myself. We're all humans, so don't take them too seriously :-) Last but not least: success must not be the target of your work. The target is to work on your own virtues, and some of those virtues are the topic of this post.

The will to be good at something

It's not easy to work as a developer! I say that for a couple of reasons that make our life a little harder compared to other professions. For instance, the technology cycle in the IT world is very short: current knowledge becomes outdated in a few years. Therefore we need to learn continuously as new things become important. To stay on top of things we really need the strong will to be good at our job. That's probably the most important characteristic to me: being an excellent knowledge worker with great technical abilities, and having the will to stay that way for decades!

To ask one's way

Because it's impossible to know everything needed to do the job, it's absolutely necessary that a developer finds his way through a new topic. How I typically do that: I use Google and I talk to other experts to find out what they think. "I did not know what to do!" is not an argument for me. Because if I don't know enough about a new technology yet, I spend the energy that's necessary to learn what I need to know to do the job.
We need to work through the learning curve and make the effort to get good at what we're doing!

To make oneself useful

If I have some time left because I completed my tasks earlier than expected, then: I take a coffee and play tabletop soccer. I take a rest. Afterwards I think about what I could do to help the team achieve its targets, because some of my team mates probably didn't finish (at least if I didn't meet them at tabletop soccer). If everyone's finished, then I think about improvements to the process or team organisation. I make myself useful.

To care

Some years ago I attended a software architecture course held by one of my idols, Dana Bredemeyer. I had a discussion with him about what it really takes to make a team successful, or to be a successful team leader. He said: "Well, you need some people that really care!" I think there is a lot of truth in that statement. If we do not care about quality, timelines, good team culture, respectful communication (!!), clean code, software craftsmanship – if all this doesn't matter to us – then I believe the probability is higher that we fail.

Being productive

Philippe Kruchten put it right in his Tao of the software architect:

"Those who know don't talk. Those who talk don't know. Those who do not have a clue are still debating about the process. Those who know just do it."

I am trying to be productive every week – at the end of a week I look back and ask myself what I have produced. This could be paperwork, community days or (best!!) programming code.

Working solution-oriented

In many situations where people had trouble achieving their targets, I saw them debating all the problems and the difficulties of solving the issue. They blamed each other and discussed THE PAST. I try not to do that: I don't blame others, and I don't just look at the difficulties. I try to suggest solutions instead! And yes, there is always a solution to a problem.
Most of the time there are at least three solutions.

Be good with people

Because our job typically involves working in a (ideally cross-functional!) team, it's important that we're (more or less) good at dealing with other individuals. They have their own strengths and weaknesses, just like ourselves. It's important to treat all team mates with respect, regardless of their technical competence or contributions. Of course, sometimes people deserve a clear statement, but try to do these things one-on-one. Make sure nobody loses face. Attend the meetings at the coffee bar, be good at tabletop soccer, and go out once in a while to have a beer with your team. You know what I'm talking about.

Reference: "Characteristics of successful developers" from our JCG partner Niklas....

Learning to Fail

Back at university, when I dealt with much low-level problem solving and very basic libraries and constructs, I learned to pay attention to what can possibly go wrong. A lot. Implementing reliable, hang-proof communication over plain sockets? I remember it today: a trivial loop of "core logic" and a ton of guards around it.

Now I suspect I am not the only person who got so used to all the convenient higher-level abstractions that he began to forget this approach. Thing is, real software is a little bit more complex, and the fact that our libraries deal with most low-level problems for us doesn't mean there are no ways to fail.

Software

As I'm reading "Release It!" by Michael T. Nygard, I keep nodding in agreement: been there, done this, suffered that. I've just started, but it has already shown quite a few interesting examples of failure and error handling.

Michael describes a spectacular outage of an airline system. Its experienced designers expected many kinds of failures and avoided many obvious issues. There was a nice layered architecture, with proper redundancy on every level, from clients and terminals through servers to the database. All was well, yet during routine database maintenance the entire system just hung. It did not kill anyone, but delayed flights and serious financial losses have an impact too.

The root cause turned out to be one swallowed exception on the servers talking to the database, thrown by the JDBC driver when the virtual IP of the database server was remapped. If you don't have proper handling for such situations, one such leak can lock up the entire server as all of its threads wait for the connection or for each other. Since there were no proper timeouts anywhere in the server or above it, eventually everything hung.

Now it's easy to say "It's obvious, thou shalt not swallow exceptions, you moron" and walk on. Or is it? The thing is, an unexpected or improperly handled error can always happen. In hardware.
Or a third-party component. Or a core library of your programming language. Or you or your colleague can screw up and fail to predict something. It. Just. Happens.

Real Life

Let's take a look at two examples from real life. Everyone gets in the car thinking: I'm an awesome driver, accidents happen but not to me. Yet somehow we are grateful for airbags, carefully designed crumple zones, and all kinds of automatic systems that prevent or mitigate the effects of accidents.

If you were offered two cars at the same cost, which would you choose? One is in pimp-my-ride style, with extremely comfortable seats, sat TV, bright pink wheels and whatever other unessential features, but it breaks down every so often based on its mood or the moon cycle, and would certainly kill you if you hit a hedgehog. The other is just comfortable enough and completely boring, with no cool features to show off at all, but it will serve you 500,000 kilometers without a single breakdown and save your life when you hit a tree. Obvious, right?

Another example. My brother-in-law happens to be a construction manager at a pretty big power plant. He recently took me on a trip and explained some basics of how it works, and one thing really struck me. The power station consists of a dozen separate generation units and is designed to survive all kinds of failures. I was impressed, and still am, that in the power plant business it's normal to say things like: if this block goes dark, this and this happens, that one takes over, no big deal.

Let's put that in perspective: a damn complicated piece of engineering that can detect any potentially dangerous condition, alarm, shut down and fail over, just like that – from small and trivial things like variations in pressure or temperature, through conditions that could blow the whole thing up. And it is so reliable that when people talk about such conditions, rare and severe as they are, they say it in the same tone as "in case of rain the picnic will be held at Ms. Johnson's".

Software Again

In his "After the Disaster" post, Uncle Bob asked: "How many times per day do you put your life in the hands of an 'if' statement written by some twenty-two year old at three in the morning, while strung out on vodka and redbull?" I wish it was a rhetorical question.

We are pressed hard to focus on adding shiny new features, as fast as possible. That's what makes our bosses and their bosses shine, and what brings money to the table. And not only them: even we, the developers, naturally take the most pride in all those features and find them the most exciting part of our work.

Remember that we're here to serve. While pumping out features is fun, remember that people simply rely on you. Even if you don't directly cause death or injury, your outages can still affect lives. Think more like a car or power station designer; your position is really closer to theirs than to a lone hippie building a little wobbly shack for himself. When an outage happens and causes financial loss, you will be the one to blame. If that reasoning does not work, do it for yourself – pay attention now to avoid pain in the future, be it regular panic calls at 3 AM or your boss yelling at you.

More Stuff

Michael T. Nygard ends the airline example with a very valuable piece of advice. Obvious as it may seem, it feels different once you realize it and engrave it deep in your mind: expect failure everywhere, and plan for it. Even if your tools handle some failures, they can't do everything for you. Even if you have at least two of each thing (no single point of failure), you can still suffer from bad design. Be paranoid. Place crumple zones at every integration point with other systems, and even between different components of your system, in order to prevent cracks from propagating. Optimistic monoliths fail hard.

Want something more concrete? Go read "Release It!" – it's full of great and concrete examples. There's a reason why it fits in a book and not in a blog post.

Reference: Learning to Fail from our JCG partner Konrad Garus at the Squirrel's blog....

Quartz 2 Scheduler example

Quartz is an open source job scheduling framework. It can be used to manage and schedule jobs in an application.

STEP 1 : CREATE MAVEN PROJECT

A Maven project is created as below (it can be created via Maven or an IDE plug-in).

STEP 2 : LIBRARIES

The Quartz dependencies are added to Maven's pom.xml. These dependencies will be downloaded from the Maven central repository.

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>OTV_Quartz</groupId>
    <artifactId>OTV_Quartz</artifactId>
    <version>0.0.1-SNAPSHOT</version>

    <dependencies>
        <!-- Quartz library -->
        <dependency>
            <groupId>org.quartz-scheduler</groupId>
            <artifactId>quartz</artifactId>
            <version>2.0.2</version>
        </dependency>

        <!-- Log4j library -->
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.16</version>
        </dependency>
    </dependencies>
</project>

STEP 3 : CREATE NEW JOB

A new job is created by implementing the Quartz Job interface as below. The TestJob class specifies the business logic which will be scheduled.

package com.otv.job;

import org.apache.log4j.Logger;
import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

public class TestJob implements Job {

    private Logger log = Logger.getLogger(TestJob.class);

    public void execute(JobExecutionContext jExeCtx) throws JobExecutionException {
        log.debug("TestJob run successfully...");
    }
}

STEP 4 : LINK JOB WITH JOBDETAIL OBJECT

The created TestJob class is linked with a JobDetail object:

JobDetail job = JobBuilder.newJob(TestJob.class)
        .withIdentity("testJob")
        .build();

STEP 5 : CREATE NEW TRIGGER

A new trigger is created as below. The trigger specifies the running period of the job which will be scheduled. There are two kinds of Quartz triggers:

Trigger : specifies the start time, end time and running period of the job.
CronTrigger : specifies the start time, end time and running period of the job according to a Unix cron expression.

// Trigger the job to run every 30 seconds, repeating forever
Trigger trigger = TriggerBuilder.newTrigger()
        .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                .withIntervalInSeconds(30)
                .repeatForever())
        .build();

// CronTrigger the job to fire at second 10 of every minute
CronTrigger cronTrigger = TriggerBuilder.newTrigger()
        .withIdentity("crontrigger", "crontriggergroup1")
        .withSchedule(CronScheduleBuilder.cronSchedule("10 * * * * ?"))
        .build();

STEP 6 : CREATE SchedulerFactory

A new SchedulerFactory is created and a Scheduler object is obtained from it:

SchedulerFactory schFactory = new StdSchedulerFactory();
Scheduler sch = schFactory.getScheduler();

STEP 7 : START Scheduler

The Scheduler object is started:

// Start the scheduler
sch.start();

STEP 8 : SCHEDULE JOB

TestJob is scheduled:

// Tell Quartz to schedule the job using the trigger
sch.scheduleJob(job, trigger);

STEP 9 : FULL EXAMPLE

TestJob will run twice per minute.

package com.otv;

import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.SchedulerFactory;
import org.quartz.SimpleScheduleBuilder;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

import com.otv.job.TestJob;

/**
 * @author onlinetechvision.com
 * @since 17 Sept 2011
 * @version 1.0.0
 */
public class JobScheduler {

    public static void main(String[] args) {
        try {
            // specify the job's details...
            JobDetail job = JobBuilder.newJob(TestJob.class)
                    .withIdentity("testJob")
                    .build();

            // specify the running period of the job
            Trigger trigger = TriggerBuilder.newTrigger()
                    .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                            .withIntervalInSeconds(30)
                            .repeatForever())
                    .build();

            // schedule the job
            SchedulerFactory schFactory = new StdSchedulerFactory();
            Scheduler sch = schFactory.getScheduler();
            sch.start();
            sch.scheduleJob(job, trigger);
        } catch (SchedulerException e) {
            e.printStackTrace();
        }
    }
}

STEP 10 : OUTPUT

When the JobScheduler class is run, the output will be as below:

17.09.2011 23:39:37 DEBUG (TestJob.java:13) - TestJob run successfully...
17.09.2011 23:40:07 DEBUG (TestJob.java:13) - TestJob run successfully...
17.09.2011 23:40:37 DEBUG (TestJob.java:13) - TestJob run successfully...
17.09.2011 23:41:07 DEBUG (TestJob.java:13) - TestJob run successfully...

STEP 11 : DOWNLOAD

OTV_Quartz_Project

Reference: Quartz 2 Scheduler from our JCG partner Eren Avsarogullari at the Online Technology Vision blog....

Smart Batching

How often have we all heard that "batching" will increase latency? As someone with a passion for low-latency systems, this surprises me. In my experience, when batching is done correctly it not only increases throughput, it can also reduce average latency and keep it consistent.

Well then, how can batching magically reduce latency? It comes down to the algorithm and data structures employed. In a distributed environment we often have to batch up messages/events into network packets to achieve greater throughput. We employ similar techniques when buffering writes to storage to reduce the number of IOPS. That storage could be a block-device-backed file system or a relational database. Most IO devices can only handle a modest number of IO operations per second, so it is best to fill those operations efficiently. Many approaches to batching involve waiting for a timeout to occur, and this will by its very nature increase latency. The batch can also get filled before the timeout occurs, making the latency even more unpredictable.

The image above depicts decoupling the access to an IO device, and therefore the contention for access to it, by introducing a queue-like structure to stage the messages/events to be sent and a thread doing the batching for writing to the device.

The Algorithm

An approach to batching uses the following algorithm in Java pseudo-code:

public final class NetworkBatcher implements Runnable {
    private final NetworkFacade network;
    private final Queue<Message> queue;
    private final ByteBuffer buffer;

    public NetworkBatcher(final NetworkFacade network, final int maxPacketSize, final Queue<Message> queue) {
        this.network = network;
        buffer = ByteBuffer.allocate(maxPacketSize);
        this.queue = queue;
    }

    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            while (null == queue.peek()) {
                employWaitStrategy(); // block, spin, yield, etc.
            }

            Message msg;
            while (null != (msg = queue.poll())) {
                if (msg.size() > buffer.remaining()) {
                    sendBuffer();
                }

                buffer.put(msg.getBytes());
            }

            sendBuffer();
        }
    }

    private void sendBuffer() {
        buffer.flip();
        network.send(buffer);
        buffer.clear();
    }
}

Basically: wait for data to become available and, as soon as it is, send it right away. While sending a previous message or waiting on new messages, a burst of traffic may arrive which can all be sent in a batch, up to the size of the buffer, to the underlying resource.

This approach can use a ConcurrentLinkedQueue, which provides low latency and avoids locks. However, it has an issue in that it creates no back pressure to stall producing/publishing threads if they outpace the batcher, so the queue can grow out of control because it is unbounded. I've often had to wrap ConcurrentLinkedQueue to track its size and thus create back pressure. In my experience this size tracking can add 50% to the processing cost of using this queue.

This algorithm respects the single writer principle and can often be employed when writing to a network or storage device, thus avoiding lock contention in third-party API libraries. By avoiding the contention we avoid the J-curve latency profile normally associated with contention on resources, due to the queuing effect on locks. With this algorithm, as load increases, latency stays constant until the underlying device is saturated with traffic, resulting in a 'bathtub' profile rather than a J-curve.

Let's take a worked example of handling 10 messages that arrive as a burst of traffic. In most systems traffic comes in bursts and is seldom uniformly spaced in time. One approach assumes no batching: the threads write to the device API directly, as in Figure 1 above. The other uses a lock-free data structure to collect the messages, plus a single thread consuming messages in a loop, as per the algorithm above.
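The back-pressure concern above can alternatively be addressed with a bounded queue, so producers block when the batcher falls behind. This is my own hedged sketch, not the article's code: the class and method names are invented, and java.util.concurrent's ArrayBlockingQueue stands in for the wrapped ConcurrentLinkedQueue. It shows the core take-one-then-drain-the-burst idea:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BoundedBatcher {

    // Block for the first message, then drain the rest of the burst without waiting.
    static List<String> nextBatch(BlockingQueue<String> queue, int maxBatch) throws InterruptedException {
        List<String> batch = new ArrayList<>();
        batch.add(queue.take());            // wait for at least one message
        queue.drainTo(batch, maxBatch - 1); // grab whatever else has arrived, up to the batch size
        return batch;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1024); // the bound creates back pressure
        for (int i = 0; i < 5; i++) {
            queue.put("msg-" + i); // a burst of five messages
        }
        // The whole burst is picked up as one batch.
        System.out.println(nextBatch(queue, 64).size()); // prints 5
    }
}
```

The trade-off versus ConcurrentLinkedQueue is that the bounded queue uses a lock internally, so it gives up some latency in exchange for built-in back pressure; which matters more depends on whether producers can be allowed to outrun the batcher.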
For the example let’s assume it takes 100 µs to write a single buffer to the network device as a synchronous operation and have it acknowledged. The buffer will ideally be less than the MTU of the network in size when latency is critical. Many network sub-systems are asynchronous and support pipelining, but we will make the above assumption to clarify the example. If the network operation is using a protocol like HTTP under REST or Web Services then this assumption matches the underlying implementation.

                  Best (µs)   Average (µs)   Worst (µs)   Packets Sent
  Serial             100          500           1,000          10
  Smart Batching     100          150             200          1-2

The absolute lowest latency will be achieved if a message is sent from the thread originating the data directly to the resource, provided the resource is un-contended. The table above shows what happens when contention occurs and a queuing effect kicks in. With the serial approach, 10 individual packets have to be sent and these typically need to queue on a lock managing access to the resource, so they get processed sequentially. The above figures assume the locking strategy works perfectly with no perceivable overhead, which is unlikely in a real application. For the batching solution it is likely all 10 packets will be picked up in the first batch if the concurrent queue is efficient, giving the best-case latency. In the worst case only one message is sent in the first batch, with the other nine following in the next. Therefore in the worst-case scenario one message has a latency of 100 µs and the following 9 have a latency of 200 µs, giving a worst-case average of 190 µs, which is significantly better than the serial approach. This is one good example of when the simplest solution is just a bit too simple because of the contention. The batching solution helps achieve consistent low latency under burst conditions and is best for throughput.
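The worst-case arithmetic above can be checked with a small helper. The numbers come from the example’s assumptions (100 µs per synchronous write-and-ack), not from any measurement:

```java
// Worst-case average latency for the batching approach: the first message
// goes out in its own write, the remaining ones go out in the next write.
public final class BatchLatency {
    public static double worstCaseAverage(final int messages, final int writeMicros) {
        // first message completes after one write; the rest after two writes
        final long total = writeMicros + (long) (messages - 1) * (2L * writeMicros);
        return (double) total / messages;
    }
}
```

With 10 messages at 100 µs per write this gives (100 + 9 × 200) / 10 = 190 µs, the figure quoted above.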
It also has a nice effect across the network on the receiving end, in that the receiver has to process fewer packets, making the communication more efficient at both ends. Most hardware handles data in buffers up to a fixed size for efficiency. For a storage device this will typically be a 4KB block. For networks this will be the MTU, typically 1500 bytes for Ethernet. When batching, it is best to understand the underlying hardware and write batches down in the ideal buffer size to be optimally efficient. However, keep in mind that some devices need to envelop the data, e.g. the Ethernet and IP headers for network packets, so the buffer needs to allow for this. There will always be some added latency from a thread switch and the cost of exchange via the data structure. However, there are a number of very good non-blocking structures available using lock-free techniques. For the Disruptor this type of exchange can be achieved in as little as 50-100 ns, making the smart batching approach a no-brainer for low-latency or high-throughput distributed systems. This technique can be employed for many problems, not just IO. The core of the Disruptor uses this technique to help rebalance the system when the publishers burst and outpace the EventProcessors. The algorithm can be seen inside the BatchEventProcessor.

Note: For this algorithm to work the queueing structure must handle the contention better than the underlying resource. Many queue implementations are extremely poor at managing contention. Use science and measure before coming to a conclusion.

Batching with the Disruptor

The code below shows the same algorithm in action using the Disruptor’s EventHandler mechanism. In my experience, this is a very effective technique for handling any IO device efficiently and keeping latency low when dealing with load or burst traffic.
public final class NetworkBatchHandler implements EventHandler<Message> {
    private final NetworkFacade network;
    private final ByteBuffer buffer;

    public NetworkBatchHandler(final NetworkFacade network, final int maxPacketSize) {
        this.network = network;
        buffer = ByteBuffer.allocate(maxPacketSize);
    }

    public void onEvent(Message msg, long sequence, boolean endOfBatch) throws Exception {
        if (msg.size() > buffer.remaining()) {
            sendBuffer();
        }
        buffer.put(msg.getBytes());

        if (endOfBatch) {
            sendBuffer();
        }
    }

    private void sendBuffer() {
        buffer.flip();
        network.send(buffer);
        buffer.clear();
    }
}

The endOfBatch parameter greatly simplifies the handling of the batch compared to the double loop in the algorithm above. I have simplified the examples to illustrate the algorithm. Clearly error handling and other edge conditions need to be considered.

Separation of IO from Work Processing

There is another very good reason to separate the IO from the threads doing the work processing. Handing off the IO to another thread means the worker thread, or threads, can continue processing without blocking, in a nice cache-friendly manner. I’ve found this to be critical in achieving high-performance throughput. If the underlying IO device or resource becomes briefly saturated, the messages can be queued for the batcher thread, allowing the work-processing threads to continue. The batching thread then feeds the messages to the IO device in the most efficient way possible, allowing the data structure to handle the burst and, if full, apply the necessary back pressure, providing a good separation of concerns in the workflow.

Conclusion

So there you have it. Smart Batching can be employed in concert with the appropriate data structures to achieve consistent low latency and maximum throughput. Reference: Smart Batching from our JCG partner Martin Thompson at the Mechanical Sympathy blog....
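The role of the endOfBatch flag can be seen in a self-contained simulation of the handler’s flushing logic. The RecordingNetwork and byte-array messages below are stubs invented for this sketch; the real handler above is driven by the Disruptor itself:

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public final class BatchFlushDemo {
    // Stub network: just records the size of each buffer "sent".
    static final class RecordingNetwork {
        final List<Integer> sentSizes = new ArrayList<>();
        void send(final ByteBuffer buffer) {
            sentSizes.add(buffer.remaining());
        }
    }

    static final class Handler {
        private final RecordingNetwork network;
        private final ByteBuffer buffer;

        Handler(final RecordingNetwork network, final int maxPacketSize) {
            this.network = network;
            this.buffer = ByteBuffer.allocate(maxPacketSize);
        }

        void onEvent(final byte[] msg, final boolean endOfBatch) {
            if (msg.length > buffer.remaining()) {
                sendBuffer(); // flush when the packet is full
            }
            buffer.put(msg);
            if (endOfBatch) {
                sendBuffer(); // flush at the end of the batch
            }
        }

        private void sendBuffer() {
            buffer.flip();
            network.send(buffer);
            buffer.clear();
        }
    }

    // Feed `messages` fixed-size messages through the handler and
    // return how many packets were written to the stub network.
    public static int packetsFor(final int messages, final int msgSize, final int maxPacket) {
        final RecordingNetwork net = new RecordingNetwork();
        final Handler handler = new Handler(net, maxPacket);
        for (int i = 0; i < messages; i++) {
            handler.onEvent(new byte[msgSize], i == messages - 1);
        }
        return net.sentSizes.size();
    }
}
```

Ten 100-byte messages fit inside one 1500-byte packet, so the whole burst goes out in a single send; larger messages force intermediate flushes, which is the batching behaviour described in the worked example.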

JavaFX 2.0 Layout Panes – GridPane

The GridPane is without a doubt the most powerful and flexible layout pane in JavaFX 2.0. It lays out its children in a flexible grid of columns and rows and is very similar to Swing’s GridBagLayout or HTML’s table model. This approach makes this pane very well suited for any kind of form (like contact forms on a website). You have the ability to:

  • apply any Node to a cell (specified by column and row) in the GridPane
  • let the Node span multiple columns/rows
  • align the Node in the cell it was applied to
  • set horizontal or vertical grow for the Node
  • apply a margin to be kept around the Node in the cell.

The flexibility of the GridPane also extends to a very flexible API. You can use static class methods like setColumnIndex(node, index) or setRowSpan(node, value), or you can use convenience instance methods like gridpane.add(node, column, row, columnSpan, rowSpan).

Note: You don’t have to set the maximum number of columns or rows in the GridPane, as it will grow automatically. The width of a column is automatically determined by the widest Node in that column; the height of each row is determined by the tallest Node in the row.

The last note is probably the most important fact about the GridPane, as it has to be considered for the column/row and the column span/row span of every single Node in order to get the layout you want. For more complex layouts it is a very good idea to draw the layout on a piece of paper, including all lines for the columns and rows. This will ease development, because you can directly see in which cell you have to put each Node and how many rows or columns it has to span.
Let’s have a look at the first simple example:

GridPane – Example 1

import javafx.application.Application;
import javafx.geometry.HPos;
import javafx.geometry.Insets;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.control.Label;
import javafx.scene.control.PasswordField;
import javafx.scene.control.TextField;
import javafx.scene.layout.GridPane;
import javafx.stage.Stage;

/**
 * Created on: 23.06.2012
 * @author Sebastian Damm
 */
public class GridPaneExample extends Application {
    @Override
    public void start(Stage primaryStage) throws Exception {
        GridPane gridPane = new GridPane();
        gridPane.setPadding(new Insets(40, 0, 0, 50));
        gridPane.setHgap(5);
        gridPane.setVgap(5);
        Scene scene = new Scene(gridPane, 300, 150);

        Label lbUser = new Label("Username:");
        GridPane.setHalignment(lbUser, HPos.RIGHT);
        TextField tfUser = new TextField();
        Label lbPass = new Label("Password:");
        GridPane.setHalignment(lbPass, HPos.RIGHT);
        PasswordField tfPass = new PasswordField();
        Button btLogin = new Button("Login");
        GridPane.setMargin(btLogin, new Insets(10, 0, 0, 0));

        gridPane.add(lbUser, 0, 0);
        gridPane.add(tfUser, 1, 0);
        gridPane.add(lbPass, 0, 1);
        gridPane.add(tfPass, 1, 1);
        gridPane.add(btLogin, 1, 2);

        primaryStage.setTitle("GridPaneExample 1");
        primaryStage.setScene(scene);
        primaryStage.show();
    }

    public static void main(String[] args) {
        Application.launch(args);
    }
}

Here you can see a little login form with two labels and two text fields for the username and the password. Additionally there’s a ‘Login’ button. In lines 21-23 we create the GridPane and apply some padding. Furthermore you can specify a horizontal and a vertical gap to be kept between each Node. Next, take a look at line 28: the alignment of a Node inside the boundaries of the cell it was put into can be set with the static class methods GridPane.setHalignment(Node node, HPos pos) and GridPane.setValignment(Node node, VPos pos), respectively.
In line 36 you can see how to put an individual margin around a single Node by using the GridPane.setMargin(Node node, Insets insets) method. Finally, in lines 38 to 42, we add each Node to the GridPane and specify its column and row. Your application should look like this now:

In the next example you will see why we need to set the column span and the row span of each Node in more complex layouts. Have a look at this code:

GridPane – Example 2: User form

import javafx.application.Application;
import javafx.geometry.HPos;
import javafx.geometry.Insets;
import javafx.scene.Scene;
import javafx.scene.control.Label;
import javafx.scene.control.TextArea;
import javafx.scene.control.TextField;
import javafx.scene.image.Image;
import javafx.scene.image.ImageView;
import javafx.scene.layout.GridPane;
import javafx.scene.paint.Color;
import javafx.scene.paint.Paint;
import javafx.scene.paint.RadialGradientBuilder;
import javafx.scene.paint.Stop;
import javafx.stage.Stage;

/**
 * Created on: 23.06.2012
 * @author Sebastian Damm
 */
public class GridPaneExample2 extends Application {
    private final Paint background = RadialGradientBuilder.create()
            .stops(new Stop(0d, Color.TURQUOISE), new Stop(1, Color.web("3A5998")))
            .centerX(0.5d).centerY(0.5d).build();
    private final String LABEL_STYLE = "-fx-text-fill: white; -fx-font-size: 14;"
            + "-fx-effect: dropshadow(one-pass-box, black, 5, 0, 1, 1);";

    @Override
    public void start(Stage primaryStage) throws Exception {
        Scene scene = new Scene(createGridPane(), 370, 250, background);
        primaryStage.setTitle("GridPaneExample 2 - User form");
        primaryStage.setScene(scene);
        primaryStage.show();
    }

    private GridPane createGridPane() {
        GridPane gridPane = new GridPane();
        gridPane.setPadding(new Insets(20, 0, 20, 20));
        gridPane.setHgap(7);
        gridPane.setVgap(7);

        Label lbFirstName = new Label("First Name:");
        lbFirstName.setStyle(LABEL_STYLE);
        GridPane.setHalignment(lbFirstName, HPos.RIGHT);
        TextField tfFirstName = new TextField();

        Label lbLastName = new Label("Last Name:");
        lbLastName.setStyle(LABEL_STYLE);
        GridPane.setHalignment(lbLastName, HPos.RIGHT);
        TextField tfLastName = new TextField();

        Label lbCity = new Label("City:");
        lbCity.setStyle(LABEL_STYLE);
        GridPane.setHalignment(lbCity, HPos.RIGHT);
        TextField tfCity = new TextField();

        Label lbStreetNr = new Label("Street/Nr.:");
        lbStreetNr.setStyle(LABEL_STYLE);
        GridPane.setHalignment(lbStreetNr, HPos.RIGHT);
        TextField tfStreet = new TextField();
        tfStreet.setPrefColumnCount(14);
        GridPane.setColumnSpan(tfStreet, 2);
        TextField tfNumber = new TextField();
        tfNumber.setPrefColumnCount(3);

        Label lbNotes = new Label("Notes:");
        lbNotes.setStyle(LABEL_STYLE);
        GridPane.setHalignment(lbNotes, HPos.RIGHT);
        TextArea taNotes = new TextArea();
        taNotes.setPrefColumnCount(5);
        taNotes.setPrefRowCount(5);
        GridPane.setColumnSpan(taNotes, 3);
        GridPane.setRowSpan(taNotes, 2);

        ImageView imageView = new ImageView(new Image(getClass()
                .getResourceAsStream("person.png"), 0, 65, true, true));
        GridPane.setHalignment(imageView, HPos.LEFT);
        GridPane.setColumnSpan(imageView, 2);
        GridPane.setRowSpan(imageView, 3);

        // gridPane.setGridLinesVisible(true);

        gridPane.add(lbFirstName, 0, 0);
        gridPane.add(tfFirstName, 1, 0);
        gridPane.add(imageView, 2, 0);
        gridPane.add(lbLastName, 0, 1);
        gridPane.add(tfLastName, 1, 1);
        gridPane.add(lbCity, 0, 2);
        gridPane.add(tfCity, 1, 2);
        gridPane.add(lbStreetNr, 0, 3);
        gridPane.add(tfStreet, 1, 3);
        gridPane.add(tfNumber, 3, 3);
        gridPane.add(lbNotes, 0, 4);
        gridPane.add(taNotes, 1, 4);
        return gridPane;
    }

    public static void main(String[] args) {
        Application.launch(args);
    }
}

In this example we create a user form with different inputs and an image. To make the application look a little nicer, I created a RadialGradient for the background of the Scene and applied a white font color and a little drop shadow to each label. The application should look like this:

Compared to the previous example, the first difference occurs in line 64.
With GridPane.setColumnSpan(tfStreet, 2); I tell this TextField to occupy two columns. This is needed because I want this text field to be a little wider (see line 63) than the other text fields. Otherwise the second column would be as wide as this text field and therefore stretch the smaller ones. The TextArea (starting at line 71) and the ImageView (line 77) span multiple columns and rows. Next, take a look at line 83. If you remove the comment and start the application, it should look like this:

As you can see, this method makes all grid lines (including the horizontal and vertical gaps between the Nodes) visible, which can be a great help if your Nodes aren’t aligned the way you want. I don’t know how many times I wished for a method like this while learning Swing and the GridBagLayout, and I bet I’m not the only one ;) Finally, please remove all lines where a column span or row span is specified (lines 64, 74, 75, 80, 81). This will help you understand the necessity of column span and row span. You can see that each Node occupies one single cell and that the layout is pretty messed up, because the width/height of each column/row depends on the widest/tallest child Node.

GridPane – Example 3: The setConstraints method

The instance method add ‘only’ provides two versions: one with the Node, the column and the row, and one with additional column span and row span. Other properties like the alignment or the grow have to be set with dedicated class methods like GridPane.setHalignment, as in the first two examples. But there’s another nice way: the GridPane.setConstraints(...) method. At the moment (JavaFX 2.2) there are five overloaded versions of this method, from setConstraints(Node child, int columnIndex, int rowIndex) to setConstraints(Node child, int columnIndex, int rowIndex, int columnspan, int rowspan, HPos halignment, VPos valignment, Priority hgrow, Priority vgrow, Insets margin).
This is pretty similar to Swing’s GridBagConstraints, but here you don’t have to create a dedicated object and reuse it for multiple graphical objects. If you apply the constraints to every Node like this, you can simply add the Nodes to the GridPane’s collection of children. With this approach the code of the second example looks like this:

private GridPane createGrid() {
    GridPane gridPane = new GridPane();
    gridPane.setPadding(new Insets(20, 0, 20, 20));
    gridPane.setHgap(7);
    gridPane.setVgap(7);

    Label lbFirstName = new Label("First Name:");
    lbFirstName.setStyle(LABEL_STYLE);
    GridPane.setConstraints(lbFirstName, 0, 0, 1, 1, HPos.RIGHT, VPos.CENTER);
    TextField tfFirstName = new TextField();
    GridPane.setConstraints(tfFirstName, 1, 0);

    Label lbLastName = new Label("Last Name:");
    lbLastName.setStyle(LABEL_STYLE);
    GridPane.setConstraints(lbLastName, 0, 1, 1, 1, HPos.RIGHT, VPos.CENTER);
    TextField tfLastName = new TextField();
    GridPane.setConstraints(tfLastName, 1, 1);

    Label lbCity = new Label("City:");
    lbCity.setStyle(LABEL_STYLE);
    GridPane.setConstraints(lbCity, 0, 2, 1, 1, HPos.RIGHT, VPos.CENTER);
    TextField tfCity = new TextField();
    GridPane.setConstraints(tfCity, 1, 2);

    Label lbStreetNr = new Label("Street/Nr.:");
    lbStreetNr.setStyle(LABEL_STYLE);
    GridPane.setConstraints(lbStreetNr, 0, 3, 1, 1, HPos.RIGHT, VPos.CENTER);
    TextField tfStreet = new TextField();
    tfStreet.setPrefColumnCount(14);
    GridPane.setConstraints(tfStreet, 1, 3, 2, 1);
    TextField tfNumber = new TextField();
    tfNumber.setPrefColumnCount(3);
    GridPane.setConstraints(tfNumber, 3, 3);

    Label lbNotes = new Label("Notes:");
    lbNotes.setStyle(LABEL_STYLE);
    GridPane.setConstraints(lbNotes, 0, 4, 1, 1, HPos.RIGHT, VPos.CENTER);
    TextArea taNotes = new TextArea();
    taNotes.setPrefColumnCount(5);
    taNotes.setPrefRowCount(5);
    GridPane.setConstraints(taNotes, 1, 4, 3, 2);

    ImageView imageView = new ImageView(new Image(getClass()
            .getResourceAsStream("person.png"), 0, 65, true, true));
    GridPane.setConstraints(imageView, 2, 0, 3, 3, HPos.LEFT, VPos.CENTER);

    gridPane.getChildren().addAll(lbFirstName, tfFirstName, imageView,
            lbLastName, tfLastName, lbCity, tfCity, lbStreetNr, tfStreet,
            tfNumber, lbNotes, taNotes);
    return gridPane;
}

You can see the usage of the overloaded setConstraints(...) methods and how you can simply add the Nodes to the GridPane in lines 51-53. I hope I could provide a good introduction to the GridPane in JavaFX 2.0. Feel free to add comments and post questions. Reference: JavaFX 2.0 Layout Panes – GridPane from our JCG partner Sebastian Damm at the Just my 2 cents about Java blog....

Software for Use

Here’s a confession of a full-time software developer: I hate most software. With passion.

Why I Hate Software

Software developers and the people around the process are often very self-centered and care more about having a good time than about designing a useful product. They add a ton of cool but useless and buggy features. They create their own layers of frameworks and reinvent everything every time, because writing code is so much more fun than reading, reusing or improving it. They don’t care about edge cases, bugs, rare conditions and so on. They don’t care about performance. They don’t care about usability. They don’t care about anything but themselves. Examples? Firefox, which has to be killed with the task manager because it slows to a crawl during the day on the most powerful hardware. Linux, which never really cared or managed to solve the issues with drivers for end-user hardware. Google Maps showing me tons of hotel and restaurant names instead of street names, the exact opposite of what I want when planning a trip. Eclipse or its plugins that require me to kill the IDE from the task manager, waste some more time, and eventually wipe out the entire workspace, recreate it and reconfigure. All the applications with tons of forms, popups, dialogs and whatnot. Every error message that is a page long, with a stacktrace, a cryptic code and whatever internal stuff in it. All the bugs and issues in open source software, which is made in free time for fun, rarely addressing edge cases or issues happening to a few percent of users, because they’re not fun. It’s common among developers to hate and misunderstand the user. It’s common even at helpdesk, support and among many people who actually deal with end users. In Polish there is the wordplay “użyszkodnik”, a marriage of “użytkownik” (user) and “szkodnik” (pest).

What Software Really Is About

Let me tell you a secret. The only purpose of software is to serve. We don’t live in a vacuum, but are always paid by someone who has a problem to solve.
We are only paid for two reasons: to save someone money, or to let them earn more money. All the stakeholders and users care about is solving their problems. I’ve spent quite a few years on one fairly large project that is critical for most operations of a corporation. They have a few thousand field workers and a few dozen managers above them, and only a handful of people responsible for the software powering all this. Important as it is, the development team is a tiny part of the entire company. Whenever I design a form, a report, an email or whatever the end user will ever see, the first and most important thing to do is: get in their shoes. Understand what they really need and what problem they are trying to solve. See how we can provide it to them so that it’s as simple, concise, self-explanatory and usable as possible. Only then can we start thinking about code and the entire backend, and even then the most important thing to keep in mind is the end user. We’re not writing software for ourselves. Most of the time we’re not writing it for educated and exceptionally intelligent geeks either. We write it for housewives, grandmas, unqualified workers, accountants, ladies at bookshops or insurance companies, all kinds of business people. We write it for people who don’t care about software at all and do not have a thorough understanding of it. Nor do they care how good a time you were having while creating it. They just want to get the job done.

You’re Doing It Wrong

If someone has to ask or even think about how something works, it’s your failure. If they perform some crazy ritual like rebooting the computer or a piece of software, or wiping out a work directory, that’s your fault. If they have to go through five dialogs for a job that could be done with two clicks, or are forced to switch between windows when there is a better way, it’s your failure. When they go fetch some coffee while a report that they run 5 times a day is running, it’s your fault.
If there is a sequence of actions or form entries that can blow everything up, a little “don’t touch this” red button, it’s your fault. Not the end user’s. It’s not uncommon to see a sign in Polish offices that reads (sadly, literally): “Due to the introduction of a computer system, our operations are much slower. We are sorry for the inconvenience.” Now, that’s a huge, epic failure.

Better Ways

That’s quite abstract, so let me bring up a few examples. IKEA. I know furniture does not seem as complicated as software, but it’s not that trivial either. It takes some effort to package a cabinet or a chest of drawers in a cardboard box that can be assembled by the end user. They could deliver you some wood and a picture of a cabinet, and blame you for not knowing how to turn one into the other. They could deliver a bunch of needlessly complicated parts without a manual, and blame the user again. They know they need to sell and have returning customers, not just feel good themselves and blame others. What they do is carefully design every single part and deliver a manual with large, clear pictures and not a single line of text. And it’s completely fool-proof and obvious, so that even such a carpentry ignoramus as you can assemble it. LEGO. Some sets have thousands of pieces and are pretty complex. So complex that it would be extremely difficult even for you, a craftsman proficient in building stuff, to reproduce. Again, they could deliver 5,000 pieces and a single picture and put the blame on you for being unable to figure it out. Again, that’s not what they do. They want to sell and they want you to return. So they deliver a 200-page-long manual full of pictures, so detailed and fool-proof that even a child can do it. There are good examples in the software world as well. StackOverflow is nice, but only for a certain kind of user. It’s great for the Internet geeks who get the concept of upvotes, gamification, focusing on tiny narrow questions rather than wider discussion, etc.
Much less so for all kinds of scientists and, you know, regular people, who seem to be the intended audience of StackExchange. Google search and maps (for address search, intuitiveness and performance) and DuckDuckGo are pretty good. Wolfram Alpha. Skyscanner and Himpunk. Much of the fool-proof Apple hardware and software. In other words, when you know what it does and how to use it the first time you see it, and it Just Works, it’s great.

Conclusion

Successful startups know it. They want to sell, and if they make people think or overly complicate something, people will just walk on by. I guess many startups fail because they don’t realize it. Many established brands try to do it and learn from startups, simplifying and streamlining their UIs (Amazon, MS Office, Ebay…). It’s high time we applied it to all kinds of software, including internal corporate stuff and open source. After all, we’re only here to serve and solve the problems of real people. That’s the way you do it. Reference: Software for Use from our JCG partner Konrad Garus at the Squirrel’s blog....

Email filtering using Aspect and Spring Profile

During web application development the need to send emails often arises. However, sometimes the database is populated with data from production, and there is a risk of sending emails to real customers during email test execution. This post explains how to avoid that without explicitly writing code in the send-email function. We will use two techniques:

  • Spring Profiles – a mechanism to indicate what the running environment is (i.e. development, production, ...)
  • AOP – in simplified words, a mechanism to add logic to methods in a decoupled way.

I assume you already have Profiles set up on your project, so I will focus on the Aspect side. In this example the class which sends emails is EmailSender with the method send, as specified below:

public class EmailSender {
    // an empty default constructor is a must due to an AOP limitation
    public EmailSender() {}

    // Sends an email.
    // EmailEntity - object which contains all data required for sending (from, to, subject, ...)
    public void send(EmailEntity emailEntity) {
        // logic to send email
    }
}

Now we will add the logic which prevents sending email to customers when the code is not running in production. For this we will use an Aspect, so we won’t have to write it in the send method, and by that we maintain the separation-of-concerns principle. Create a class that will contain the filtering method:

@Aspect
@Component
public class EmailFilterAspect {

    public EmailFilterAspect() {}
}

Then create a Pointcut for catching the send method execution:

@Pointcut("execution(public void com.mycompany.util.EmailSender.send(..))")
public void sendEmail() {}

Since we need to control whether the method should be executed or not, we need to use the @Around annotation.
@Around("sendEmail()")
public void emailFilterAdvice(ProceedingJoinPoint proceedingJoinPoint) {
    try {
        proceedingJoinPoint.proceed(); // the send email method execution
    } catch (Throwable e) {
        e.printStackTrace();
    }
}

As a last point, we need to access the send method’s input parameter (i.e. get the EmailEntity) and verify we don’t send emails to customers from development.

@Around("sendEmail()")
public void emailFilterAdvice(ProceedingJoinPoint proceedingJoinPoint) {
    // get the current profile
    ProfileEnum profile = ApplicationContextProvider.getActiveProfile();

    Object[] args = proceedingJoinPoint.getArgs(); // get input parameters
    if (profile != ProfileEnum.PRODUCTION) {
        // outside production, only internal mails are allowed
        for (Object object : args) {
            if (object instanceof EmailEntity) {
                String to = ((EmailEntity) object).getTo();
                if (to != null && to.endsWith("@mycompany.com")) {
                    // internal mail - proceed with the send
                    try {
                        proceedingJoinPoint.proceed();
                    } catch (Throwable e) {
                        e.printStackTrace();
                    }
                }
            }
        }
    } else {
        // in production don't restrict emails
        try {
            proceedingJoinPoint.proceed();
        } catch (Throwable e) {
            e.printStackTrace();
        }
    }
}

That’s it. Regarding configuration, you need to include the AspectJ jars in your project. In Maven it looks like this:

<dependency>
    <groupId>org.aspectj</groupId>
    <artifactId>aspectjrt</artifactId>
    <version>${org.aspectj.version}</version>
</dependency>
<dependency>
    <groupId>org.aspectj</groupId>
    <artifactId>aspectjweaver</artifactId>
    <version>${org.aspectj.version}</version>
    <scope>runtime</scope>
</dependency>

and in your Spring application configuration XML file you need to enable AspectJ auto-proxying, e.g. with <aop:aspectj-autoproxy/>. Good luck! Reference: Email filtering using Aspect and Spring Profile from our JCG partner Gal Levinsky at the Gal Levinsky’s blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.
