How To Test Your Tests

When we write tests, we focus on the scenario we want to test, and then write that test. Pretty simple, right? That’s how our minds work. We can’t focus on many things at the same time. TDD acknowledges that, and its incremental nature is built around it. TDD or not, when we have a passing test, we should do an evaluation. Start with this table:

Validity: Does it test a valid scenario? Is this scenario always valid?
Readability: Of course I understand the test now, but will someone else understand it a year from now?
Speed: How quickly does it run? Will it slow down an entire suite?
Accuracy: When it fails, can I easily tell whether the problem is in the code, or do I need to debug?
Differentiation: How is this case different from its siblings? Can I tell just by looking at the tests?
Maintenance: How much work will I need to do around this test when requirements change? How fragile is it?
Footprint: Does the test clean up after itself? Or does it leave files, registry handles, threads, or a memory blob that can affect other tests?
Robustness: How easy is it to break this test? What kind of variation are we permitting, and is that variation allowed?
Deterministic: Does this test have dependencies (the computer clock, CPU, files, data) that can alter its result based on when or where it runs?
Isolation: Does the test rely on state that was not specified explicitly in it? If not, will the implicit state always be true?

If something’s not up to your standards (I’m assuming you’re a high-standards professional), fix it. Now, I hear you asking: do all this for every test? Let’s put it this way: if the test fails the evaluation, there’s going to be work later to fix it. When would you rather do it: now, when the test is fresh in your head, or later, when you have to dive back into code you haven’t seen in 6 months, instead of working on the exciting new feature you want to work on? It’s testing economics 101.
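To make the Deterministic criterion above concrete, here is a minimal, hypothetical sketch (the Invoice class and all its names are illustrative, not from the original post) of removing a hidden clock dependency by injecting java.time.Clock, so the test's result no longer depends on when it runs:

```java
import java.time.Clock;
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneOffset;

// Hypothetical example: a time-dependent check made deterministic by
// injecting java.time.Clock instead of calling LocalDate.now() directly.
class Invoice {
    private final LocalDate issued;

    Invoice(LocalDate issued) {
        this.issued = issued;
    }

    // The caller supplies the clock, so a test can pin "now" to a fixed instant.
    boolean isOverdue(Clock clock) {
        return LocalDate.now(clock).isAfter(issued.plusDays(30));
    }
}

public class Main {
    public static void main(String[] args) {
        Invoice invoice = new Invoice(LocalDate.of(2014, 1, 1));
        // A fixed clock makes the check repeatable on any machine, on any day.
        Clock fixed = Clock.fixed(Instant.parse("2014-03-01T00:00:00Z"), ZoneOffset.UTC);
        System.out.println(invoice.isOverdue(fixed));
    }
}
```

A test that passed Clock.systemDefaultZone() here would fail the Deterministic check; a test that passes a fixed clock does not.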
Do it now.

Reference: How To Test Your Tests from our JCG partner Gil Zilberfeld at the Geek Out of Water blog.

Do Software Developers Really Need Degrees?

When I first started out my career as a software developer, I didn’t have a degree. I took my first real job when I was on summer break from my first year of college. By the time the summer was up and it was time to enroll back in school, I found that the salary I was making from that summer job was about what I had expected to make when I graduated college—only I didn’t have any debt at this point—so, I dropped out and kept the job. But, did I make the right choice? Do you really need a university degree to be a computer programmer?

The difference between education and school

Just because you have a college degree doesn’t mean you have learned anything. That is the main problem I have with most traditional education programs today. School has become much more about getting a degree – a piece of paper – than about actually learning something of value. To some extent, I am preaching to the choir. If you have a degree that you worked hard for and paid a large amount of money for, you are more inclined to believe that piece of paper has more value than it really does. If you don’t have a degree, you are probably more inclined to believe that degrees are worthless and completely unnecessary—even though you may secretly wish you had one. So, whatever side you fall on, I am going to ask you to momentarily suspend your beliefs — well, biases really — and consider that both views are not exactly correct, that there is a middle ground somewhere in between the two viewpoints where a degree isn’t necessarily worthless and it isn’t necessarily valuable either. You see, the issue is not really whether or not a particular degree has any value. The degree itself represents nothing but a cost paid and time committed. A degree can be acquired by many different methods, none of which guarantee any real learning has taken place. If you’ve ever taken a college course, you know that it is more than possible to pass that course without actually learning much at all.
Now, don’t get me wrong, I’m not saying that you can’t learn anything in college. I’m not saying that every degree that is handed out is a fraud. I’m simply saying that the degree itself does not prove much; there is a difference between going to school and completing a degree program and actually learning something. Learning is not just memorizing facts. True learning is about understanding. You can memorize your multiplication tables and not understand what they mean. With that knowledge, you can multiply any two numbers that you have memorized the answer for, but you would lack the ability to multiply any numbers that you don’t already have a memorized answer for. If you understand multiplication, even without knowing any multiplication tables, you can figure out how to work out the answer to any multiplication problem — even if it takes you a while.

You can be highly educated without a degree

Traditional education systems are not the only way to learn things. You don’t have to go to school and get a degree in order to become educated. Fifty years ago, this probably wasn’t the case — although I can’t say for sure, since I wasn’t alive back then. Fifty years ago we didn’t have information at our fingertips. We didn’t have all the resources we have today that make education, on just about any topic, so accessible. A computer science degree is merely a collection of formalized curriculum. It is not magic. There is no reason a person couldn’t save the money and much of the time required to get a computer science degree from an educational institution by learning the exact same information on their own. Professors are not gifted beings who impart knowledge and wisdom to students simply by being in the same room with them. Sure, it may be easier to obtain an education by having someone spoon-feed it to you, but you do not need a teacher to learn. You can become your own teacher.
In fact, today there are a large number of online resources where you can get the equivalent of a degree, for free – or at least very cheaply:

- Coursera
- edX
- Khan Academy
- MIT OpenCourseWare
- Udemy
- Pluralsight (I have courses here)

Even if you have a degree, self-education is something you shouldn’t ignore—especially when it’s practically free. You can also find many great computer science textbooks online. For example, one of the best is Structure and Interpretation of Computer Programs – 2nd Edition (MIT Electrical Engineering and Computer Science).

So, is there any real benefit to having a degree?

My answer may surprise you, but, yes, right now I think there is. I told you that I had forgone continuing my education in order to keep my job, but what I didn’t tell you is that I went back and got my degree later. Now, I didn’t go back to college and quit my job, but I did think there was enough value in having an actual computer science degree that I decided to enroll in an online degree program and get my degree while keeping my job. Why did I go back and get my degree? Well, it had nothing to do with education. By that point, I knew that anything I wanted or needed to learn, I could learn myself. I didn’t really need a degree. I already had a good-paying job and plenty of work experience. But, I realized that there would be a significant number of opportunities that I might be missing out on if I didn’t go through the formal process of getting that piece of paper. The reality of the situation is that even though you and I may both know that degrees don’t necessarily mean anything, not everyone holds the same opinion. You may be able to do your job and you may know your craft better than someone who has a degree, but sometimes that piece of paper is going to make the difference between getting a job or not, and it is going to have an influence on how high you can rise in a corporate environment. We can’t simply go by our own values and expect the world to go along with them.
We have to realize that some people are going to place a high value on having a degree—whether you actually learned anything while getting one or not. But, at the same time, I believe you can get by perfectly well without one – you’ll just have a few less opportunities – a few more doors that are closed to you. For a software developer, the most important thing is the ability to write code. If you can demonstrate that ability, most employers will hire you—at least it has been my experience that this is the case. I have the unique situation of being on both sides of the fence. I’ve tried to get jobs when I didn’t have a degree and I’ve tried to get jobs when I did have a degree. I’ve found that in both cases, the degree was not nearly as important as being able to prove that I could actually write good code and solve problems. So, I know it isn’t necessary to have a degree, but it doesn’t hurt either.

What should you do if you are starting out?

If I were starting out today, here is what I would do: I would plan to get my degree as cheaply as possible and to either work the whole time or, better yet, create my own product or company during that time. I’d try to get my first two years of school at a community college where the tuition is extremely cheap. During that time, I’d try to gain actual work experience either at a real job or developing my own software. Once the two-year degree was complete, I’d enroll in a university, hopefully getting scholarships that would pay for most of my tuition. I would also avoid taking on any student debt. I would make sure that I was making enough money outside of school to be able to afford the tuition. I realize this isn’t always possible, but I’d try to minimize that debt as much as possible. What you absolutely don’t want to do is start working four years later than you could be and have a huge debt to go with it.
Chances are, the small amount of extra salary your degree might afford you will not make up for the sacrifice of losing four years of work experience and pay and going deeply into debt. Don’t make that mistake. The other route I’d consider is to get your education completely online – ignoring traditional school entirely. Tuition prices are constantly rising and the value of a traditional degree is constantly decreasing – especially in the field of software development. If you go this route, you need to have quite a bit of self-motivation and self-discipline. You need to be willing to create your own education plan and to start building your own software that will prove that you know what you are doing. The biggest problem you’ll face without a degree is getting that first job. It is difficult to get a job with no experience, but it is even more difficult when you don’t have a degree. What you need is a portfolio of work that shows that you can actually write code and develop software. I’d even recommend creating your own company and creating at least one software product that you sell through that company. You can put that experience down on your resume and essentially create your own first job. (A mobile app is a great product for a beginning developer to create.)

What if you are already an experienced developer?

Should you go back and get your degree now? It really depends on your goals. If you are planning on climbing the corporate ladder, then yes. In a corporate environment, you are very likely to hit a premature glass ceiling if you don’t have a degree. That is just how the corporate world works. Plus, many corporations will help pay for your degree, so why not take advantage of that? If you just want to be a software developer and write code, then perhaps not. It might not be worth the investment, unless you can do it very cheaply—and even then the time investment might not be worth it.
You really have to weigh how much extra you think you’ll be able to earn against how much the degree will cost you. You might be better off self-educating to improve your skills than going back to school for a traditional degree.

Reference: Do Software Developers Really Need Degrees? from our JCG partner John Sonmez at the Making the Complex Simple blog.

Mapping your Entities to DTO’s Using Java 8 Lambda expressions

We all face cluttered overhead code when we need to convert our DTOs to entities (Hibernate entities, etc.) and back. In this example I’ll demonstrate how much shorter the code gets with Java 8. Let’s create the target DTO:

```java
public class ActiveUserListDTO {

    public ActiveUserListDTO() {
    }

    public ActiveUserListDTO(UserEntity userEntity) {
        this.username = userEntity.getUsername();
        ...
    }
}
```

A simple find method to retrieve all entities using the Spring Data JPA API:

```java
userRepository.findAll();
```

Problem: the findAll() method signature (like many others) returns java.lang.Iterable&lt;T&gt;:

```java
java.lang.Iterable<T> findAll(java.lang.Iterable<ID> iterable)
```

We can’t make a Stream out of java.lang.Iterable (streams work on collections; every Collection is Iterable, but not every Iterable is necessarily a collection). So how do we get a Stream object in order to get Java 8 lambda power? Let’s use StreamSupport to convert the Iterable into a Stream:

```java
Stream<UserEntity> userEntityStream = StreamSupport.stream(userRepository.findAll().spliterator(), false);
```

Great. Now we’ve got a Stream in our hands, which is the key to Java 8 lambdas! What’s left is to map and collect:

```java
List<ActiveUserListDTO> activeUserListDTOs = userEntityStream.map(ActiveUserListDTO::new).collect(Collectors.toList());
```

I am using a Java 8 method reference to instantiate (and map) each entity into a DTO. So let’s make one short line out of everything:

```java
List<ActiveUserListDTO> activeUserListDTOs = StreamSupport.stream(userRepository.findAll().spliterator(), false)
        .map(ActiveUserListDTO::new)
        .collect(Collectors.toList());
```

That’s neat!! Idan.
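The whole pattern can be tried end to end in a small self-contained sketch. Here UserEntity and ActiveUserListDTO are simplified stand-ins for the real Hibernate entity and DTO, and a plain Iterable plays the role of the repository’s findAll() result:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.StreamSupport;

// Simplified stand-in for the Hibernate entity.
class UserEntity {
    private final String username;

    UserEntity(String username) {
        this.username = username;
    }

    String getUsername() {
        return username;
    }
}

// Simplified stand-in for the target DTO; the constructor does the mapping.
class ActiveUserListDTO {
    final String username;

    ActiveUserListDTO(UserEntity userEntity) {
        this.username = userEntity.getUsername();
    }
}

public class Main {
    // Bridges Iterable (what findAll() returns) to Stream, then maps to DTOs.
    static List<ActiveUserListDTO> toDtos(Iterable<UserEntity> entities) {
        return StreamSupport.stream(entities.spliterator(), false)
                .map(ActiveUserListDTO::new)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Iterable<UserEntity> entities =
                Arrays.asList(new UserEntity("alice"), new UserEntity("bob"));
        List<ActiveUserListDTO> dtos = toDtos(entities);
        System.out.println(dtos.size());
    }
}
```

Swapping the Arrays.asList(...) stand-in for a real repository’s findAll() gives exactly the one-liner shown above.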
Related Articles:

- Auditing infrastructure for your app using Spring AOP, Custom annotations and Reflection
- AmazonSQS and Spring for messaging queue
- Authentication and Authorization service as an open source solution
- Invoking Async method call using Future object
- Using Spring Integration

Reference: Mapping your Entities to DTO’s Using Java 8 Lambda expressions from our JCG partner Idan Fridman at the IdanFridman.com blog.

Use Cases for Elasticsearch: Document Store

I’ll be giving an introductory talk about Elasticsearch twice in July, first at Developer Week Nürnberg, then at Java Forum Stuttgart. I am showing some of the features of Elasticsearch by looking at certain use cases. To prepare for the talks I will try to describe each of the use cases in a blog post as well. When it comes to Elasticsearch, the first thing to look at is often the search part. But in this post I would like to start with its capabilities as a distributed document store.

Getting Started

Before we start we need to install Elasticsearch, which fortunately is very easy. You can just download the archive, unpack it and use a script to start it. As it is a Java-based application you of course need to have a Java runtime installed.

```shell
# download archive
wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.2.1.zip
# zip is for windows and linux
unzip elasticsearch-1.2.1.zip
# on windows: elasticsearch.bat
elasticsearch-1.2.1/bin/elasticsearch
```

Elasticsearch can be talked to using HTTP and JSON. When looking around at examples you will often see curl being used because it is widely available (see this post on querying Elasticsearch using plugins for alternatives). To see if it is up and running you can issue a GET request on port 9200: curl -XGET http://localhost:9200. If everything is set up correctly Elasticsearch will respond with something like this:

```json
{
  "status" : 200,
  "name" : "Hawkeye",
  "version" : {
    "number" : "1.2.1",
    "build_hash" : "6c95b759f9e7ef0f8e17f77d850da43ce8a4b364",
    "build_timestamp" : "2014-06-03T15:02:52Z",
    "build_snapshot" : false,
    "lucene_version" : "4.8"
  },
  "tagline" : "You Know, for Search"
}
```

Storing Documents

When I say document this means two things. First, Elasticsearch stores JSON documents and even uses JSON internally a lot. This is an example of a simple document that describes talks for conferences.
```json
{
  "title" : "Anwendungsfälle für Elasticsearch",
  "speaker" : "Florian Hopf",
  "date" : "2014-07-17T15:35:00.000Z",
  "tags" : ["Java", "Lucene"],
  "conference" : {
    "name" : "Java Forum Stuttgart",
    "city" : "Stuttgart"
  }
}
```

There are fields and values, arrays and nested documents. Each of those features is supported by Elasticsearch. Besides the JSON documents that are used for storing data in Elasticsearch, "document" also refers to the underlying library, Lucene, which is used to persist the data and handles data as documents consisting of fields. So this is a perfect match: Elasticsearch uses JSON, which is very popular and supported by lots of technologies, and the underlying data structures also use documents. When indexing a document we can issue a POST request to a certain URL. The body of the request contains the document to be stored; the file we are passing contains the content we have seen above.

```shell
curl -XPOST http://localhost:9200/conferences/talk/ --data-binary @talk-example-jfs.json
```

When started, Elasticsearch listens on port 9200 by default. For storing information we need to provide some additional information in the URL. The first segment after the port is the index name. An index name is a logical grouping of documents. If you want to compare it to the relational world, this can be thought of as the database. The next segment that needs to be provided is the type. A type can describe the structure of the documents that are stored in it. You can again compare this to the relational world, where this could be a table, but that is only slightly correct. Documents of any kind can be stored in Elasticsearch; that is why it is often called schema free. We will look at this behaviour in the next post, where you will see that schema free isn’t the most appropriate term for it. For now it is enough to know that you can store documents with completely different structure in Elasticsearch. This also means you can evolve your documents and add new fields as appropriate.
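Since the type does not enforce a fixed structure, a second talk document with an extra field could be indexed with the exact same POST command. The document below is a hypothetical illustration (the slides field and all its values are invented for this example, not taken from the original post):

```json
{
  "title" : "Use Cases for Elasticsearch",
  "speaker" : "Florian Hopf",
  "date" : "2014-07-15T16:30:00.000Z",
  "tags" : ["Java", "Lucene"],
  "slides" : "http://example.com/elasticsearch-slides.pdf"
}
```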
Note that neither the index nor the type needs to exist when you start indexing documents. They will be created automatically, one of the many features that makes it so easy to get started with Elasticsearch. When you store a document in Elasticsearch it will automatically generate an id for you that is also returned in the result.

```json
{
  "_index" : "conferences",
  "_type" : "talk",
  "_id" : "GqjY7l8sTxa3jLaFx67_aw",
  "_version" : 1,
  "created" : true
}
```

In case you want to determine the id yourself you can also use a PUT on the same URL we have seen above, plus the id. I don’t want to get into trouble by calling this RESTful, but did you notice that Elasticsearch makes good use of the HTTP verbs? Either way you stored the document, you can always retrieve it by specifying the index, type and id:

```shell
curl -XGET http://localhost:9200/conferences/talk/GqjY7l8sTxa3jLaFx67_aw?pretty=true
```

which will respond with something like this:

```json
{
  "_index" : "conferences",
  [...]
  "_source" : {
    "title" : "Anwendungsfälle für Elasticsearch",
    "speaker" : "Florian Hopf",
    "date" : "2014-07-17T15:35:00.000Z",
    "tags" : ["Java", "Lucene"],
    "conference" : {
      "name" : "Java Forum Stuttgart",
      "city" : "Stuttgart"
    }
  }
}
```

You can see that the source in the response contains exactly the document we have indexed before.

Distributed Storage

So far we have seen how Elasticsearch stores and retrieves documents and we have learned that you can evolve the schema of your documents. The huge benefit we haven’t touched on so far is that it is distributed. Each index can be split into several shards that can then be distributed across several machines. To see the distributed nature in action we fortunately don’t need several machines. First, let’s see the state of our currently running instance in the plugin elasticsearch-kopf (see this post for details on how to install and use it). On the left you can see that there is one machine running. The row on top shows that it contains our index conferences.
Even though we didn’t explicitly tell Elasticsearch, it created 5 shards for our index that are currently all on the instance we started. As each of the shards is a Lucene index in itself, even if you are running your index on one instance the documents you are storing are already distributed across several Lucene indexes. We can now use the same installation to start another node. After a short time we should see the instance in the dashboard as well. As the new node joins the cluster (which by default happens automatically), Elasticsearch will automatically copy the shards to the new node. This is because by default it not only uses 5 shards but also 1 replica, which is a copy of a shard. Replicas are always placed on different nodes than their shards and are used for distributing the load and for fault tolerance. If one of the nodes crashes the data is still available on the other node. Now, if we start another node something else will happen: Elasticsearch will rebalance the shards. It will copy and move shards to the new node so that the shards are distributed evenly across the machines. Once defined when creating an index, the number of shards can’t be changed. That’s why you normally overallocate (create more shards than you need right now) or, if your data allows it, you can create time-based indices. Just be aware that sharding comes with some cost, so think carefully about what you need. Designing your distribution setup can still be difficult, even though Elasticsearch does a lot for you out of the box.

Conclusion

In this post we have seen how easy it is to store and retrieve documents using Elasticsearch. JSON and HTTP are technologies that are available in lots of programming environments. The schema of your documents can be evolved as your requirements change. Elasticsearch distributes the data by default and lets you scale across several machines, so it is well suited even for very large data sets.
Though using Elasticsearch as a document store is a real use case, it is hard to find users who are only using it that way. Nobody retrieves documents only by id as we have done in this post; everybody uses the rich query facilities we will look at next week. Nevertheless, you can read about how HipChat uses Elasticsearch to store billions of messages and how Engagor uses Elasticsearch here and here. Both of them are using Elasticsearch as their primary storage. Though it sounds more drastic than it probably is: if you are considering using Elasticsearch as your primary storage you should also read this analysis of Elasticsearch’s behaviour in case of network partitions. Next week we will be looking at using Elasticsearch for something obvious: search.

Reference: Use Cases for Elasticsearch: Document Store from our JCG partner Florian Hopf at the Dev Time blog.

Making the Reactive Queue durable with Akka Persistence

Some time ago I wrote how to implement a reactive message queue with Akka Streams. The queue supports streaming send and receive operations with back-pressure, but has one downside: all messages are stored in-memory, and hence in case of a restart are lost. But this can be easily solved with the experimental akka-persistence module, which just got an update in Akka 2.3.4.

Queue actor refresher

To make the queue durable, we only need to change the queue actor; the reactive/streaming parts remain intact. Just as a reminder, the reactive queue consists of:

- a single queue actor, which holds an internal priority queue of messages to be delivered. The queue actor accepts actor-messages to send, receive and delete queue-messages
- a Broker, which creates the queue actor, listens for connections from senders and receivers, and creates the reactive streams when a connection is established
- a Sender, which sends messages to the queue (for testing, one message each second). Multiple senders can be started. Messages are sent only if they can be accepted (back-pressure from the broker)
- a Receiver, which receives messages from the queue, as they become available and as they can be processed (back-pressure from the receiver).

Going persistent (remaining reactive)

The changes needed are quite minimal. First of all, the QueueActor needs to extend PersistentActor, and define two methods:

- receiveCommand, which defines the “normal” behaviour when actor-messages (commands) arrive
- receiveRecover, which is used during recovery only, and where replayed events are sent

But in order to recover, we first need to persist some events! This should of course be done when handling the message queue operations. For example, when sending a message, a MessageAdded event is persisted using persistAsync:

```scala
def handleQueueMsg: Receive = {
  case SendMessage(content) =>
    val msg = sendMessage(content)
    persistAsync(msg.toMessageAdded) { msgAdded =>
      sender() ! SentMessage(msgAdded.id)
      tryReply()
    }

  // ...
}
```

persistAsync is one way of persisting events using akka-persistence. The other, persist (which is also the default one), buffers subsequent commands (actor-messages) until the event is persisted; this is a bit slower, but also easier to reason about and remain consistent with. However, in the case of the message queue such behaviour isn’t necessary. The only guarantee we need is that the message send is acknowledged only after the event is persisted, and that’s why the reply is sent in the after-persist event handler. You can read more about persistAsync in the docs. Similarly, events are persisted for other commands (actor-messages, see QueueActorReceive). Both for deletes and receives we are using persistAsync, as the queue aims to provide an at-least-once delivery guarantee. The final component is the recovery handler, which is defined in QueueActorRecover (and then used in QueueActor). Recovery is quite simple: the events correspond to adding a new message, updating the “next delivery” timestamp, or deleting. The internal representation uses both a priority queue and a by-id map for efficiency, so when the events are handled during recovery we only build the map, and use the RecoveryCompleted special event to build the queue as well. The special event is sent by akka-persistence automatically. And that’s all! If you now run the broker, send some messages, stop the broker and start it again, you’ll see that the messages are recovered, and indeed they get received if a receiver is run. The code isn’t production-ready of course. The event log is going to grow constantly, so it would certainly make sense to make use of snapshots, plus delete old events/snapshots to keep the storage size small and recovery fast.

Replication

Now that the queue is durable, we can also have a replicated persistent queue almost for free: we simply need to use a different journal plugin! The default one relies on LevelDB and writes data to the local disk.
Other implementations are available: for Cassandra, HBase, and MongoDB. By making a simple switch of the persistence backend we can have our messages replicated across a cluster.

Summary

With the help of two experimental Akka modules, reactive streams and persistence, we have been able to implement a durable, reactive queue with quite a minimal amount of code. And that’s just the beginning, as the two technologies are only starting to mature! If you’d like to modify/fork the code, it is available on GitHub.

Reference: Making the Reactive Queue durable with Akka Persistence from our JCG partner Adam Warski at the Blog of Adam Warski blog.

Common mistakes when using Spring MVC

When I started my career around 10 years ago, Struts MVC was the norm in the market. However, over the years, I observed Spring MVC slowly gaining popularity. This is not a surprise to me, given the seamless integration of Spring MVC with the Spring container and the flexibility and extensibility that it offers. From my journey with Spring so far, I have often seen people making some common mistakes when configuring the Spring framework. This happened more often compared to the days when people still used the Struts framework. I guess it is the trade-off between flexibility and usability. Plus, the Spring documentation is full of samples but lacks explanation. To help fill this gap, this article will try to elaborate on and explain 3 common issues that I often see people encounter.

Declare beans in Servlet context definition file

Every one of us knows that Spring uses ContextLoaderListener to load the Spring application context. Still, when declaring the DispatcherServlet, we need to create the servlet context definition file with the name “${servlet.name}-context.xml”. Ever wonder why?

Application Context Hierarchy

Not all developers know that the Spring application context has a hierarchy. Let’s look at this method:

```java
org.springframework.context.ApplicationContext.getParent()
```

It tells us that a Spring application context has a parent. So, what is this parent for? If you download the source code and do a quick reference search, you should find that the Spring application context treats the parent as its extension. If you do not mind reading code, let me show you one example of the usage, in the method BeanFactoryUtils.beansOfTypeIncludingAncestors():

```java
if (lbf instanceof HierarchicalBeanFactory) {
    HierarchicalBeanFactory hbf = (HierarchicalBeanFactory) lbf;
    if (hbf.getParentBeanFactory() instanceof ListableBeanFactory) {
        Map parentResult = beansOfTypeIncludingAncestors((ListableBeanFactory) hbf.getParentBeanFactory(), type);
        ...
    }
}
return result;
```

If you go through the whole method, you will find that the Spring application context scans for beans in the internal context before searching the parent context. With this strategy, effectively, the Spring application context does a reverse breadth-first search to look for beans.

ContextLoaderListener

This is a well-known class that every developer should know. It helps to load the Spring application context from a pre-defined context definition file. As it implements ServletContextListener, the Spring application context will be loaded as soon as the web application is loaded. This brings an indisputable benefit when loading a Spring container that contains beans with the @PostConstruct annotation or batch jobs. In contrast, any bean defined in the servlet context definition file will not be constructed until the servlet is initialized. When is the servlet initialized? It is nondeterministic. In the worst case, you may need to wait until users make the first hit on the servlet mapping URL to get the Spring context loaded. With the above information, where should you declare all your precious beans? I feel the best place to do so is the context definition file loaded by ContextLoaderListener, and nowhere else. The trick here is the storage of the ApplicationContext as a servlet attribute under the key org.springframework.web.context.WebApplicationContext.ROOT_WEB_APPLICATION_CONTEXT_ATTRIBUTE. Later, the DispatcherServlet will load this context from the ServletContext and assign it as the parent application context.

```java
protected WebApplicationContext initWebApplicationContext() {
    WebApplicationContext rootContext =
            WebApplicationContextUtils.getWebApplicationContext(getServletContext());
    ...
}
```

Because of this behaviour, it is highly recommended to create an empty servlet application context definition file and define your beans in the parent context.
This will help avoid duplicating bean creation when the web application is loaded and guarantee that batch jobs are executed immediately. Theoretically, defining a bean in the servlet application context definition file makes the bean unique and visible to that servlet only. However, in my 8 years of using Spring, I have hardly found any use for this feature except for defining web service endpoints.

Declare Log4jConfigListener after ContextLoaderListener

This is a minor bug, but it will catch you when you do not pay attention to it. Log4jConfigListener is my preferred solution over -Dlog4j.configuration, as we can control the log4j loading without altering the server bootstrap process. Obviously, it should be the first listener to be declared in your web.xml. Otherwise, all of your effort to declare proper logging configuration will be wasted.

Duplicated Beans due to mismanagement of bean exploration

In the early days of Spring, developers spent more time typing in XML files than in Java classes. For every new bean, we needed to declare and wire the dependencies ourselves, which was clean and neat, but very painful. No surprise that later versions of the Spring framework evolved toward greater usability. Nowadays, developers may only need to declare the transaction manager, data source, property source and web service endpoints, and leave the rest to component scanning and auto-wiring. I like these new features, but this great power needs to come with great responsibility; otherwise, things will get messy quickly. Component scanning and bean declaration in XML files are totally independent. Therefore, it is perfectly possible to have identical beans of the same class in the bean container if the beans are annotated for component scanning and declared manually as well. Fortunately, this kind of mistake should only happen with beginners. The situation gets more complicated when we need to integrate some embedded components into the final product.
Then we really need a strategy to avoid duplicated bean declarations.The above diagram shows a realistic sample of the kind of problems we face in daily life. Most of the time, a system is composed of multiple components and often, one component serves multiple products. Each application and component has its own beans. In this case, what is the best way to declare beans so as to avoid duplicated declarations? Here is my proposed strategy:Ensure that each component starts with a dedicated package name. It makes our life easier when we need to do component scanning. Don’t dictate to the team that develops the component how to declare beans in the component itself (annotations versus XML declarations). It is the responsibility of the developer who packages the components into the final product to ensure there are no duplicated bean declarations. If there is a context definition file packed within the component, place it in a package rather than in the root of the classpath. It is even better to give it a specific name. For example, src/main/resources/spring-core/spring-core-context.xml is way better than src/main/resources/application-context.xml. Imagine what we could do if we packed a few components that each contained the same file application-context.xml in the identical package! Don’t provide any annotation for component scanning (@Component, @Service or @Repository) if you already declare the bean in a context file. Split environment-specific beans like the data source and property source into a separate file and reuse it. Don’t do component scanning on a general package. For example, instead of scanning the org.springframework package, it is easier to manage if we scan several sub-packages like org.springframework.core, org.springframework.context, org.springframework.ui,…Conclusion I hope you found the above tips useful for daily usage. 
If there is any doubt or any other idea, please help by sending feedback.Reference: Common mistakes when using Spring MVC from our JCG partner Nguyen Anh Tuan at the Developers Corner blog....

How We Chose Framework

When you develop your application, most of the time you are writing code that deals with resources: lines that open database connections, allocate memory, and the like. The lower level you code at, the more code deals with the computational environment. This is cumbersome, and though it may be enjoyable for some programmers, the less such code is needed the better. The real effort delivering business value is when you write code that implements business functions. Obviously, you cannot simply decide to write only business-function code. The other types of code are also needed for the application to run, and it is also true that the border between infrastructure code and business code is sometimes blurry. You sometimes just cannot tell whether the code you type is infrastructure or business. What you really can do is select the framework that fits the business problem best: something that is easy to configure, does not need boilerplate code, and is easy to learn. That way you can focus more on the business code. Easy to say, hard to do. How can you tell which framework will be best in the long run, when the project has so many uncertainties? You cannot tell precisely. But you can try and strive for more precision, and having a model to follow does not hurt. So what is the model in this case? During the lifetime of a project there will be a constant effort to develop the business logic. If the business logic is fixed, the number of code lines needed to implement it cannot change much. There may be some difference because some programming languages are more verbose than others, but this is not significant. The major difference is in the framework-supporting code. There is also an effort to learn the framework; however, that may be negligible for a longer project. 
This effort is needed at the start of the project, say sprint 1 and 2, and after that this fixed cost diminishes compared to the total cost of development. For the model I set up, I will neglect this effort, not least because I cannot measure a priori how much effort an average programmer needs to learn a specific framework. So the final, very simplified model compares the amount of code delivering business value to the amount of code configuring and supporting the selected framework. How to measure this? I usually… Well, not usually. Selecting a framework is not an everyday practice. What we did in our team last time to perform a selection was the following: We preselected five possible frameworks. We ruled out one of them in the first round as not being widely known and used; we did not want to be on the bleeding edge. Another was filtered out when closer examination showed that the framework was a total misfit for our purpose. Three remained. After that we looked up projects on GitHub that used one of the frameworks, at least two for each framework (and not more than three). We looked at 8 projects in total and counted the lines, categorizing each as business versus framework code. And then we realized that this just cannot be done within the lifetime of a human, so we made it simpler: we started to categorize the classes based on their names. There were business classes related to some business data, and also classes named after some business functions. The rest were treated as framework-supporting, configuration classes. The final outcome was sculpted into a good old ppt presentation, and we added the two slides to the other slides that qualitatively analyzed the three frameworks, listing the pros and cons. The final outcome, no surprise, was coherent: the calculation showed that the framework requiring the least configuration and supporting code was the one we favored anyway. What was the added value then? 
Making the measurement, we had to review projects, and we learned a lot about the frameworks. Not as much as someone coding in them, but more than just staring at marketing materials. We touched real code that programmers created while facing the real problems and the real features of the frameworks. This also helps the evaluator gain more knowledge, gives a rail to grab onto, and shows where to look and what to try when piloting the framework. It is also an extremely important result that the decision process left less doubt in us. If the outcome had been the opposite, we would have been in trouble, and it would have made us think hard: why did we favor a framework that needs more business-irrelevant code? But it did not. The result was consistent with common sense. Would I recommend this calculation as the sole source for framework selection? Definitely not. But it can be a good addition that you can perform by burning two or three days of your scrum team's time, and it also helps your team get the tips of their fingers into new technologies.Reference: How We Chose Framework from our JCG partner Peter Verhas at the Java Deep blog....

10 things you can do as a developer to make your app secure: #8 Leverage other people’s Code (Carefully)

As you can see from the previous posts, building a secure application takes a lot of work. One short cut to secure software can be to take advantage of the security features of your application framework. Frameworks like .NET and Rails and Play and Django and Yii provide lots of built-in security protection if you use them properly. Look to resources like OWASP’s .NET Project and .NET Security Cheat Sheet, the Ruby on Rails Security Guide, the Play framework Security Guide, Django’s security documentation, or How to write secure Yii applications, Apple’s Secure Coding Guide or the Android security guide for developers for framework-specific security best practices and guidelines. There will probably be holes in what your framework provides, which you can fill in using security libraries like Apache Shiro, or Spring Security, or OWASP’s comprehensive (and heavyweight) ESAPI, and special purpose libraries like Jasypt or Google KeyCzar and the Legion of the Bouncy Castle for crypto, and encoding libraries for XSS protection and protection from other kinds of injection. Keep frameworks and libraries up to date If you are going to use somebody else’s code, you also have to make sure to keep it up to date. Over the past year or so, high-profile problems including a rash of serious vulnerabilities in Rails in 2013 and the recent OpenSSL Heartbleed bug have made it clear how important it is to know all of the Open Source frameworks and libraries that you use in your application (including in the run-time stack), and to make sure that this code does not have any known serious vulnerabilities. We’ve known for a while that popular Open Source software components are also popular (and easy) attack targets for bad guys. And we’re making it much too easy for the bad guys. 
A 2012 study by Aspect Security and Sonatype looked at 113 million downloads of the most popular Java frameworks (including Spring, Apache CXF, Hibernate, Apache Commons, Struts and Struts2, GWT, JSF, Tapestry and Velocity) and security libraries (including Apache Shiro, Jasypt, ESAPI, BouncyCastle and AntiSamy). They found that 37% of this software contained known vulnerabilities, and that people continued to download obsolete versions of software with well-known vulnerabilities more than ¼ of the time. This has become a common enough and serious enough problem that using software frameworks and other components with known vulnerabilities is now in the OWASP Top 10 Risk list. Find Code with Known Vulnerabilities and Patch It – Easy, Right? You can use a tool like OWASP’s free Dependency Check or commercial tools like Sonatype CLM to keep track of Open Source components in your repositories and to identify code that contains known vulnerabilities. Once you find the problems, you have to fix them – and fix them fast. Research by White Hat Security shows that serious security vulnerabilities in most Java apps take an average of 91 days to fix once a vulnerability is found. That’s leaving the door wide open for way too long, almost guaranteeing that bad guys will find their way in. If you don’t take responsibility for this code, you can end up making your app less secure instead of more secure. Next: let’s go back to the beginning, and look at security in requirements.Reference: 10 things you can do as a developer to make your app secure: #8 Leverage other people’s Code (Carefully) from our JCG partner Jim Bird at the Building Real Software blog....

Java EE Concurrency API Tutorial

This is a sample chapter taken from the Practical Java EE 7 development on WildFly book by Francesco Marchioni. This chapter discusses the new Java EE Concurrency API (JSR 236), which outlines a standard way to execute tasks in parallel on a Java EE container using a set of managed resources. In order to describe how to use this API in your applications, we will follow this roadmap:A short introduction to the Concurrency Utilities How to leverage asynchronous tasks using the ManagedExecutorService How to schedule tasks at specific times using the ManagedScheduledExecutorService How to create dynamic proxy objects which add the contextual information available in a Java EE environment How to use the ManagedThreadFactory to create managed threads to be used by your applicationsOverview of Concurrency Utilities Prior to Java EE 7, executing concurrent tasks within a Java EE container was widely acknowledged as a dangerous practice and sometimes even prohibited by the container: “The enterprise bean must not attempt to manage threads. The enterprise bean must not attempt to start, stop, suspend, or resume a thread, or to change a thread’s priority or name. The enterprise bean must not attempt to manage thread groups” In fact, creating your own unmanaged threads in a Java EE container using the J2SE API would not guarantee that the context of the container is propagated to the thread executing the task. The only available pattern was to use asynchronous EJBs or Message Driven Beans in order to execute a task asynchronously; most often this was enough for simple fire-and-forget patterns, yet control of the threads still lay in the hands of the container. With the Java EE Concurrency API (JSR 236) you can use extensions to the java.util.concurrent API as managed resources, that is, managed by the container. 
The only difference from standard J2SE programming is that you will retrieve your managed resources from the JNDI tree of the container. Yet you will still use your Runnable interfaces or classes that are part of the java.util.concurrent package, such as Future or ScheduledFuture. In the next section, we will start with the simplest example, which is executing an asynchronous task using the ManagedExecutorService. Using the ManagedExecutorService to submit tasks In order to create our first asynchronous execution we will show how to use the ManagedExecutorService, which extends the Java SE ExecutorService to provide methods for submitting tasks for execution in a Java EE environment. By using this managed service, the context of the container is propagated to the thread executing the task. The ManagedExecutorService is included as part of the EE configuration of the application server:

<subsystem xmlns="urn:jboss:domain:ee:2.0">
  ...
  <concurrent>
    ...
    <managed-executor-services>
      <managed-executor-service name="default"
          jndi-name="java:jboss/ee/concurrency/executor/default"
          context-service="default" hung-task-threshold="60000"
          core-threads="5" max-threads="25" keepalive-time="5000"/>
    </managed-executor-services>
    ...
  </concurrent>
</subsystem>

In order to create our first example, we retrieve the ManagedExecutorService from the JNDI context of the container as follows:

@Resource(name = "DefaultManagedExecutorService")
ManagedExecutorService executor;

By using the ManagedExecutorService instance, you are able to submit tasks that implement either the java.lang.Runnable interface or the java.util.concurrent.Callable interface. Instead of having a run() method, the Callable interface offers a call() method, which can return any generic type. 
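The submit-and-get pattern is the same one exposed by the plain J2SE ExecutorService, so it can be tried outside the container. The sketch below reproduces the chapter's CallableTask (a summation of the numbers 1..id) and runs it on a J2SE executor; the class `CallableDemo` and the helper `runTask` are my own names, and the J2SE executor merely stands in for the managed one, which only exists inside an application server:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class CallableDemo {

    // The chapter's CallableTask: sums the numbers 1..id.
    static class CallableTask implements Callable<Long> {
        private final int id;

        CallableTask(int id) {
            this.id = id;
        }

        public Long call() {
            long summation = 0;
            for (int i = 1; i <= id; i++) {
                summation += i;
            }
            return summation;
        }
    }

    // Submit the task to a plain J2SE ExecutorService and block on the result.
    static long runTask(int id) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            Future<Long> future = executor.submit(new CallableTask(id));
            return future.get(); // blocks until call() completes
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            executor.shutdown();
        }
    }
}
```

Running `CallableDemo.runTask(5)` returns 15, the summation of 1 through 5, computed on a background thread.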
Coding a simple Asynchronous Task So let’s see a simple Servlet example which fires an asynchronous task using the ManagedExecutorService:

@WebServlet("/ExecutorServlet")
public class ExecutorServlet extends HttpServlet {

    @Resource(name = "DefaultManagedExecutorService")
    ManagedExecutorService executor;

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        PrintWriter writer = response.getWriter();
        executor.execute(new SimpleTask());
        writer.write("Task SimpleTask executed! check logs");
    }
}

The class SimpleTask in our example implements the Runnable interface, providing concurrent execution:

public class SimpleTask implements Runnable {
    @Override
    public void run() {
        System.out.println("Thread started.");
    }
}

Retrieving the result from the Asynchronous Task The above Task is a good option for a down-to-earth scenario; as you might have noticed, there’s no way to intercept a return value from the Task. In addition, when using Runnable you are constrained to unchecked exceptions (if run() threw a checked exception, who would catch it? There is no way for you to enclose that run() call in a handler, since you don’t write the code that invokes it). If you want to overcome these limitations, you can implement the java.util.concurrent.Callable interface instead, submit it to the ExecutorService, and wait for the result via the Future returned by ExecutorService.submit(). 
Let’s see a new version of our Servlet, which captures the result of a Task named CallableTask:

@WebServlet("/CallableExecutorServlet")
public class CallableExecutorServlet extends HttpServlet {

    @Resource(name = "DefaultManagedExecutorService")
    ManagedExecutorService executor;

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        PrintWriter writer = response.getWriter();
        Future<Long> futureResult = executor.submit(new CallableTask(5));
        while (!futureResult.isDone()) {
            // Wait
            try { Thread.sleep(100); } catch (InterruptedException e) { e.printStackTrace(); }
        }
        try {
            writer.write("Callable Task returned " + futureResult.get());
        } catch (Exception e) { e.printStackTrace(); }
    }
}

As you can see from the code, we are polling for task completion using the isDone() method. When the task is completed, we can call the Future’s get() method and obtain the return value. Now let’s see our CallableTask implementation which, in our example, returns the summation of the numbers up to a given value:

public class CallableTask implements Callable<Long> {

    private int id;

    public CallableTask(int id) {
        this.id = id;
    }

    public Long call() {
        long summation = 0;
        for (int i = 1; i <= id; i++) {
            summation += i;
        }
        return new Long(summation);
    }
}

In our example, all we had to do was implement the call() method, which returns the Long value that will eventually be collected via the get() method of the Future interface. If your Callable task has thrown an exception, then Future.get() will raise an exception too, and the original exception can be accessed by using Exception.getCause(). Monitoring the state of a Future Task In the above example, we are checking the status of the Future Task using the isDone() method. If you need more accurate control over the Future Task lifecycle, you can implement a javax.enterprise.concurrent.ManagedTaskListener instance in order to receive lifecycle event notifications. 
Here’s our enhanced Task, which implements the taskSubmitted, taskStarting, taskDone and taskAborted methods:

public class CallableListenerTask implements Callable<Long>, ManagedTaskListener {

    private int id;

    public CallableListenerTask(int id) {
        this.id = id;
    }

    public Long call() {
        long summation = 0;
        for (int i = 1; i <= id; i++) {
            summation += i;
        }
        return new Long(summation);
    }

    public void taskSubmitted(Future<?> f, ManagedExecutorService es, Object obj) {
        System.out.println("Task Submitted! " + f);
    }

    public void taskDone(Future<?> f, ManagedExecutorService es, Object obj, Throwable exc) {
        System.out.println("Task DONE! " + f);
    }

    public void taskStarting(Future<?> f, ManagedExecutorService es, Object obj) {
        System.out.println("Task Starting! " + f);
    }

    public void taskAborted(Future<?> f, ManagedExecutorService es, Object obj, Throwable exc) {
        System.out.println("Task Aborted! " + f);
    }
}

The lifecycle notifications are invoked in this order:taskSubmitted: on Task submission to the Executor taskStarting: before the actual Task startup taskDone: triggered on Task completion taskAborted: triggered when the user invokes futureResult.cancel()Using Transactions in asynchronous Tasks Within a distributed Java EE environment, it is challenging to guarantee proper transaction execution for concurrent task executions as well. The Java EE Concurrency API relies on the Java Transaction API (JTA) to support transactions on top of its components via the javax.transaction.UserTransaction, which is used to explicitly demarcate transaction boundaries. 
The following code shows how a Callable Task retrieves a UserTransaction from the JNDI tree and then starts and commits a transaction involving an external component (an EJB):

public class TxCallableTask implements Callable<Long> {

    long id;

    public TxCallableTask(long i) {
        this.id = i;
    }

    public Long call() {
        long value = 0;
        UserTransaction tx = lookupUserTransaction();
        SimpleEJB ejb = lookupEJB();
        try {
            tx.begin();
            value = ejb.calculate(id); // Do Transactions here
            tx.commit();
        } catch (Exception e) {
            e.printStackTrace();
            try { tx.rollback(); } catch (Exception e1) { e1.printStackTrace(); }
        }
        return value;
    }

    // Lookup EJB and UserTransaction here ..
}

The major limitation of this approach is that, although context objects can begin, commit, or roll back transactions, these objects cannot enlist in parent component transactions. Scheduling tasks with the ManagedScheduledExecutorService The ManagedScheduledExecutorService extends the Java SE ScheduledExecutorService to provide methods for submitting delayed or periodic tasks for execution in a Java EE environment. As with the other managed objects, you can obtain an instance of the ExecutorService via JNDI lookup:

@Resource(name = "DefaultManagedScheduledExecutorService")
ManagedScheduledExecutorService scheduledExecutor;

Once you have a reference to the ExecutorService, you can invoke the schedule method on it to submit delayed or periodic tasks. ScheduledExecutors, just like ManagedExecutors, can be bound either to a Runnable interface or to a Callable interface. The next section shows both approaches. Submitting a simple ScheduledTask In its simplest form, submitting a Scheduled Task requires setting up a schedule expression and passing it to the ManagedScheduledExecutorService. 
In this example, we are creating a delayed task which will run just once, 10 seconds after the schedule() method is invoked:

@WebServlet("/ScheduledExecutor")
public class ScheduledExecutor extends HttpServlet {

    @Resource(name = "DefaultManagedScheduledExecutorService")
    ManagedScheduledExecutorService scheduledExecutor;

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        PrintWriter writer = response.getWriter();
        ScheduledFuture<?> futureResult =
                scheduledExecutor.schedule(new SimpleTask(), 10, TimeUnit.SECONDS);
        writer.write("Waiting 10 seconds before firing the task");
    }
}

If you need to schedule your task repeatedly, you can use the scheduleAtFixedRate method, which takes as input the initial delay before firing the Task, the period between repeated executions, and the TimeUnit. See the following example, which schedules a Task every 10 seconds, after an initial delay of 1 second:

ScheduledFuture<?> futureResult = scheduledExecutor.scheduleAtFixedRate(
        new SimpleTask(), 1, 10, TimeUnit.SECONDS);

Capturing the result of a Scheduled execution If you need to capture a return value from the task that is scheduled to be executed, you can use the ScheduledFuture interface which is returned by the schedule method. Here’s an example which captures the result from the summation CallableTask we coded earlier:

ScheduledFuture<Long> futureResult =
        scheduledExecutor.schedule(new CallableTask(5), 5, TimeUnit.SECONDS);
while (!futureResult.isDone()) {
    try {
        Thread.sleep(100); // Wait
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}
try {
    writer.write("Callable Task returned " + futureResult.get());
} catch (Exception e) {
    e.printStackTrace();
}

Creating Managed Threads using the ManagedThreadFactory The javax.enterprise.concurrent.ManagedThreadFactory is the equivalent of the J2SE ThreadFactory, which can be used to create your own Threads. 
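Before moving on, note that the fixed-rate scheduling shown above behaves exactly like the plain J2SE ScheduledExecutorService, so it can be exercised outside the container. In this sketch (the class `ScheduleDemo` and helper `runThreeTicks` are my own names, and a J2SE scheduler stands in for the managed one) a task fires at a fixed rate until it has run three times, and is then cancelled:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

class ScheduleDemo {

    // Schedule a task at a fixed rate, wait for three executions, then cancel it.
    static long runThreeTicks() {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch latch = new CountDownLatch(3);
        // Initial delay 0, then fire every 50 ms; each run counts the latch down.
        ScheduledFuture<?> future = scheduler.scheduleAtFixedRate(
                latch::countDown, 0, 50, TimeUnit.MILLISECONDS);
        try {
            latch.await(); // returns once the task has fired three times
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        future.cancel(false); // stop further periodic executions
        scheduler.shutdown();
        return latch.getCount(); // 0 means all three executions were observed
    }
}
```

The returned count reaches 0 once the three periodic executions have happened; the same cancel-via-ScheduledFuture idiom applies to the managed variant.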
In order to use the ManagedThreadFactory, you need to inject it from JNDI as usual:

@Resource(name = "DefaultManagedThreadFactory")
ManagedThreadFactory factory;

The main advantage of creating your own managed threads from a factory (compared with those created by the ManagedExecutorService) is that you can set some typical thread properties (such as name or priority) and that you can create a managed version of the J2SE ExecutorService. The following examples will show you how. Creating Managed Threads from a Factory In this example, we will create and start a new Thread using the DefaultManagedThreadFactory. As you can see from the code, once we have created an instance of a Thread class, we are able to set a meaningful name for it and associate it with a priority. We then associate the Thread with our SimpleTask, which logs some data on the console:

@WebServlet("/FactoryExecutorServlet")
public class FactoryExecutorServlet extends HttpServlet {

    @Resource(name = "DefaultManagedThreadFactory")
    ManagedThreadFactory factory;

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        PrintWriter writer = response.getWriter();
        Thread thread = factory.newThread(new SimpleTask());
        thread.setName("My Managed Thread");
        thread.setPriority(Thread.MAX_PRIORITY);
        thread.start();
        writer.write("Thread started. Check logs");
    }
}

Now check your server logs: no doubt it is easier to detect the output of your self-created Threads: 14:44:31,838 INFO [stdout] (My Managed Thread) Simple Task started Setting the thread name is especially useful when analyzing a thread dump where the thread name is the only clue to tracing a thread's execution path. Using a Managed Executor Service The java.util.concurrent.ExecutorService interface is a standard J2SE mechanism which has largely replaced the use of direct Threads to perform asynchronous executions. 
One of the main advantages of the ExecutorService over the standard Thread mechanism is that you can define a pool of instances to execute your jobs and that you have a safer way to interrupt your jobs. Using the ExecutorService in your enterprise applications is straightforward: all you have to do is pass an instance of your ManagedThreadFactory to a constructor of your ExecutorService. In the following example, we are using a Singleton EJB to provide the ExecutorService as a service in its method getThreadPoolExecutor:

@Singleton
public class PoolExecutorEJB {

    private ExecutorService threadPoolExecutor = null;

    int corePoolSize = 5;
    int maxPoolSize = 10;
    long keepAliveTime = 5000;

    @Resource(name = "DefaultManagedThreadFactory")
    ManagedThreadFactory factory;

    public ExecutorService getThreadPoolExecutor() {
        return threadPoolExecutor;
    }

    @PostConstruct
    public void init() {
        threadPoolExecutor = new ThreadPoolExecutor(corePoolSize, maxPoolSize,
                keepAliveTime, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(10), factory);
    }

    @PreDestroy
    public void releaseResources() {
        threadPoolExecutor.shutdown();
    }
}

The ThreadPoolExecutor takes two core parameters in its constructor: the corePoolSize and the maximumPoolSize. When a new task is submitted and fewer than corePoolSize threads are running, a new thread is created to handle the request, even if other worker threads are idle. If there are more than corePoolSize but fewer than maximumPoolSize threads running, a new thread will be created only if the queue is full. 
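The pool shape used by the EJB above can be checked outside the container with a plain ThreadPoolExecutor. This is only a sketch: the class `PoolDemo` is my own name, and the default thread factory stands in for the container-managed one, which requires a running server:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class PoolDemo {

    // Same shape as the EJB's pool: core 5, max 10, 5-second keep-alive,
    // and a bounded work queue of 10. Threads beyond the core size are
    // only created once this queue fills up.
    static ThreadPoolExecutor buildPool() {
        return new ThreadPoolExecutor(5, 10, 5, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(10));
    }
}
```

Inspecting the built pool with getCorePoolSize() and getMaximumPoolSize() confirms the configured bounds; swapping in a ManagedThreadFactory, as the EJB does, changes only who creates the worker threads, not the sizing behavior.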
The ExecutorService is then used to start a new asynchronous task, as in the following example, where an anonymous implementation of Runnable is provided in a Servlet:

@WebServlet("/FactoryExecutorServiceServlet")
public class FactoryExecutorServiceServlet extends HttpServlet {

    @EJB
    PoolExecutorEJB ejb;

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        final PrintWriter writer = response.getWriter();
        writer.write("Invoking ExecutorService. Check Logs.");
        ExecutorService executorService = ejb.getThreadPoolExecutor();
        executorService.execute(new Runnable() {
            public void run() {
                System.out.println("Message from your Executor!");
            }
        });
    }
}

As soon as the PoolExecutorEJB is terminated, the ExecutorService is finalized as well in the @PreDestroy method of the Singleton bean, which invokes the shutdown() method of the ThreadPoolExecutor. The ExecutorService will not shut down immediately; it will no longer accept new tasks, and once all threads have finished their current tasks, the ExecutorService shuts down. Using Dynamic Contextual objects A dynamic proxy is a useful Java technique that can be used to create dynamic implementations of interfaces using the java.lang.reflect.Proxy API. You can use dynamic proxies for a variety of purposes, such as database connection and transaction management, dynamic mock objects for unit testing, and other AOP-like method-intercepting purposes. In a Java EE environment, you can use a special type of dynamic proxy called a dynamic contextual proxy. The most interesting feature of dynamic contextual objects is that the JNDI naming context, classloader, and security context are propagated to the proxied objects. This can be useful when you are bringing J2SE implementations into your enterprise applications and want to run them within the context of the container. The following snippet shows how to inject contextual objects into the container. 
Since contextual objects also need an ExecutorService to which you can submit the task, a ThreadFactory is injected as well:

@Resource(name = "DefaultContextService")
ContextService cs;

@Resource(name = "DefaultManagedThreadFactory")
ManagedThreadFactory factory;

In the following section, we will show how to create dynamic contextual objects using a revised version of our Singleton EJB. Executing Contextual Tasks The following example shows how to trigger a contextual proxy for a Callable task. For this purpose, we will need both the ManagedThreadFactory and the ContextService. Our ContextExecutorEJB will initially create the ThreadPoolExecutor within its init method. Then, within the submitJob method, new contextual proxies for Callable tasks are created and submitted to the ThreadPoolExecutor. Here is the code for our ContextExecutorEJB:

@Singleton
public class ContextExecutorEJB {

    private ExecutorService threadPoolExecutor = null;

    @Resource(name = "DefaultManagedThreadFactory")
    ManagedThreadFactory factory;

    @Resource(name = "DefaultContextService")
    ContextService cs;

    public ExecutorService getThreadPoolExecutor() {
        return threadPoolExecutor;
    }

    @PostConstruct
    public void init() {
        threadPoolExecutor = new ThreadPoolExecutor(5, 10, 5, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(10), factory);
    }

    public Future<Long> submitJob(Callable<Long> task) {
        Callable<Long> proxy = cs.createContextualProxy(task, Callable.class);
        return getThreadPoolExecutor().submit(proxy);
    }
}

The CallableTask class is a bit more complex than our first example, as it logs information about the javax.security.auth.Subject contained in the calling Thread:

public class CallableTask implements Callable<Long> {

    private int id;

    public CallableTask(int id) {
        this.id = id;
    }

    public Long call() {
        long summation = 0;
        // Do calculation
        Subject subject = Subject.getSubject(AccessController.getContext());
        logInfo(subject, summation); // Log Traces Subject identity
        return new Long(summation);
    }

    private void logInfo(Subject subject, long summation) { . . }
}

Here is a simple way to submit new contextual tasks to our Singleton EJB:

@EJB
ContextExecutorEJB ejb;

protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    CallableTask task = new CallableTask(5);
    ejb.submitJob(task);
}

Building your examples In order to use the Concurrency Utilities for Java EE API, you need the following Maven dependency in your application:

<dependency>
    <groupId>org.jboss.spec.javax.enterprise.concurrent</groupId>
    <artifactId>jboss-concurrency-api_1.0_spec</artifactId>
    <version>1.0.0.Final</version>
</dependency>

This excerpt has been taken from the “Practical Java EE 7 development on WildFly” book, a hands-on practical guide covering all areas of Java EE 7 development on the newest WildFly application server. It covers everything from the foundation components (EJB, Servlets, CDI, JPA) to the new technology stack defined in Java Enterprise Edition 7, including the new Batch API, JSON-P API, the Concurrency API, Web Sockets, the JMS 2.0 API, and the core Web services stack (JAX-WS, JAX-RS). The testing area with the Arquillian framework and the Security API complete the list of topics discussed in the book. ...

Explicit Implicit Conversion

One of the most common patterns we use day to day is converting objects from one type to another. The reasons for this are varied: one is to distinguish between external and internal implementations, another is to enrich incoming data with additional information, or to filter out some aspects of the data before sending it over to the user. There are several approaches to achieving this conversion between objects: The Naïve Approach Add your converter code to the object explicitly:

case class ClassA(s: String)
case class ClassB(s: String) {
  def toClassA = ClassA(s)
}

While this is the most straightforward and obvious implementation, it ties ClassA and ClassB together, which is exactly what we want to avoid. The fat belly syndrome When we want to convert between objects, the best way is to refactor the logic out of the class, allowing us to test it separately but still use it in several classes. A typical implementation would look like this:

class SomeClass(c1: SomeConverter, c2: AnotherConverter, ...., cn: YetAnotherConverter) {
  ...........
}

The converter itself can be implemented as a plain class, for example:

enum CustomToStringConverter {

  INSTANCE;

  public ClassB convert(ClassA source) {
    return new ClassB(source.str);
  }
}

This method forces us to include all the needed converters for each class that requires them. Some developers might be tempted to mock those converters, which will tightly couple their tests to concrete converters, for example:

// set mock expectations
converter1.convert(c1) returns c2
dao.listObj(c2) returns List(c3)
converter2.convert(c3) returns o4

someClass.listObj(o0) mustEqual o4

What I don’t like about these tests is that all of the code flows through the conversion logic, and in the end you are comparing the result returned by some of the mocks. 
If, for example, one of the converter mock expectations doesn't exactly match the input object, a programmer may be tempted not to match the input object at all and use the any matcher instead, rendering the test moot.

The Lizard's Tail

Another option Scala gives us is the ability to inherit multiple traits, supplying the converter code in traits and allowing us to mix and match them. A typical implementation would look like this:

class SomeClass extends AnotherClass with SomeConverter with AnotherConverter ... with YetAnotherConverter {
  ...
}

This approach allows us to plug the converters into several implementations while removing the need (or the urge) to mock conversion logic in our tests, but it raises a design question: is the ability to convert one object to another related to the purpose of the class? It also encourages developers to pile more and more traits onto a class and never remove old, unused traits from it.

The Ostrich way

Scala also allows us to hide the problem entirely with implicit conversions. An implementation would now look like this:

implicit def convertO0ToO1(o0: SomeObject): AnotherObj = ...
implicit def convertO1ToO2(o1: AnotherObject): YetAnotherObj = ...

def listObj(o0: SomeObj): YetAnotherObj = dao.doSomethingWith(entity = o0)

What this code actually does is convert o0 to o1, because that is what the dao call needs; when the call returns o1, it is implicitly converted to o2 to match the declared return type. The code above hides a lot from us and leaves us puzzled if the tooling doesn't show us those conversions. A good use case for implicit conversions is when we want to convert between objects that have the same functionality and purpose.
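As a minimal runnable sketch of such a hidden conversion (the class names below are invented placeholders for the sketch, not taken from a real codebase):

```scala
import scala.language.implicitConversions

object OstrichSketch extends App {
  case class SomeObject(s: String)
  case class AnotherObject(s: String)

  // The compiler inserts this conversion silently wherever an
  // AnotherObject is expected but a SomeObject is supplied.
  implicit def someToAnother(o0: SomeObject): AnotherObject =
    AnotherObject(o0.s)

  def needsAnother(o1: AnotherObject): String = o1.s

  // A SomeObject is passed where an AnotherObject is expected;
  // the conversion happens implicitly, invisible at the call site.
  println(needsAnother(SomeObject("hidden"))) // prints hidden
}
```

Nothing at the call site reveals that a conversion took place, which is precisely the readability problem described above.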
A good example is converting between Scala lists and Java lists: both are basically the same, and we do not want to litter our code in all of the places where we convert between the two.

To summarize the issues we encountered:

- A long list of unused junk traits, or junk classes in the constructor.
- Traits that don't represent the true purpose of the class.
- Code that hides its true flow.

To solve all of these, Scala offers a good pattern based on implicit classes. To write conversion code we can do something like this:

object ObjectsConverters {

  implicit class ConvertO0ToO1(o0: SomeObject) {
    def asO1: AnotherObject = .....
  }

  implicit class ConvertO1ToO2(o1: AnotherObject) {
    def asO2With(id: String): YetAnotherObject = .....
  }
}

Now our code will look like this:

import ObjectsConverters._

def listObj(o0: SomeObj): YetAnotherObj = dao.listObj(o0.asO1).asO2With(id = "someId")

This approach allows us to be implicit and explicit at the same time: from looking at the code above you can understand that o0 is converted to o1, and the result is converted again to o2. If a conversion is not being used, the IDE will optimize the import out of our code. Our tests won't prompt us to mock each converter, resulting in specifications that explain the proper behavior of the code flow in our class. Note that the converter code is tested elsewhere. This approach also allows us to write more readable tests in other spots of the code. For example, in our end-to-end tests we reduce the number of objects we define:

"some API test" in {
  callSomeApi(someId, o0) mustEqual o0.asO2With(id = "someId")
}

This code is now more readable and makes more sense: we are passing some inputs, and the result matches the same objects that we used in our API call.

Reference: Explicit Implicit Conversion from our JCG partner Noam Almog at the Wix IO blog....
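Putting the recommended pattern together, here is a self-contained, runnable sketch; the case-class bodies are invented placeholders, since the article elides them:

```scala
object ObjectsConvertersSketch extends App {
  // Illustrative stand-ins: the article does not show these bodies.
  case class SomeObject(s: String)
  case class AnotherObject(s: String)
  case class YetAnotherObject(s: String, id: String)

  object ObjectsConverters {
    // Extension-method style: implicit in wiring, explicit at the
    // call site (.asO1, .asO2With), so the flow stays visible.
    implicit class ConvertO0ToO1(o0: SomeObject) {
      def asO1: AnotherObject = AnotherObject(o0.s)
    }
    implicit class ConvertO1ToO2(o1: AnotherObject) {
      def asO2With(id: String): YetAnotherObject = YetAnotherObject(o1.s, id)
    }
  }

  import ObjectsConverters._
  val result = SomeObject("payload").asO1.asO2With(id = "someId")
  println(result) // prints YetAnotherObject(payload,someId)
}
```

The call site reads explicitly even though the wiring is implicit, which is exactly the trade-off the article argues for: the conversion chain is visible, yet no converter objects clutter the constructor or the tests.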
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.