Development Horror Story – Mail Bomb

Based on my JavaOne session idea about things that went terribly wrong in our development careers, I thought about writing a few of these stories. I'll start with one of my favourites: crashing a customer's mail server by generating more than 23 million emails! Yes, that's right, 23 million!

History

A few years ago, I joined a project that had been in development for several months but had no production release yet. In fact, the project was scheduled to replace an existing application in the upcoming weeks. My first task on the project was to figure out what was needed to deploy the application in a production environment and replace the old application. The application had a considerable number of users (around 50 k), but not all of them were active. The new application had a feature to exclude users who hadn't logged into the application for the last few months. This was implemented as a timer (supposedly executed daily), and an email notification was sent to each excluded user warning him that he had been removed from the application.

The Problem

The release was installed on a Friday (yes, Friday!), and everyone went off for a rest. On Monday morning, all hell broke loose! The customer's mail server was down, and nobody had any idea why. The first reports indicated that the mail server was out of disk space: it had around 2 million emails pending delivery and a lot more incoming. What the hell happened?

The Cause

Even with the server down, support was able to show us a copy of an email stuck on the server. It was consistent with the email sent when a user was excluded. That didn't make any sense: we had counted the users to be excluded and there were around 28 k of them, so only 28 k emails should have been sent. Even if all users were excluded, the number could not be higher than 50 k (the total number of users).

Invalid Email

Looking into the code, we found a bug that caused a user with an invalid email address not to be excluded.
As a consequence, these users were caught every time the timer executed. Of the 28 k users to be excluded, around 26 k had invalid emails. From Friday to Monday that makes 3 executions * 26 k users, so 78 k emails. OK, so now we have an email increase, but nowhere near the reported numbers.

Timer Bug

The timer had a bug too: it was scheduled to execute not daily but every 8 hours. Let's adjust the numbers: 3 days * 3 executions a day * 26 k users brings the total to 234 k emails. A considerable increase, but still far from a big number.

Additional Node

The operations team had installed the application on a second node, and the timer executed on both, doubling the count. Let's update: 2 * 234 k brings the total to 468 k emails.

No-reply Address

Since the emails were automated, a no-reply address is usually set up as the sender. The problem was that the domain of our no-reply address was invalid. Combined with the users' invalid emails, this sent the mail server into a loop: each invalid user email generated an error email to the no-reply address, which was invalid as well, so the error bounced back to the server again. The loop only ends when the maximum hop count is exceeded; in this case it was 50. Now everything starts to make sense! Let's update the numbers: 26 k users * 3 days * 3 executions * 2 servers * 50 hops, for a grand total of 23.4 million emails!

Aftermath

The customer lost all their email from Friday to Monday, but it was possible to recover the mail server. The problems were fixed and it never happened again. I remember those days as very stressful, but today all of us involved laugh about it! Remember: always check the no-reply address!

Reference: Development Horror Story – Mail Bomb from our JCG partner Roberto Cortez at the Roberto Cortez Java Blog blog.
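The final multiplication is easy to sanity-check in a few lines of code (all numbers taken from the story above):

```java
// Back-of-the-envelope check of the mail-bomb arithmetic from the story.
public class MailBombMath {
    static long totalEmails() {
        long invalidUsers = 26_000;  // users with invalid emails, re-caught on every run
        long days = 3;               // Friday evening to Monday morning
        long runsPerDay = 3;         // timer bug: every 8 hours instead of daily
        long nodes = 2;              // timer ran on both application nodes
        long hops = 50;              // bounce loop ends at the maximum hop count
        return invalidUsers * days * runsPerDay * nodes * hops;
    }

    public static void main(String[] args) {
        System.out.println(totalEmails()); // 23400000
    }
}
```
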

CallSerially The EDT & InvokeAndBlock (Part 2)

The last time we talked about the EDT we covered some of the basic ideas, such as callSerially etc. We left out two major concepts that are somewhat more advanced.

Invoke And Block

When we write typical code in Java we like that code to run in sequence, as such:

doOperationA();
doOperationB();
doOperationC();

This works well normally, but on the EDT it might be a problem: if one of the operations is slow it might stall the whole EDT (painting, event processing etc.). Normally we can just move operations into a separate thread, e.g.:

doOperationA();
new Thread() {
    public void run() {
        doOperationB();
    }
}.start();
doOperationC();

Unfortunately, this means that operation C will happen in parallel to operation B, which might be a problem. Instead of abstract operation names, let's use a more "real world" example:

updateUIToLoadingStatus();
readAndParseFile();
updateUIWithContentOfFile();

Notice that the first and last operations must be conducted on the EDT, but the middle operation might be really slow! Since updateUIWithContentOfFile needs readAndParseFile to run before it, the new thread alone won't be enough. Our automatic approach is to do something like this:

updateUIToLoadingStatus();
new Thread() {
    public void run() {
        readAndParseFile();
        updateUIWithContentOfFile();
    }
}.start();

But updateUIWithContentOfFile should be executed on the EDT and not on a random thread. So the right way to do this would be something like this:

updateUIToLoadingStatus();
new Thread() {
    public void run() {
        readAndParseFile();
        Display.getInstance().callSerially(new Runnable() {
            public void run() {
                updateUIWithContentOfFile();
            }
        });
    }
}.start();

This is perfectly legal and works reasonably well; however, it gets complicated as we add more and more features that need to be chained serially. After all, these are just 3 methods!
Invoke and block solves this in a unique way; you can get almost the exact same behavior by using this:

updateUIToLoadingStatus();
Display.getInstance().invokeAndBlock(new Runnable() {
    public void run() {
        readAndParseFile();
    }
});
updateUIWithContentOfFile();

invokeAndBlock effectively blocks the current EDT in a legal way. It spawns a separate thread that runs the run() method, and when that run() method completes it goes back to the EDT. All events and EDT behavior still work while invokeAndBlock is running, because invokeAndBlock() keeps calling the main thread loop internally. Notice that this comes at a slight performance penalty, and that nesting invokeAndBlock calls (or overusing them) isn't recommended. However, they are very convenient when working with multiple threads/UI.

Why Would I Invoke callSerially When I'm Already On The EDT?

We discussed callSerially in the previous post, but one of the misunderstood topics is why we would ever want to invoke this method when we are still on the EDT. The original version of LWUIT used to throw an IllegalArgumentException if callSerially was invoked on the EDT, since it seemed to make no sense. However, it does make some sense, and we can explain that with an example. Say we have a button that has quite a bit of functionality tied to its events, e.g.:

- A user added an action listener to show a Dialog.
- A framework the user installed added some logging to the button.
- The button repaints a release animation as it's being released.

However, this might cause a problem if the first event that we handle (the dialog) causes an issue for the following events. E.g. a dialog will block the EDT (using invokeAndBlock); events will keep happening, but since the event we are in "already happened", the button repaint and the framework logging won't occur. This might also happen if we show a form which triggers logic that relies on the current form still being present.
One of the solutions to this problem is to just wrap the action listener's body with a callSerially. In this case the callSerially will postpone the event to the next cycle (loop) of the EDT and let the other events in the chain complete. Notice that you shouldn't do this normally, since it adds overhead and complicates application flow; however, when you run into issues in event processing I suggest trying this to see if it's the cause.

Reference: CallSerially The EDT & InvokeAndBlock (Part 2) from our JCG partner Shai Almog at the Codename One blog.
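The way invokeAndBlock can legally "block" the EDT (by pumping the event queue from within the blocked call) can be illustrated with a toy event loop in plain Java. This is not the Codename One implementation, just a sketch of the idea:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// Toy event loop: callSerially queues work, invokeAndBlock runs a slow job on
// a helper thread while the "EDT" keeps dispatching its own queued events.
public class ToyEventLoop {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

    public void callSerially(Runnable r) {
        queue.add(r);
    }

    // Dispatch one pending event, if any (normally the EDT main loop does this).
    private void pumpOne() {
        try {
            Runnable r = queue.poll(10, TimeUnit.MILLISECONDS);
            if (r != null) r.run();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // Block the caller "legally": spawn a worker for the slow job and keep
    // dispatching queued events until the worker finishes.
    public void invokeAndBlock(Runnable slowJob) {
        AtomicBoolean done = new AtomicBoolean(false);
        Thread worker = new Thread(() -> { slowJob.run(); done.set(true); });
        worker.start();
        while (!done.get()) pumpOne();   // events still get processed meanwhile
        try {
            worker.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        ToyEventLoop edt = new ToyEventLoop();
        StringBuilder log = new StringBuilder();
        edt.callSerially(() -> log.append("event;"));       // queued before the block
        edt.invokeAndBlock(() -> {                          // readAndParseFile() stand-in
            try { Thread.sleep(50); } catch (InterruptedException ignored) { }
        });
        log.append("afterBlock;");                          // runs after the slow job
        System.out.println(log);                            // event;afterBlock;
    }
}
```

The queued event runs while the slow job is still in flight, which is exactly why code that "already happened" on the EDT can be skipped when a dialog blocks mid-event, and why postponing work with callSerially to the next loop cycle fixes it.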

Use reactive streams API to combine akka-streams with rxJava

Just a quick article this time, since I'm still experimenting with this stuff. There is a lot of talk around reactive programming. In Java 8 we've got the Stream API, we've got rxJava, we've got Ratpack, and Akka has akka-streams. The main issue with these implementations is that they aren't compatible: you can't connect the subscriber of one implementation to the publisher of another. Luckily an initiative has been started to provide a way for these different implementations to work together:

"It is the intention of this specification to allow the creation of many conforming implementations, which by virtue of abiding by the rules will be able to interoperate smoothly, preserving the aforementioned benefits and characteristics across the whole processing graph of a stream application."

From: http://www.reactive-streams.org/

How does this work?

Now how do we do this? Let's look at a quick example based on the akka-stream provided examples (from here), in the following listing:

package sample.stream

import akka.actor.ActorSystem
import akka.stream.FlowMaterializer
import akka.stream.scaladsl.{SubscriberSink, PublisherSource, Source}
import com.google.common.collect.{DiscreteDomain, ContiguousSet}
import rx.RxReactiveStreams
import rx.Observable
import scala.collection.JavaConverters._

object BasicTransformation {

  def main(args: Array[String]): Unit = {

    // define an implicit actor system and import the implicit dispatcher
    implicit val system = ActorSystem("Sys")
    import system.dispatcher

    // the flow materializer determines how the stream is realized,
    // this time as a flow between actors
    implicit val materializer = FlowMaterializer()

    // input text for the stream
    val text = """|Lorem Ipsum is simply dummy text of the printing and typesetting industry.
                  |Lorem Ipsum has been the industry's standard dummy text ever since the 1500s,
                  |when an unknown printer took a galley of type and scrambled it to make a type
                  |specimen book.""".stripMargin

    // create an observable from a simple list (this is in rxJava style)
    val first = Observable.from(text.split("\\s").toList.asJava)
    // convert the rxJava observable to a publisher
    val publisher = RxReactiveStreams.toPublisher(first)
    // based on the publisher create an akka source
    val source = PublisherSource(publisher)

    // now use the akka style syntax to stream the data from the source
    // to the sink (in this case println)
    source.
      map(_.toUpperCase).      // executed as actors
      filter(_.length > 3).
      foreach { el =>          // the sink/consumer
        println(el)
      }.
      onComplete(_ => system.shutdown()) // lifecycle event
  }
}

The code comments in this example pretty much explain what is happening. We create an rxJava-based Observable, convert this Observable to a "reactive streams" publisher, and use this publisher to create an akka-streams source. For the rest of the code we can use the akka-stream style flow API to model the stream. In this case we just do some filtering and print out the result.

Reference: Use reactive streams API to combine akka-streams with rxJava from our JCG partner Jos Dirksen at the Smart Java blog.
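As a side note, the Reactive Streams interfaces this initiative produced were later standardized in the JDK as java.util.concurrent.Flow (Java 9+). A minimal sketch of the same map/filter idea using only JDK classes, with no rxJava or Akka involved:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowDemo {
    // Collects upper-cased words longer than 3 characters, mirroring the
    // map(_.toUpperCase).filter(_.length > 3) pipeline from the example above.
    static List<String> run(List<String> words) {
        List<String> out = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        try (SubmissionPublisher<String> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<String>() {
                private Flow.Subscription subscription;

                public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1);                       // pull one element at a time
                }

                public void onNext(String item) {
                    String up = item.toUpperCase();     // map
                    if (up.length() > 3) out.add(up);   // filter
                    subscription.request(1);
                }

                public void onError(Throwable t) { done.countDown(); }

                public void onComplete() { done.countDown(); }
            });
            words.forEach(publisher::submit);
        } // close() signals onComplete to the subscriber
        try {
            done.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(run(List.of("Lorem", "is", "dummy", "text")));
        // [LOREM, DUMMY, TEXT]
    }
}
```

SubmissionPublisher delivers items on a background pool, so the latch is what makes reading the collected list safe after completion.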

Spring boot war packaging

Spring Boot recommends creating an executable jar with an embedded container (Tomcat or Jetty) at build time and running this executable jar as a standalone process at runtime. It is common, however, to deploy applications to an external container instead, and Spring Boot provides packaging the application as a war specifically for this kind of need. My focus here is not to repeat the already detailed Spring Boot instructions on creating the war artifact, but on testing the created file to see whether it will reliably work on a standalone container. I recently had an issue when creating a war from a Spring Boot project and deploying it on Jetty, and this post is essentially a learning from that experience.

The best way to test whether the war will work reliably is to simply use the jetty-maven and/or tomcat-maven plugins, with the following entries in the pom.xml file:

<plugin>
    <groupId>org.apache.tomcat.maven</groupId>
    <artifactId>tomcat7-maven-plugin</artifactId>
    <version>2.2</version>
</plugin>
<plugin>
    <groupId>org.eclipse.jetty</groupId>
    <artifactId>jetty-maven-plugin</artifactId>
    <version>9.2.3.v20140905</version>
</plugin>

With the plugins in place, start up the war with the tomcat plugin:

mvn tomcat7:run

and with the jetty plugin:

mvn jetty:run

If there are any issues with the way the war has been created, they should come out at start-up time with these containers. For example, if I were to leave in the embedded Tomcat dependencies:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-tomcat</artifactId>
</dependency>

then when starting up the maven tomcat plugin, an error along these lines will show up:

java.lang.ClassCastException: org.springframework.web.SpringServletContainerInitializer cannot be cast to javax.servlet.ServletContainerInitializer

an indication of a servlet jar being packaged with the war file, fixed by specifying the scope as provided in the maven dependencies:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-tomcat</artifactId>
    <scope>provided</scope>
</dependency>

Why both the jetty and tomcat plugins? The reason is that I saw a difference in behavior, specifically with websocket support, with Jetty as the runtime but not with Tomcat. Consider the websocket dependencies, which are pulled in the following way:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-websocket</artifactId>
</dependency>

This gave me an error when started up using the Jetty runtime, and the fix again is to mark the underlying Tomcat dependencies as provided. Replace the above with the following:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-websocket</artifactId>
</dependency>
<dependency>
    <groupId>org.apache.tomcat.embed</groupId>
    <artifactId>tomcat-embed-websocket</artifactId>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-messaging</artifactId>
</dependency>

So to conclude, a quick way to verify whether the war file produced for a Spring Boot application will cleanly deploy to a container (at least Tomcat and Jetty) is to add the tomcat and jetty maven plugins and use these plugins to start the application up.
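As an extra sanity check beyond starting the containers, the war's WEB-INF/lib can be scanned for embedded-container jars. A small illustrative sketch (not part of Spring Boot; in practice the entry names would come from reading the war with java.util.zip.ZipFile):

```java
import java.util.List;
import java.util.stream.Collectors;

// Given the entry names of a war, flag embedded-container jars that should
// have been excluded by marking their dependencies as provided.
public class WarCheck {
    static List<String> embeddedContainerJars(List<String> entryNames) {
        return entryNames.stream()
                .filter(n -> n.startsWith("WEB-INF/lib/"))
                .filter(n -> n.contains("tomcat-embed") || n.contains("jetty-"))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> entries = List.of(
                "WEB-INF/lib/spring-web-4.1.jar",
                "WEB-INF/lib/tomcat-embed-core-8.0.jar",
                "WEB-INF/classes/app/Main.class");
        // If this prints anything, the <scope>provided</scope> fix is missing.
        System.out.println(embeddedContainerJars(entries));
        // [WEB-INF/lib/tomcat-embed-core-8.0.jar]
    }
}
```
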
Here is a sample project demonstrating this: https://github.com/bijukunjummen/spring-websocket-chat-sample.git

Reference: Spring boot war packaging from our JCG partner Biju Kunjummen at the all and sundry blog.

EasyCriteria has evolved to uaiCriteria. New name and more features

Hello, how are you? I am very happy to announce the release of uaiCriteria, the EasyCriteria evolution.

Was it really necessary to change the framework name? Yes, sadly it was. I found another framework with the same name, which is why I decided to change it (I do not want any kind of legal problems). The difference between the frameworks is that the other one works with the MetaModel, while uaiCriteria works with strings as parameters.

About the framework name change:

- Your code will work with this new version without a problem; the code is backward compatible.
- All EasyCriteria classes are annotated with @Deprecated and will be removed in the next version.
- The new classes have all the methods of the old version. If you want to switch to the new code, just replace the text EasyCriteria with UaiCriteria in your code.
- Again, I did not want to change the framework name, but I do not want legal problems.

The framework now has a mascot!

The new version has a lot of new stuff. Let us talk first about the structural changes:

- The site has changed; it is now http://uaicriteria.com
- The repository has changed; it is now on Git (requested by a lot of developers): https://github.com/uaihebert/uaicriteria
- The SONAR plug-in was added to the pom.xml to help with code coverage and static analysis.
- The old site will be deactivated, but all the old documentation has been migrated.
- The current API has some criteria limitations; using HAVING in the criteria is not possible. We will create a new interface/API for complex criteria. I am looking for a name for the new interface; could you suggest one?

Let us talk about the new features:

Welcome to Batoo

Batoo is a JPA provider, like EclipseLink or Hibernate. In this new version we got a good number of methods tested with Batoo. Notice that I said "a good number of methods" and not "most of the methods".
Unfortunately Batoo has several problems with JPQL and Criteria queries, and I could not cover most of the methods with it. The uaiCriteria framework supports almost all methods with EclipseLink, Hibernate and OpenJPA.

MultiSelect

It is possible to choose which attributes will be returned:

select p.name, p.age from Person p

Transforming the JPQL above into criteria:

final UaiCriteria<Person> uaiCriteria = UaiCriteriaFactory.createMultiSelectCriteria(entityManager, Person.class);
uaiCriteria.addMultiSelectAttribute("name")
           .addMultiSelectAttribute("age");
final List multiselectList = uaiCriteria.getMultiSelectResult();

Some considerations about the code above:

- Object will be returned if you select only one attribute.
- Object[] will be returned if you select more than one attribute.
- The JPA provider may return a Vector instead of Object[] (in my tests EclipseLink was returning a Vector).

SubQuery

Now it is possible to do a subquery like the one below:

select p from Person p where p.id in (select dog.person.id from Dog dog where dog.cute = true)

I will not talk about the several lines of native JPA criteria needed for the JPQL above, but with UaiCriteria it is very easy to do:

final UaiCriteria<Person> uaiCriteria = UaiCriteriaFactory.createQueryCriteria(Person.class);
final UaiCriteria<Dog> subQuery = uaiCriteria.subQuery("person.id", Dog.class); // dog.person.id
subQuery.andEquals("cute", true);
uaiCriteria.andAttributeIn("id", subQuery); // person.id

All you need to do is create the subquery informing its return, then call the method andAttributeIn of the root criteria.
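Since getMultiSelectResult() may hand back Object for a single attribute, Object[] for several, or even a Vector depending on the provider, a small normalizing helper keeps calling code tidy. This is an illustrative utility, not part of the uaiCriteria API:

```java
import java.util.List;
import java.util.Vector;

public class MultiSelectRows {
    // Normalize one multiselect result row to Object[] regardless of provider quirks.
    static Object[] toRow(Object raw) {
        if (raw instanceof Object[]) return (Object[]) raw;       // several attributes
        if (raw instanceof Vector)   return ((Vector<?>) raw).toArray(); // provider quirk
        return new Object[] { raw };                              // single attribute
    }

    public static void main(String[] args) {
        System.out.println(toRow("John").length);                          // 1
        System.out.println(toRow(new Object[] { "John", 30 }).length);     // 2
        System.out.println(toRow(new Vector<>(List.of("a", "b"))).length); // 2
    }
}
```
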
MapIsEmpty [NOT]

The isEmpty method can be used with maps:

uaiCriteria.andCollectionIsEmpty("ENTITY_MAP");

AttributeIn [NOT]

If you want to check whether a value is inside a list, as in the JPQL:

select p from Payment p where p.statusEnum in :enumList

you can create the JPQL above like this:

final UaiCriteria<Payment> uaiCriteria = UaiCriteriaFactory.createQueryCriteria(Payment.class);
uaiCriteria.andAttributeIn("statusEnum", Arrays.asList(StatusEnum.VALUE_01, StatusEnum.VALUE_02));

The attribute could be an enum, integer, String, etc.

MemberOf [NOT]

The query below:

select d from Departament d where :person member of d.employeeList

could be created like this:

final UaiCriteria<Departament> uaiCriteria = UaiCriteriaFactory.createQueryCriteria(Departament.class);
uaiCriteria.andIsMemberOf(person, "employeeList");

Count and CountRegularCriteria

Now it is possible to do a count with a MultiSelect criteria. The count method was renamed to countRegularCriteria(). It works like the older version; just the name was refactored to make things more distinct.

CountAttribute

Sometimes you need to count an attribute instead of an entity:

select count(p.id) from Person p

You can run the JPQL above like this:

final UaiCriteria<Person> uaiCriteria = UaiCriteriaFactory.createMultiSelectCriteria(Person.class);
uaiCriteria.countAttribute("id");
final List result = uaiCriteria.getMultiSelectResult();

GroupBy and Aggregate Functions

Now it is possible to do a group by with aggregate functions: sum, diff, divide, modulo, etc.
select sum(p.value), p.status from Payment p group by p.status

could be executed like this:

final UaiCriteria<Payment> uaiCriteria = UaiCriteriaFactory.createMultiSelectCriteria(Payment.class);
uaiCriteria.sum("id").groupBy("status");
final List result = uaiCriteria.getMultiSelectResult();

New Maven Import

If you want to use the new version, just add the XML below to your pom.xml:

<dependency>
    <groupId>uaihebert.com</groupId>
    <artifactId>uaiCriteria</artifactId>
    <version>4.0.0</version>
</dependency>

I hope you liked the news. Do not forget to visit the new site: http://uaicriteria.com

If you have any doubts, questions or suggestions, just post them. See you soon.

Reference: EasyCriteria has evolved to uaiCriteria. New name and more features from our JCG partner Hebert Coelho at the uaiHebert blog.

How to build and clear a reference data cache with singleton EJBs, Ehcache and MBeans

In this post I will present how to build a simple reference data cache in Java EE, using singleton EJBs and Ehcache. The cache resets itself after a given period of time, and can be cleared "manually" by calling a REST endpoint or an MBean method. This post builds on a previous one, How to build and clear a reference data cache with singleton EJBs and MBeans; the only difference is that instead of storing the data in a ConcurrentHashMap<String, Object> I will be using an Ehcache cache, which is able to renew itself by Ehcache means.

1. Cache

This was supposed to be a read-only cache with the possibility to flush it from the exterior. I wanted the cache to be a sort of wrapper, AOP style, on the service providing the actual reference data for the application.

1.1. Interface

Simple interface for the reference data cache:

@Local
public interface ReferenceDataCache {

    /**
     * Returns all reference data required in the application
     */
    ReferenceData getReferenceData();

    /**
     * evict/flush all data from cache
     */
    void evictAll();
}

The caching functionality defines two simple methods:

- getReferenceData() – caches the reference data gathered behind the scenes from all the different sources
- evictAll() – called to completely clear the cache

1.2.
Implementation

Simple reference data cache implementation with Ehcache:

@ConcurrencyManagement(ConcurrencyManagementType.CONTAINER)
@Singleton
public class ReferenceDataCacheBean implements ReferenceDataCache {

    private static final String ALL_REFERENCE_DATA_KEY = "ALL_REFERENCE_DATA";
    private static final int CACHE_MINUTES_TO_LIVE = 100;

    private CacheManager cacheManager;
    private Cache refDataEHCache = null;

    @EJB
    ReferenceDataLogic referenceDataLogic;

    @PostConstruct
    public void initialize() {
        cacheManager = CacheManager.getInstance();
        CacheConfiguration cacheConfiguration = new CacheConfiguration("referenceDataCache", 1000);
        cacheConfiguration.setTimeToLiveSeconds(CACHE_MINUTES_TO_LIVE * 60);
        refDataEHCache = new Cache(cacheConfiguration);
        cacheManager.addCache(refDataEHCache);
    }

    @Override
    @Lock(LockType.READ)
    public ReferenceData getReferenceData() {
        Element element = refDataEHCache.get(ALL_REFERENCE_DATA_KEY);
        if (element != null) {
            return (ReferenceData) element.getObjectValue();
        } else {
            ReferenceData referenceData = referenceDataLogic.getReferenceData();
            refDataEHCache.putIfAbsent(new Element(ALL_REFERENCE_DATA_KEY, referenceData));
            return referenceData;
        }
    }

    @Override
    public void evictAll() {
        cacheManager.clearAll();
    }
    ...........
}

Note: @Singleton is probably the most important line of code in this class. This annotation specifies that there will be exactly one singleton instance of this type of bean in the application. This bean can be invoked concurrently by multiple threads.

Let's now break the code into its different parts:

1.2.1. Cache initialization

The @PostConstruct annotation is used on a method that needs to be executed after dependency injection is done, to perform any initialization; in our case, to create and initialize the (eh)cache.
Ehcache initialization:

@PostConstruct
public void initialize() {
    cacheManager = CacheManager.create();
    CacheConfiguration cacheConfiguration = new CacheConfiguration("referenceDataCache", 1000);
    cacheConfiguration.setTimeToLiveSeconds(CACHE_MINUTES_TO_LIVE * 60);
    refDataEHCache = new Cache(cacheConfiguration);
    cacheManager.addCache(refDataEHCache);
}

Note: only one method can be annotated with this annotation.

All usage of Ehcache starts with the creation of a CacheManager, which is a container for Ehcaches and maintains all aspects of their lifecycle. I use the CacheManager.create() method, a factory method that creates a singleton CacheManager with default config, or returns it if it already exists:

cacheManager = CacheManager.create();

I then built a CacheConfiguration object by providing the name of the cache ("referenceDataCache") and the maximum number of elements in memory (maxEntriesLocalHeap) before they are evicted (0 == no limit), and finally I set the default amount of time to live for an element from its creation date:

CacheConfiguration cacheConfiguration = new CacheConfiguration("referenceDataCache", 1000);
cacheConfiguration.setTimeToLiveSeconds(CACHE_MINUTES_TO_LIVE * 60);

Now, with the help of the CacheConfiguration object, I programmatically create my reference data cache and add it to the CacheManager. Note that caches are not usable until they have been added to a CacheManager:

refDataEHCache = new Cache(cacheConfiguration);
cacheManager.addCache(refDataEHCache);

Note: you can also create the caches declaratively: when the CacheManager is created, it creates the caches found in the configuration. You can create a CacheManager by specifying the path of a configuration file, from a configuration in the classpath, from a configuration in an InputStream, or by having the default ehcache.xml file in your classpath. Take a look at the Ehcache code samples for more information.

1.2.2.
Get data from cache:

@Override
@Lock(LockType.READ)
public ReferenceData getReferenceData() {
    Element element = refDataEHCache.get(ALL_REFERENCE_DATA_KEY);
    if (element != null) {
        return (ReferenceData) element.getObjectValue();
    } else {
        ReferenceData referenceData = referenceDataLogic.getReferenceData();
        refDataEHCache.put(new Element(ALL_REFERENCE_DATA_KEY, referenceData));
        return referenceData;
    }
}

First I try to get the element from the cache based on its key; if it's present in the cache (element != null) it is returned, otherwise the data is retrieved from the service class and placed in the cache for future requests.

Note: the @Lock(LockType.READ) annotation specifies the concurrency lock type for singleton beans with container-managed concurrency. When set to LockType.READ, it permits full concurrent access to the method (assuming no write locks are held). This is exactly what I wanted, as I only need to do read operations. The other, more conservative option, @Lock(LockType.WRITE) (which is the default, by the way), enforces exclusive access to the bean instance. That would make the method slower in a highly concurrent environment.

1.2.3. Clear the cache

@Override
public void evictAll() {
    cacheManager.clearAll();
}

The clearAll() method of the CacheManager clears the contents of all caches in the CacheManager, without removing any caches. I used it here for simplicity, and because I only have one cache to refresh.

Note: if you have several caches (that is, several cache names) and want to clear only one, you need to use CacheManager.clearAllStartingWith(String prefix), which clears the contents of all caches in the CacheManager whose names start with the prefix, without removing them.

2. How to trigger flushing the cache

The second part of this post deals with the possibilities for clearing the cache. Since the cache implementation is an enterprise java bean, we can call it either from an MBean or, why not, from a web service.

2.1.
MBean

If you are new to Java Management Extensions (JMX), a Java technology that supplies tools for managing and monitoring applications, system objects, devices (e.g. printers) and service-oriented networks, where those resources are represented by objects called MBeans (Managed Beans), I highly recommend starting with the tutorial Trail: Java Management Extensions (JMX).

2.1.1. Interface

The exposed method will only allow resetting the cache via JMX:

@MXBean
public interface CacheResetMXBean {
    void resetReferenceDataCache();
}

"An MXBean is a type of MBean that references only a predefined set of data types. In this way, you can be sure that your MBean will be usable by any client, including remote clients, without any requirement that the client have access to model-specific classes representing the types of your MBeans. MXBeans provide a convenient way to bundle related values together, without requiring clients to be specially configured to handle the bundles." [5]

2.1.2.
Implementation

CacheReset MXBean implementation:

@Singleton
@Startup
public class CacheReset implements CacheResetMXBean {

    private MBeanServer platformMBeanServer;
    private ObjectName objectName = null;

    @EJB
    ReferenceDataCache referenceDataCache;

    @PostConstruct
    public void registerInJMX() {
        try {
            objectName = new ObjectName("org.codingpedia.simplecacheexample:type=CacheReset");
            platformMBeanServer = ManagementFactory.getPlatformMBeanServer();

            // unregister the mbean before registering it again
            Set<ObjectName> existing = platformMBeanServer.queryNames(objectName, null);
            if (existing.size() > 0) {
                platformMBeanServer.unregisterMBean(objectName);
            }
            platformMBeanServer.registerMBean(this, objectName);
        } catch (Exception e) {
            throw new IllegalStateException("Problem during registration of Monitoring into JMX:" + e);
        }
    }

    @Override
    public void resetReferenceDataCache() {
        referenceDataCache.evictAll();
    }
}

Note:

- As mentioned, the implementation only calls the evictAll() method of the injected singleton bean described in the previous section.
- The bean is also defined as @Singleton.
- The @Startup annotation causes the bean to be instantiated by the container when the application starts (eager initialization).
- I use the @PostConstruct functionality again. Here the bean is registered in JMX, first checking whether the ObjectName is already in use and unregistering it if so.

2.2. REST service call

I've also built in the possibility to clear the cache by calling a REST resource.
This happens when you execute an HTTP POST on (rest-context)/reference-data/flush-cache. Trigger cache refresh via REST resource:

@Path("/reference-data")
public class ReferenceDataResource {

    @EJB
    ReferenceDataCache referenceDataCache;

    @POST
    @Path("flush-cache")
    public Response flushReferenceDataCache() {
        referenceDataCache.evictAll();
        return Response.status(Status.OK).entity("Cache successfully flushed").build();
    }

    @GET
    @Produces({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
    public Response getReferenceData(@QueryParam("version") String version) {
        ReferenceData referenceData = referenceDataCache.getReferenceData();
        if (version != null && version.equals(referenceData.getVersion())) {
            return Response.status(Status.NOT_MODIFIED).entity("Reference data was not modified").build();
        } else {
            return Response.status(Status.OK).entity(referenceData).build();
        }
    }
}

Notice the version query parameter in the @GET getReferenceData(...) method. It represents a hash of the reference data, and if the data hasn't been modified the client will receive a 304 Not Modified HTTP status. This is a nice way to spare some bandwidth, especially if you have mobile clients. See my post Tutorial – REST API design and implementation in Java with Jersey and Spring for a detailed discussion of REST service design and implementation.

Note: in a clustered environment, you need to call resetCache(…) on each JVM where the application is deployed when the reference data changes.

Well, that's it. In this post we've learned how to build a simple reference data cache in Java EE with the help of Ehcache. Of course you can easily extend the cache functionality to offer more granular access to/clearing of cached objects. Don't forget to use LockType.WRITE for the clear methods in that case.

Reference: How to build and clear a reference data cache with singleton EJBs, Ehcache and MBeans from our JCG partner Adrian Matei at the Codingpedia.org blog.
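Stripped of the Ehcache and EJB specifics, the read-through-with-TTL behavior described above boils down to a few lines. A minimal illustrative sketch (not the Ehcache implementation; no size bound, no statistics):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Toy read-through cache with a time-to-live, mirroring what the singleton
// bean delegates to Ehcache: serve from cache while fresh, reload on expiry.
public class TtlCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long expiresAt;
        Entry(V value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
    }

    private final Map<K, Entry<V>> map = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public TtlCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    public V get(K key, Supplier<V> loader) {
        Entry<V> e = map.get(key);
        long now = System.currentTimeMillis();
        if (e == null || e.expiresAt <= now) {          // miss or expired: load
            e = new Entry<>(loader.get(), now + ttlMillis);
            map.put(key, e);
        }
        return e.value;
    }

    public void evictAll() { map.clear(); }             // same role as clearAll()

    public static void main(String[] args) {
        TtlCache<String, String> cache = new TtlCache<>(60_000);
        String a = cache.get("REF", () -> "loaded");    // loads via the supplier
        String b = cache.get("REF", () -> "reloaded");  // served from cache
        System.out.println(a + " " + b);                // loaded loaded
        cache.evictAll();
        System.out.println(cache.get("REF", () -> "reloaded")); // reloaded
    }
}
```

The Supplier plays the role of the injected ReferenceDataLogic service: it is only consulted on a miss or after expiry, which is exactly the read-through pattern of getReferenceData().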

Slides from Nuts and Bolts of WebSocket at #Devoxx 2014

Peter Moskovits from Kaazing (@peterm_kaazing) and I gave a university talk at Devoxx 2014 on Nuts and Bolts of WebSocket. The slides are now available online. The entire session was recorded and will be made available on parleys.com in the coming weeks/months. The complete script for the demo is available at github.com/arun-gupta/nuts-and-bolts-of-websocket (including pointers to the demos). Most of the demos are also available at the following links:

- Chrome Developer Tools: http://www.websocket.org/echo.html
- Canonical Chat Server: https://github.com/javaee-samples/javaee7-samples/tree/master/websocket/chat
- Chat Server on OpenShift: http://blog.arungupta.me/2014/10/websocket-chat-wildfly-openshift-techtip51/
- Collaborative Whiteboard: https://github.com/javaee-samples/javaee7-samples/tree/master/websocket/whiteboard
- WebSocket Client API: https://github.com/javaee-samples/javaee7-samples/tree/master/websocket/google-docs
- User-name based security: http://blog.arungupta.me/2014/10/securing-websockets-username-password-servlet-security-techtip49/
- TLS-based security: http://blog.arungupta.me/2014/10/securing-websocket-wss-https-tls-techtip50/
- Load Balancing WebSocket: http://blog.arungupta.me/2014/08/load-balance-websockets-apache-httpd-techtip48/
- JMS and JavaScript: Pub/Sub over WebSocket: http://demo.kaazing.com/demo/jms/javascript/jms-javascript.html
- STOMP over WebSocket: http://blog.arungupta.me/2014/11/stomp-over-websocket-tech-tip-53/
- Compare WebSocket with REST: https://github.com/javaee-samples/javaee7-samples/tree/master/websocket/websocket-vs-rest-payload

Positive feedback from Twitter overall:

Nice working application build in seconds! #Devoxx #WebSocket And once again, we developer are very… http://t.co/mLsnKf4Vcp — Miguel Discart (@Miguel_Discart) November 10, 2014

Who’s doing Nuts and Bolts of #Websocket ?
@arungupta #Devoxx pic.twitter.com/nAeGFTj6vx — Antoine Sabot-Durand (@antoine_sd) November 10, 2014

@arungupta and @peterm_kaazing gave a very informative code live session about #Websocket in #Devoxx — jianing zhang (@jianinz) November 10, 2014

#Devoxx #websocket really interesting uni session today in Antwerpen. Arun and Peter giving good info — Koen (@koendroid) November 10, 2014

Nuts and Bolts of #Websocket SRO #Devoxx @arungupta pic.twitter.com/XoedutFiVJ — Java (@java) November 10, 2014

Really cool WebSocket demo. Open http://t.co/C9ybgFPTLw and control it from mobile. #devoxx #websocket — DiversIT Europe (@diversit) November 10, 2014

@arungupta & @peterm_kaazing gave amazing talk on #websocket at #devoxx — Vladimir Milovanović (@mmwlada) November 10, 2014

At #Devoxx I Just developed & deployed my first websocket app on WildfFly on Openshift in 4mins! Thanks @arungupta Great Talk — Sabri Skhiri (@sskhiri) November 10, 2014

And it was rated the top talk for the day until 6pm.

With Red Hat spirit, “the more you share, the more you grow”, share the slides and demos all over and spread the love! Happy Devoxx!

Reference: Slides from Nuts and Bolts of WebSocket at #Devoxx 2014 from our JCG partner Arun Gupta at the Miles to go 2.0 … blog....

CallSerially The EDT & InvokeAndBlock (Part 1)

We last explained some of the concepts behind the EDT in 2008, so it’s high time we wrote about it again. There is a section about it in the developer guide, as well as in the courses on Udemy, but since this is the most important thing to understand in Codename One, it bears repeating. One of the nice things about the EDT is that many of its concepts are similar to those in pretty much every other GUI environment (Swing/FX, Android, iOS etc.), so if you can understand this explanation it might help you when working on other platforms too.

Codename One can have as many threads as you want; however, there is one thread created internally in Codename One named “EDT”, for Event Dispatch Thread. This name doesn’t do the thread justice, since it handles everything, including painting etc. You can imagine the EDT as a loop such as this:

while(codenameOneRunning) {
    performEventCallbacks();
    performCallSeriallyCalls();
    drawGraphicsAndAnimations();
    sleepUntilNextEDTCycle();
}

The general rule of thumb in Codename One is: every time Codename One invokes a method, it’s probably on the EDT (unless explicitly stated otherwise); every time you invoke something in Codename One, it should be on the EDT (unless explicitly stated otherwise). There are a few notable special cases:

- NetworkManager/ConnectionRequest – use the network thread internally and not the EDT. However, they can/should be invoked from the EDT.
- BrowserNavigationCallback – due to its unique function it MUST be invoked on the native browser thread.
- Display’s invokeAndBlock/startThread – create completely new threads.

Other than those, pretty much everything is on the EDT. If you are unsure, you can use the Display.isEDT method to check whether you are on the EDT or not.

EDT Violations

You can violate the EDT in two major ways:

- Call a method in Codename One from a thread that isn’t the EDT thread (e.g. the network thread or a thread created by you).
- Do a CPU-intensive task (such as reading a large file) on the EDT – this will effectively block all event processing, painting etc., making the application feel slow.

Luckily, we have a tool in the simulator: the EDT violation detection tool. This effectively prints a stack trace for suspected violations of the EDT. It’s not foolproof and might land you with false positives, but it should help you with some of these issues, which are hard to detect. So how do you prevent an EDT violation? To prevent abuse of the EDT thread (slow operations on the EDT), just spawn a new thread using either new Thread(), Display.startThread or invokeAndBlock (more on that later). Then, when you need to broadcast your updates back to the EDT, you can use callSerially or callSeriallyAndWait.

CallSerially

callSerially invokes the run() method of the Runnable argument it receives on the Event Dispatch Thread. This is very useful if you are on a separate thread, but it is also occasionally useful when we are on the EDT and want to postpone actions to the next cycle of the EDT (more on that next time). callSeriallyAndWait is identical to callSerially, but it waits for the callSerially to complete before returning. For obvious reasons, it can’t be invoked on the EDT. In the second part of this mini tutorial I will discuss invokeAndBlock and why we might want to use callSerially when we are already on the EDT. Update: you can read part 2 of this post here.

Reference: CallSerially The EDT & InvokeAndBlock (Part 1) from our JCG partner Shai Almog at the Codename One blog....
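The prevention pattern described in the article above – do the slow work on a worker thread, then hop back to the EDT with callSerially – can be modeled outside Codename One with a single-threaded executor standing in for the EDT. This is an illustrative sketch of the idea only, not the Codename One API:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class EdtModelDemo {
    // Stand-in for the EDT: one thread that processes queued work in order
    static final ExecutorService EDT = Executors.newSingleThreadExecutor();

    // Analogue of callSerially(Runnable): queue work for the "EDT"
    static void callSerially(Runnable r) {
        EDT.execute(r);
    }

    public static void main(String[] args) throws Exception {
        CountDownLatch done = new CountDownLatch(1);
        // Slow work (network, large file reads) belongs on its own thread...
        new Thread(() -> {
            String result = "loaded"; // pretend this was an expensive read
            // ...and the UI update is posted back to the "EDT"
            callSerially(() -> {
                System.out.println("updating UI with: " + result);
                done.countDown();
            });
        }).start();
        done.await();
        EDT.shutdown();
    }
}
```

In real Codename One code the equivalent hop is Display.getInstance().callSerially(runnable) invoked from the worker thread.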

The State of REST

The S in REST stands for State. Unfortunately, state is an overloaded word. In this post I’ll discuss the two different kinds of state that apply to REST APIs.

Applications

The first type of state is application state, as in Hypermedia As The Engine Of Application State (HATEOAS), the distinguishing feature of the REST architectural style. We must first understand what exactly an application in a RESTful architecture is and isn’t. A REST API is not an interface to an application, but an interface for an application. A RESTful service is just a bunch of interrelated and interconnected resources. In and of themselves they don’t make up an application; it’s a particular usage pattern of those resources that turns them into an application. The term application pertains more to the client than to the server. It’s possible to build different applications on top of the same resources. It’s also possible to build an application out of resources hosted on different servers or even by different organizations (mashups).

Application State vs. Resource State

The stateless constraint of REST doesn’t say that a server can’t maintain any state; it only says that a server shouldn’t maintain application (or session) state. A server is allowed to maintain other state. In fact, most servers would be completely useless if they didn’t. Think of the Amazon web store without any books! We call this resource state to distinguish it from application state. So where do we draw the line between resource state and application state? Resource state is information we want to be available between multiple sessions of the same user, and between sessions of different users. Resource state can initially be supplied by either servers (e.g. books) or clients (e.g. book reviews). Application state is the information that pertains to one particular session of the application. The contents of my shopping cart could be application state, for instance.
Note that this is not how Amazon implemented it; they keep this state on the server. That doesn’t mean that the people at Amazon don’t understand REST. The web browser that I use to shop isn’t sophisticated enough to maintain the application state. Also, they want me to be able to close my browser and return to my shopping cart tomorrow. This example shows that what is application state and what is resource state is a design decision. Application state pertains to the goal the user is trying to achieve while driving the client. It is this state that we’re referring to when we talk about ...

Someday Is a Lie

“You don’t have a time machine— you’re living the same twenty-four hours we all are. You can barely make it through your day with all the current things there are to do; when is “someday” finally going to arrive? The answer is, of course, never. Today is it.”  - Andrew J. Mellen, “Unstuff Your Life!” If you’re a software professional, you’ve probably been faced with (or at least affected by) a quintessential architectural decision: Do you take the time hit to do something the “right way” now or do you settle for a less-desirable, “good enough” solution? I’ve been both an active participant and a passive bystander in many such debates. It always plays out the same way: Everyone wants to do the right thing now, but most acknowledge that there just isn’t time (except for that one guy who wants to “make time” over the weekend). No one wants the “good enough” solution, but they rationalize it by saying that “someday” they’ll circle back and do it the right way. They may even get specific, but it’s usually after or when something – “after this big release” or “when we have more resources.” Unfortunately, these people are lying. Now you might be thinking, “that’s a little harsh.” No, it’s not. If I felt the least bit hesitant or apologetic about that sentence, I wouldn’t have given it its own paragraph. Remember that I’m not throwing stones – I’ve been in that spot and said those things, too. But when I think back to every case where I or my team intended to revisit something and do what we should have done in the first place, I can quantify how often that actually happened. Zero percent. Early in my career, I let myself off the hook by saying that I was naive or there were circumstances outside of my control that prevented us from coming back and doing the right thing. But by now I can be honest with myself and admit that I had my doubts that “someday” would ever come when I said it. Deep down I knew that I was buying into the lie. I was lying to myself and my team. 
We could analyze why this “mythical land where time stands still” (as Mellen calls it) never comes, but don’t we already know the answer? Of course, when weighing two choices, “already done” will always win. And that assumes that the topic even comes up for debate again! More often, that thing you were going to do is simply forgotten altogether, or added to the backburner list of guilt-inducing things you’d like to do but never will. If those things accumulate and begin to weigh you down, you might even be tempted to move on to the next job and let them become someone else’s problem. So my advice must be to do it the “right way” from the get-go, right? Actually, no. You don’t have time for that, remember? I believe you. But instead of buying into the someday myth, I propose a third option: Plan the someday. Be explicit about what you are going to do and when you are going to do it. “We will come back and write unit tests for that” or “I will fix that concurrency issue.” When, specifically? Don’t wonder it to yourself – ask it out loud and commit to it as a team. A conditional or relative when will not suffice – it can’t depend on some release deadline or an additional team member that you hope to hire. It can’t be a user story or a bug either – these devices are usually only effective in illustrating the technical debt to your managers. They don’t ensure that the work will actually get done, because your managers are pressured to show value to their managers and other priorities will prevail. At the risk of sounding cynical, a story or bug of this nature is typically only given attention when an important customer (or prospective customer) notices and complains about it. Enter the calendar. I believe that there is no resource more precious than time, so it follows that there is no tool more powerful than the calendar. The calendar does not lie. Ever. You may not always like what it tells you, but it does not care. 
It says, “Tomorrow is your mom’s birthday and you have one day left to figure out what you’re getting her. Good luck with that!” @#F*$%! Thanks, calendar. You’re super helpful. By putting your intentions on a calendar, preferably one that everyone can see regularly, you are committing to act. You will rework the logging layer of your API, or you will fix the auto-generated documentation bug, or whatever. You will do it, and since you think it will take n days (n + (n/5) to be safe), you put that on the calendar. When sprint or release planning time comes around, you will look at the calendar and plan on accomplishing less because you will be doing that thing. Keep in mind that I’m a “tell, don’t ask” type of person, and occasionally this can get me into trouble. User stories and bugs effectively ask your managers if you can do this work, whereas scheduling the work yourself tells your managers that you are doing it and that they need to support you in that endeavor. As such, use this strategy with care. It is helpful if your manager was part of the “right way vs good enough” discussion when it happened. Make every effort to bring them into it then (or right after), so that you can remind them that we aaaaaaall agreed on this and that it’s not a surprise. It also helps if you move your hands in an outward-circling kumbaya sort of way when you say it. In part two of this post, I’ll explore one of my favorite calendars: Google calendar and its API. Google has thoroughly and elegantly solved the hard parts of a time-management solution (like timezones, internationalization, conflicts, and authorization) so you can focus on the fun parts (like integration and… ooh, colors!). Don’t reinvent the calendar wheel – harness its power to beat the “someday” lie into submission forever. Rawr! 
Until then, I hope that I’ve sufficiently warned you of the perils of “someday.” Come to think of it, I didn’t even go into its optimistic flavor – coding ahead (aka gold plating) your application with features because they might be needed someday. This is the opposite of technical debt, where you end up with (arguably) too much code instead of too little. The goal is to strike the right balance between these two extremes, so think hard about what is needed or wanted, and by whom. “We need to add this thing because we might need to do this other thing… someday.” Really? No. Just… stop. Stop overcomplicating. Stop making excuses. Stop starting and start finishing. Stop letting “someday” ruin today.Reference: Someday Is a Lie from our JCG partner Lyndsey Padget at the Keyhole Software blog....
Java Code Geeks and all content copyright © 2010-2015, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.