Discovering the power of Apache Camel

In recent years, ESB software has become more and more popular. While most people know roughly what an ESB is, far fewer clearly understand the exact role of the different components of such an architecture. For instance, Apache ServiceMix is composed of three major components: Apache Karaf (the OSGi container), Apache ActiveMQ (the message broker) and Apache Camel. So what exactly is Camel? What is a "routing and mediation engine"? What is it useful for? I have been working with Camel for about a year now, and I think that, although I am by no means a Camel guru, I have enough hindsight to show you the appeal and power of Camel through some very concrete examples. For the sake of clarity, I will use the Spring DSL for the rest of this article, assuming the reader is familiar with Spring syntax.

The Use Case

Let us imagine we want to implement the following scenario using Camel. Requests for product information arrive as flat files (in CSV format) in a specific folder. Each line of such a file contains a single request from a particular customer about a particular car model. We want to send these customers an email about the car they are interested in. To do so, we first need to invoke a web service to get additional customer data (e.g. their email address). Then we have to fetch the car characteristics (let us say a text) from a database. As we want a decent look (i.e. HTML) for our mails, a small text transformation will also be required. Of course, we do not want a mere sequential handling of the requests, but would like to introduce some parallelism. Similarly, we do not want to send the exact same mail many times to different customers, but rather a single mail to multiple recipients. It would also be nice to exploit the clustering facilities of our back end to load-balance our calls to the web services.
And finally, should the processing of a request fail, we want to keep a trace of the originating request in some way or another, so that we can, for instance, send it by postal mail.

A (possible) Camel implementation:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://camel.apache.org/schema/spring
                           http://camel.apache.org/schema/spring/camel-spring.xsd">

  <camelContext xmlns="http://camel.apache.org/schema/spring" errorHandlerRef="myDLQ">

    <!-- 2 redeliveries max before a failed message is placed into a DLQ -->
    <errorHandler id="myDLQ" type="DeadLetterChannel"
                  deadLetterUri="activemq:queue:errors" useOriginalMessage="true">
      <redeliveryPolicy maximumRedeliveries="2"/>
    </errorHandler>

    <!-- Poll a specific folder every 30 sec -->
    <route id="route1">
      <from uri="file:///Users/bli/folderToPoll?delay=30000&amp;delete=true"/>
      <unmarshal>
        <csv/>
      </unmarshal>
      <split>
        <simple>${body}</simple>
        <setHeader headerName="customerId">
          <simple>${body[1]}</simple>
        </setHeader>
        <setHeader headerName="carModelId">
          <simple>${body[2]}</simple>
        </setHeader>
        <setBody>
          <simple>${body[0]}</simple>
        </setBody>
        <to uri="activemq:queue:individualRequests?disableReplyTo=true"/>
      </split>
    </route>

    <!-- Consume individual (JMS) mailing requests -->
    <route id="route2">
      <from uri="activemq:queue:individualRequests?maxConcurrentConsumers=5"/>
      <pipeline>
        <to uri="direct:getCustomerEmail"/>
        <to uri="direct:sendMail"/>
      </pipeline>
    </route>

    <!-- Obtain the customer email by parsing the XML response of a REST web service -->
    <route id="route3">
      <from uri="direct:getCustomerEmail"/>
      <setBody>
        <constant/>
      </setBody>
      <loadBalance>
        <roundRobin/>
        <to uri="http://backend1.mycompany.com/ws/customers?id={customerId}&amp;authMethod=Basic&amp;authUsername=geek&amp;authPassword=secret"/>
        <to uri="http://backend2.mycompany.com/ws/customers?id={customerId}&amp;authMethod=Basic&amp;authUsername=geek&amp;authPassword=secret"/>
      </loadBalance>
      <setBody>
        <xpath resultType="java.lang.String">/customer/general/email</xpath>
      </setBody>
    </route>

    <!-- Group individual sendings by car model -->
    <route id="route4">
      <from uri="direct:sendMail"/>
      <aggregate strategyRef="myAggregator" completionSize="10">
        <correlationExpression>
          <simple>header.carModelId</simple>
        </correlationExpression>
        <completionTimeout>
          <constant>60000</constant>
        </completionTimeout>
        <setHeader headerName="recipients">
          <simple>${body}</simple>
        </setHeader>
        <pipeline>
          <to uri="direct:prepareMail"/>
          <to uri="direct:sendMailToMany"/>
        </pipeline>
      </aggregate>
    </route>

    <!-- Prepare the mail content -->
    <route id="route5">
      <from uri="direct:prepareMail"/>
      <setBody>
        <simple>header.carModelId</simple>
      </setBody>
      <pipeline>
        <to uri="sql:SELECT xml_text FROM template WHERE template_id =# ?dataSourceRef=myDS"/>
        <to uri="xslt:META-INF/xsl/email-formatter.xsl"/>
      </pipeline>
    </route>

    <!-- Send a mail to multiple recipients -->
    <route id="route6">
      <from uri="direct:sendMailToMany"/>
      <to uri="smtp://mail.mycompany.com:25?username=geek&amp;password=secret&amp;from=no-reply@mycompany.com&amp;to={recipients}&amp;subject=Your request&amp;contentType=text/html"/>
      <log message="Mail ${body} successfully sent to ${headers.recipients}"/>
    </route>

  </camelContext>

  <!-- Pure Spring beans referenced in the various Camel routes -->

  <!-- The ActiveMQ broker -->
  <bean id="activemq" class="org.apache.activemq.camel.component.ActiveMQComponent">
    <property name="brokerURL" value="tcp://localhost:61616"/>
  </bean>

  <!-- A datasource to our database -->
  <bean id="myDS" class="org.apache.commons.dbcp.BasicDataSource">
    <property name="driverClassName" value="org.h2.Driver"/>
    <property name="url" value="jdbc:h2:file:/Users/bli/db/MyDatabase;AUTO_SERVER=TRUE;TRACE_LEVEL_FILE=0"/>
    <property name="username" value="sa"/>
    <property name="password" value="sa"/>
  </bean>

  <!-- An aggregator implementation -->
  <bean id="myAggregator" class="com.mycompany.camel.ConcatBody"/>

</beans>

And the code of the (only!) Java class:

public class ConcatBody implements AggregationStrategy {

    public static final String SEPARATOR = ", ";

    public Exchange aggregate(Exchange aggregate, Exchange newExchange) {
        if (aggregate == null) {
            // The aggregation for the very first exchange is the exchange itself
            return newExchange;
        } else {
            // Otherwise, we augment the body of the current aggregate with the new incoming exchange
            String originalBody = aggregate.getIn().getBody(String.class);
            String bodyToAdd = newExchange.getIn().getBody(String.class);
            aggregate.getIn().setBody(originalBody + SEPARATOR + bodyToAdd);
            return aggregate;
        }
    }
}

Some explanations

Route "route1" deals with the processing of incoming flat files. The file content is first unmarshalled (using the CSV format) and then split into lines/records. Each line is turned into an individual notification that is sent to a JMS queue. Route "route2" consumes these notifications. Basically, fulfilling a request means doing two things in sequence ("pipeline"): get the customer email (route3) and send them a mail (route4). Note the 'maxConcurrentConsumers' parameter, which is used to easily satisfy our parallelism requirement. Route "route3" models how to get the customer email: simply by parsing (using XPath) the XML response of a (secured) REST web service that is available on two back-end nodes. Route "route4" contains the logic for sending bulk mails. Each time 10 similar send requests are collected (that is, in our case, 10 requests for the same car model), and given that we are not ready to wait more than 1 minute, we want the whole process to continue with a new message (or "exchange" in Camel terminology) that is the concatenation of the 10 assembled messages. Continuing the process means first preparing the mail body (route5) and then sending it to the group (route6).
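Stripped of the Camel Exchange API, the concatenation performed by ConcatBody boils down to a fold over message bodies. Here is a minimal plain-Java sketch of that logic (class and method names are illustrative, not part of the article's code):

```java
import java.util.List;

public class ConcatBodySketch {

    static final String SEPARATOR = ", ";

    // Mirrors ConcatBody.aggregate: a null aggregate yields the first body,
    // otherwise the new body is appended with the separator.
    static String aggregate(String aggregate, String newBody) {
        return (aggregate == null) ? newBody : aggregate + SEPARATOR + newBody;
    }

    // Folds a batch of bodies the way the Camel aggregator would, one exchange at a time.
    static String aggregateAll(List<String> bodies) {
        String acc = null;
        for (String body : bodies) {
            acc = aggregate(acc, body);
        }
        return acc;
    }
}
```

Applied to a batch of customer email addresses, this produces exactly the comma-separated recipient list that route6 later hands to the SMTP endpoint.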
In "route5", a SQL query is issued in order to get the appropriate text for the car model. To that result we apply a small XSLT transformation (which replaces the current exchange body with the output of the XSL transformation). When entering "route6", an exchange contains everything we need: the list of recipients (as a header) and the HTML text to be sent (in the body). We can therefore proceed to the actual sending using the SMTP protocol. In case of errors (for instance, temporary network problems) anywhere in the whole process, Camel will make at most two additional attempts before giving up. In that case, the originating message is automatically placed by Camel into a JMS dead-letter queue.

Conclusion

Camel is really a great framework; not perfect, but great. You will be surprised to see how few lines of code are needed to model a complex scenario or route. You may also be glad to see how clear your code is and how quickly your colleagues can understand the logic of your routes. But that is certainly not the main advantage. Using Camel primarily invites you to think in terms of Enterprise Integration Patterns (aka "EIP"); it helps you decompose the original complexity into less complex (possibly concurrent) sub-routes using well-known and proven techniques, thereby leading to more modular, more flexible implementations. In particular, using decoupling techniques facilitates the potential replacement or refactoring of individual parts or components of your solution.

Reference: Discovering the power of Apache Camel from our W4G partner Bernard Ligny.

MapReduce Algorithms – Order Inversion

This post is another segment in the series presenting MapReduce algorithms as found in the Data-Intensive Text Processing with MapReduce book. Previous installments are Local Aggregation, Local Aggregation Part II and Creating a Co-Occurrence Matrix. This time we will discuss the order inversion pattern. The order inversion pattern exploits the sorting phase of MapReduce to push data needed for calculations to the reducer ahead of the data that will be manipulated. Before you dismiss this as an edge case for MapReduce, I urge you to read on, as we will discuss how to use sorting to our advantage and cover using a custom partitioner, both of which are useful tools to have available. Although many MapReduce programs are written at a higher level of abstraction, e.g. Hive or Pig, it's still helpful to have an understanding of what's going on at a lower level. The order inversion pattern is found in chapter 3 of the Data-Intensive Text Processing with MapReduce book.

To illustrate the order inversion pattern we will be using the Pairs approach from the co-occurrence matrix pattern. When creating the co-occurrence matrix, we track the total counts of when words appear together. At a high level we take the Pairs approach and add a small twist: in addition to having the mapper emit a word pair such as ("foo","bar"), we will emit an additional word pair of ("foo","*"), and will do so for every word pair, so we can easily obtain a total count of how often the left-most word appears and use that count to calculate our relative frequencies. This approach raises two specific problems. First, we need to find a way to ensure word pairs ("foo","*") arrive at the reducer first. Secondly, we need to make sure all word pairs with the same left word arrive at the same reducer. Before we solve those problems, let's take a look at our mapper code.

Mapper Code

First we need to modify our mapper from the Pairs approach.
At the bottom of each loop, after we have emitted all the word pairs for a particular word, we emit the special token WordPair("word","*") along with the count of times the word on the left was found.

public class PairsRelativeOccurrenceMapper extends Mapper<LongWritable, Text, WordPair, IntWritable> {
    private WordPair wordPair = new WordPair();
    private IntWritable ONE = new IntWritable(1);
    private IntWritable totalCount = new IntWritable();

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        int neighbors = context.getConfiguration().getInt("neighbors", 2);
        String[] tokens = value.toString().split("\\s+");
        if (tokens.length > 1) {
            for (int i = 0; i < tokens.length; i++) {
                tokens[i] = tokens[i].replaceAll("\\W+", "");
                if (tokens[i].equals("")) {
                    continue;
                }
                wordPair.setWord(tokens[i]);

                int start = (i - neighbors < 0) ? 0 : i - neighbors;
                int end = (i + neighbors >= tokens.length) ? tokens.length - 1 : i + neighbors;
                for (int j = start; j <= end; j++) {
                    if (j == i) continue;
                    wordPair.setNeighbor(tokens[j].replaceAll("\\W", ""));
                    context.write(wordPair, ONE);
                }
                wordPair.setNeighbor("*");
                totalCount.set(end - start);
                context.write(wordPair, totalCount);
            }
        }
    }
}

Now that we have a way to track the total number of times a particular word has been encountered, we need to make sure those special word pairs reach the reducer first so a total can be tallied to calculate the relative frequencies. We let the sorting phase of the MapReduce process handle this for us by modifying the compareTo method on the WordPair object.

Modified Sorting

We modify the compareTo method on the WordPair class so that when a "*" character is encountered on the right, that particular object is pushed to the top.
@Override
public int compareTo(WordPair other) {
    int returnVal = this.word.compareTo(other.getWord());
    if (returnVal != 0) {
        return returnVal;
    }
    if (this.neighbor.toString().equals("*")) {
        return -1;
    } else if (other.getNeighbor().toString().equals("*")) {
        return 1;
    }
    return this.neighbor.compareTo(other.getNeighbor());
}

By modifying the compareTo method we are now guaranteed that any WordPair with the special character is sorted to the top and arrives at the reducer first. This leads to our second specialization: how can we guarantee that all WordPair objects with a given left word are sent to the same reducer? The answer is to create a custom partitioner.

Custom Partitioner

Intermediate keys are shuffled to reducers by calculating the hash code of the key modulo the number of reducers. But our WordPair objects contain two words, so taking the hash code of the entire object clearly won't work. We need to write a custom Partitioner that takes only the left word into consideration when determining which reducer to send the output to.

public class WordPairPartitioner extends Partitioner<WordPair, IntWritable> {

    @Override
    public int getPartition(WordPair wordPair, IntWritable intWritable, int numPartitions) {
        // Mask off the sign bit so a negative hashCode cannot produce a negative partition number
        return (wordPair.getWord().hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}

Now we are guaranteed that all of the WordPair objects with the same left word are sent to the same reducer. All that is left is to construct a reducer that takes advantage of the format of the data being sent.

Reducer

Building the reducer for the order inversion pattern is straightforward. It involves keeping a counter variable and a "current" word variable. The reducer checks the input key WordPair for the special character "*" on the right. If the word on the left is not equal to the "current" word, we reset the counter and sum all of the values to obtain the total number of times the given current word was observed.
We then process the next WordPair objects, summing the counts and dividing by our counter variable to obtain the relative frequency. This process continues until another special character is encountered and the process starts over.

public class PairsRelativeOccurrenceReducer extends Reducer<WordPair, IntWritable, WordPair, DoubleWritable> {
    private DoubleWritable totalCount = new DoubleWritable();
    private DoubleWritable relativeCount = new DoubleWritable();
    private Text currentWord = new Text("NOT_SET");
    private Text flag = new Text("*");

    @Override
    protected void reduce(WordPair key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        if (key.getNeighbor().equals(flag)) {
            if (key.getWord().equals(currentWord)) {
                totalCount.set(totalCount.get() + getTotalCount(values));
            } else {
                currentWord.set(key.getWord());
                totalCount.set(0);
                totalCount.set(getTotalCount(values));
            }
        } else {
            int count = getTotalCount(values);
            relativeCount.set((double) count / totalCount.get());
            context.write(key, relativeCount);
        }
    }

    private int getTotalCount(Iterable<IntWritable> values) {
        int count = 0;
        for (IntWritable value : values) {
            count += value.get();
        }
        return count;
    }
}

By manipulating the sort order and creating a custom partitioner, we have been able to send the data needed for a calculation to the reducer before the data to which the calculation applies arrives. Although not shown here, a combiner was used to run the MapReduce job. This approach is also a good candidate for the "in-mapper" combining pattern.

Example & Results

Given that the holidays are upon us, I felt it was timely to run an example of the order inversion pattern against the novel "A Christmas Carol" by Charles Dickens. I know it's corny, but it serves the purpose.
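Before looking at the job output, the reducer's bookkeeping can be simulated outside Hadoop: given key/value pairs already sorted so that each ("word","*") total arrives first, the relative frequency is just each pair count divided by the remembered total. A plain-Java sketch (the Row class and all names are illustrative, with no Hadoop types involved):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class OrderInversionSketch {

    // One intermediate key/value as it reaches the reducer: (word, neighbor) -> count.
    static final class Row {
        final String word;
        final String neighbor;
        final int count;

        Row(String word, String neighbor, int count) {
            this.word = word;
            this.neighbor = neighbor;
            this.count = count;
        }
    }

    // Assumes rows are pre-sorted so the "*" total row precedes the ordinary
    // rows for the same left word, exactly as the modified compareTo guarantees.
    static Map<String, Double> relativeFrequencies(List<Row> sortedRows) {
        Map<String, Double> result = new LinkedHashMap<>();
        double totalCount = 0;
        for (Row row : sortedRows) {
            if (row.neighbor.equals("*")) {
                // Special row: remember the total for this left word.
                totalCount = row.count;
            } else {
                // Ordinary row: divide by the total seen earlier for the same word.
                result.put(row.word + "/" + row.neighbor, row.count / totalCount);
            }
        }
        return result;
    }
}
```

If the "*" row arrived last instead of first, the divisions above would run against a stale or zero total, which is exactly why the pattern manipulates the sort order.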
new-host-2:sbin bbejeck$ hdfs dfs -cat relative/part* | grep Humbug
{word=[Humbug] neighbor=[Scrooge]}  0.2222222222222222
{word=[Humbug] neighbor=[creation]} 0.1111111111111111
{word=[Humbug] neighbor=[own]}      0.1111111111111111
{word=[Humbug] neighbor=[said]}     0.2222222222222222
{word=[Humbug] neighbor=[say]}      0.1111111111111111
{word=[Humbug] neighbor=[to]}       0.1111111111111111
{word=[Humbug] neighbor=[with]}     0.1111111111111111
{word=[Scrooge] neighbor=[Humbug]}  0.0020833333333333333
{word=[creation] neighbor=[Humbug]} 0.1
{word=[own] neighbor=[Humbug]}      0.006097560975609756
{word=[said] neighbor=[Humbug]}     0.0026246719160104987
{word=[say] neighbor=[Humbug]}      0.010526315789473684
{word=[to] neighbor=[Humbug]}       3.97456279809221E-4
{word=[with] neighbor=[Humbug]}     9.372071227741331E-4

Conclusion

While calculating relative word occurrence frequencies is probably not a common task, we have been able to demonstrate useful examples of sorting and using a custom partitioner, which are good tools to have at your disposal when building MapReduce programs. As stated before, even if most of your MapReduce code is written at a higher level of abstraction like Hive or Pig, it's still instructive to have an understanding of what is going on under the hood. Thanks for your time.

Reference: MapReduce Algorithms – Order Inversion from our JCG partner Bill Bejeck at the Random Thoughts On Coding blog.

Java EE 7 Community Survey Results!

Work on Java EE 7 presses on under JSR 342. Things are shaping up nicely and Java EE 7 is now in the Early Draft Review stage. At the beginning of November, Oracle posted a little community survey about upcoming Java EE 7 features, and yesterday the results were published. Over 1,100 developers participated in the survey and there were a large number of thoughtful comments on almost every question asked. Compare the prepared PDF attached to the EG mailing-list discussion.

New APIs for the Java EE 7 Profiles

We have a couple of new and upcoming APIs which need to be incorporated into either the Full or the Web Profile. Namely, these are WebSocket 1.0, JSON-P 1.0, Batch 1.0 and JCache 1.0. The community was asked in which profile those should end up. The results about which of them should be in the Full Profile: as the graph depicts, support is relatively the weakest for Batch 1.0, but still good. A lot of folks saw JSON-P and WebSocket 1.0 as critical technologies. The same holds for both with regard to the Web Profile. Support for adding JCache 1.0 and Batch 1.0 is relatively weak; Batch got 51.8% 'No' votes.

Enabling CDI by Default

The majority (73.3%) of developers support enabling CDI by default. The detailed comments also reflect strong general support for CDI as well as a desire for better Java EE alignment with CDI.

Consistent Usage of @Inject

A slight majority (53.3%) of developers support using @Inject consistently across all Java EE JSRs. 28.8% still believe using custom injection annotations is OK. The remaining 18.0% were not sure about the right way to go. The vast majority of commenters were strongly supportive of CDI and general Java EE alignment with CDI.

Expanding the Use of @Stereotype

62.3% of the attending developers support expanding the use of @Stereotype across Java EE. A majority of the comments express ideas about general CDI/Java EE alignment.

Expanding Interceptor Use

96.3% of developers wanted to expand interceptor use to all Java EE components.
35.7% even wanted to expand interceptors to other Java EE managed classes. Most developers (54.9%) were not sure whether there is any place where injection is supported that should not support interceptors. 32.8% thought any place that supports injection should also support interceptors. The remaining 12.2% were certain that there are places where injection should be supported but not interceptors. Thanks for taking the time to answer the survey. This gives a solid decision base for moving on with Java EE 7. Keep the feedback coming and subscribe to the users@javaee-spec.java.net alias (see archives online)!

Reference: Java EE 7 Community Survey Results! from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog.

Authentication against a RESTful Service with Spring Security

1. Overview

This article is focused on how to authenticate against a secure REST API that provides security services; mainly, a RESTful User Account and Authentication Service.

2. The Goal

First, let's go over the actors: the typical Spring Security enabled application needs to authenticate against something, and that something can be a database, LDAP or a REST service. The database is the most common scenario; however, a RESTful UAA (User Account and Authentication) Service can work just as well. For the purpose of this article, the REST UAA Service will expose a single GET operation on /authentication, which will return the Principal information required by Spring Security to perform the full authentication process.

3. The Client

Typically, a simple Spring Security enabled application would use a simple user service as the authentication source:

<authentication-manager alias="authenticationManager">
    <authentication-provider user-service-ref="customUserDetailsService"/>
</authentication-manager>

This would implement org.springframework.security.core.userdetails.UserDetailsService and would return the Principal based on a provided username:

@Component
public class CustomUserDetailsService implements UserDetailsService {
    @Override
    public UserDetails loadUserByUsername(String username) {
        ...
    }
}

When a client authenticates against the RESTful UAA Service, working only with the username is no longer enough: the client now needs the full credentials, both username and password, when it sends the authentication request to the service. This makes perfect sense, as the service itself is secured, so the request needs to contain the authentication credentials in order to be handled properly. From the point of view of Spring Security, this cannot be done from within loadUserByUsername, because the password is no longer available at that point; we need to take control of the authentication process sooner.
We can do this by providing the full authentication provider to Spring Security:

<authentication-manager alias="authenticationManager">
    <authentication-provider ref="restAuthenticationProvider"/>
</authentication-manager>

Overriding the entire authentication provider gives us a lot more freedom to perform custom retrieval of the Principal from the Service, but it does come with a fair bit of complexity. The standard authentication provider, DaoAuthenticationProvider, has most of what we need, so a good approach would be to simply extend it and modify only what is necessary. Unfortunately this is not possible, as retrieveUser, the method we would be interested in extending, is final. This is somewhat unintuitive (there is a JIRA discussing the issue): it looks like the design intention here is simply to provide an alternative implementation. That is not ideal, but not a major problem either; our RestAuthenticationProvider copy-pastes most of the implementation of DaoAuthenticationProvider and rewrites what it needs to, namely the retrieval of the principal from the service:

@Override
protected UserDetails retrieveUser(String name, UsernamePasswordAuthenticationToken auth) {
    String password = auth.getCredentials().toString();
    UserDetails loadedUser = null;
    try {
        ResponseEntity<Principal> authenticationResponse =
            authenticationApi.authenticate(name, password);
        if (authenticationResponse.getStatusCode().value() == 401) {
            return new User("wrongUsername", "wrongPass",
                Lists.<GrantedAuthority> newArrayList());
        }
        Principal principalFromRest = authenticationResponse.getBody();
        Set<String> privilegesFromRest = Sets.newHashSet();
        // fill in the privilegesFromRest from the Principal
        String[] authoritiesAsArray =
            privilegesFromRest.toArray(new String[privilegesFromRest.size()]);
        List<GrantedAuthority> authorities =
            AuthorityUtils.createAuthorityList(authoritiesAsArray);
        loadedUser = new User(name, password, true, authorities);
    } catch (Exception ex) {
        throw new AuthenticationServiceException(ex.getMessage(), ex);
    }
    return loadedUser;
}

Let's start from the beginning: the HTTP communication with the REST Service. This is handled by the authenticationApi, a simple API providing the authenticate operation for the actual service. The operation itself can be implemented with any library capable of HTTP; in this case, the implementation is using RestTemplate:

public ResponseEntity<Principal> authenticate(String username, String pass) {
    HttpEntity<Principal> entity = new HttpEntity<Principal>(createHeaders(username, pass));
    return restTemplate.exchange(authenticationUri, HttpMethod.GET, entity, Principal.class);
}

HttpHeaders createHeaders(String username, String password) {
    HttpHeaders acceptHeaders = new HttpHeaders() {
        {
            set(com.google.common.net.HttpHeaders.ACCEPT,
                MediaType.APPLICATION_JSON.toString());
        }
    };
    String authorization = username + ":" + password;
    String basic = new String(Base64.encodeBase64(
        authorization.getBytes(Charset.forName("US-ASCII"))));
    acceptHeaders.set("Authorization", "Basic " + basic);

    return acceptHeaders;
}

A FactoryBean can be used to set up the RestTemplate in the context. Next, if the authentication request resulted in an HTTP 401 Unauthorized, most likely because of incorrect credentials from the client, a principal with wrong credentials is returned so that the Spring Security authentication process can refuse them:

return new User("wrongUsername", "wrongPass", Lists.<GrantedAuthority> newArrayList());

Finally, the Spring Security Principal needs some authorities: the privileges which that particular principal will have and use locally after the authentication process. The /authentication operation had retrieved a full principal, including privileges, so these need to be extracted from the result of the request and transformed into GrantedAuthority objects, as required by Spring Security.
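As an aside, the Basic Authorization header assembled in createHeaders can also be built with only the JDK's java.util.Base64 (the commons-codec Base64 used above does the same job); a minimal sketch with an illustrative class name:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthSketch {

    // Builds the value of the HTTP "Authorization" header for Basic authentication:
    // "Basic " followed by base64("username:password").
    static String basicAuthHeader(String username, String password) {
        String credentials = username + ":" + password;
        String encoded = Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.US_ASCII));
        return "Basic " + encoded;
    }
}
```

For the credentials used in this article, basicAuthHeader("geek", "secret") yields the header value the UAA service expects on every request.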
The details of how these privileges are stored are irrelevant here (they could be stored as simple Strings or as a complex Role-Privilege structure), but regardless of the details, we only need to use their names to construct the GrantedAuthority objects. After the final Spring Security principal is created, it is returned to the standard authentication process:

List<GrantedAuthority> authorities = AuthorityUtils.createAuthorityList(authoritiesAsArray);
loadedUser = new User(name, password, true, authorities);

4. Testing the Authentication Service

Writing an integration test that consumes the authentication REST service on the happy path is straightforward enough:

@Test
public void whenAuthenticating_then200IsReceived() {
    // When
    ResponseEntity<Principal> response =
        authenticationRestTemplate.authenticate("admin", "adminPass");

    // Then
    assertThat(response.getStatusCode().value(), is(200));
}

Following this simple test, more complex integration tests can be implemented as well; however, this is outside of the scope of this post.

5. Conclusion

This article explained how to authenticate against a REST Service instead of doing so against a local system such as a database. For a full implementation of a secure RESTful service that can be used as an authentication provider, check out the github project.

Reference: Authentication against a REST Service with Spring Security from our JCG partner Eugen Paraschiv at the baeldung blog.

Google Guava BiMaps

Next up on my tour of Guava is the BiMap, another useful collection type. It's pretty simple really: a BiMap is simply a two-way map.

Inverting a Map

A normal Java map is a set of keys and values, and you can look up values by key. Very useful; e.g. let's say I wanted to create a (very rudimentary) British English to American English dictionary:

Map<String, String> britishToAmerican = Maps.newHashMap();
britishToAmerican.put("aubergine", "eggplant");
britishToAmerican.put("courgette", "zucchini");
britishToAmerican.put("jam", "jelly");

But what if you want an American to British dictionary? Well, you could write some code to invert the map:

// Generic method to reverse a map.
public <S, T> Map<T, S> getInverseMap(Map<S, T> map) {
    Map<T, S> inverseMap = new HashMap<T, S>();
    for (Entry<S, T> entry : map.entrySet()) {
        inverseMap.put(entry.getValue(), entry.getKey());
    }
    return inverseMap;
}

It'll do the job, but there are several complications you might need to think about. How do we handle duplicate values in the original map? At the moment they'll be silently overwritten in the reverse map. And what if we want to put a new entry in the reversed map? We'd also have to update the original map, which could get annoying.

BiMaps

Well, guess what? This is the sort of situation a BiMap is designed for! Here's how you might use it:

BiMap<String, String> britishToAmerican = HashBiMap.create();

// Initialise and use just like a normal map
britishToAmerican.put("aubergine", "eggplant");
britishToAmerican.put("courgette", "zucchini");
britishToAmerican.put("jam", "jelly");

System.out.println(britishToAmerican.get("aubergine")); // eggplant

BiMap<String, String> americanToBritish = britishToAmerican.inverse();

System.out.println(americanToBritish.get("eggplant")); // aubergine
System.out.println(americanToBritish.get("zucchini")); // courgette

Pretty simple really, but there are a few things to notice.
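The duplicate-value problem of the hand-rolled getInverseMap above can be made explicit: a stricter plain-Java variant that fails fast on duplicates (the behaviour a BiMap gives you for free) might look like this (class and method names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class InverseMapSketch {

    // Inverts a map, throwing (as BiMap.put would) if two keys share a value.
    static <S, T> Map<T, S> strictInverse(Map<S, T> map) {
        Map<T, S> inverse = new HashMap<>();
        for (Map.Entry<S, T> entry : map.entrySet()) {
            S previous = inverse.put(entry.getValue(), entry.getKey());
            if (previous != null) {
                throw new IllegalArgumentException(
                        "duplicate value: " + entry.getValue());
            }
        }
        return inverse;
    }
}
```

Writing this yourself once makes it easy to appreciate why BiMap bakes the uniqueness check into put itself rather than deferring it to inversion time.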
Enforcing uniqueness

Firstly, the BiMap enforces uniqueness of its values, and will give you an IllegalArgumentException if you try to insert a duplicate value, i.e.

britishToAmerican.put("pudding", "dessert");
britishToAmerican.put("sweet", "dessert"); // IllegalArgumentException.

If you need to add a value that has already been added, there's a forcePut method that will overwrite the entry with the duplicate value.

britishToAmerican.put("pudding", "dessert");
britishToAmerican.forcePut("sweet", "dessert"); // Overwrites the previous entry
System.out.println(britishToAmerican.get("sweet")); // dessert
System.out.println(britishToAmerican.get("pudding")); // null

The inverse method

The other crucial thing to understand is the inverse method, which returns the inverse BiMap, i.e. a map with the keys and values switched round. Now this inverse map isn't just a new map, such as my earlier getInverseMap method might have created. It's actually a view of the original map. This means that any subsequent changes to the inverse map will affect the original map!

americanToBritish.put("potato chips", "crisps");
System.out.println(britishToAmerican.containsKey("crisps")); // true
System.out.println(britishToAmerican.get("crisps")); // potato chips

So that's the BiMap; like I said, pretty simple. As usual there are several implementations available, and as ever I recommend taking a look at the full API documentation: http://guava-libraries.googlecode.com/svn/tags/release09/javadoc/com/google/common/collect/BiMap.html

Reference: Google Guava BiMaps from our JCG partner Tom Jefferys at the Tom's Programming Blog blog.

The frustrations of the development

It’s always a busy period before Christmas. Clients have ideas and requests, we [at 2dwarfs] have started projects of our own, so we have interference. And although it’s always much fun to develop projects of your own (I’d be dishonest and lying if I said otherwise, ask anyone), we must stick to our professional ideology of building trust with our clients and meeting their needs. Anyway, we are slowly finishing what’s been the busiest November in the past few years and we have some experience to share with you. And why do we want to share it? Because we believe that sharing is what makes technology move forward, be it an idea, experience, source code or motivation. And we hope it’s obvious we strongly stand by that.       What’s for some a month of finishing a project is for others two weeks of hard work and two weeks of creating and fixing frustrations. And why does that happen? Well, it’s because of something that, more or less, everyone in the process is involved with. Whether it’s the project management (which, trying to be defensive here, most of the time in our case is someone on our client’s side), the development process (that is, us resolving constraints, change requests, broken laptops, downed networks), or the indirect client (the client of our client) changing the requests for the software you develop. I have the feeling that mobile development is often underestimated because of its size alone. It’s a mobile application, what can go wrong? Well, many things. And if you change one brick while the walls of the construction are being built, you get a tilted house. Luckily we are joyful whenever crap comes our way, because we learn, and that is something we did this last period. We had our notes applications open, writing down notes from the process until the final solution was reached. More or less they are notes of frustration, and I apologize from the very beginning for the tone you may face here, but see it as my shrink, a therapy. 
Let’s go one by one. ‘But my caching implementation!!!’ Ask for a specification or functionality list and you will be more successful in the end. Why? Because you will be able to bring a solution that covers every connection of the final product. It will reach every entity of the skeleton, making it adaptable to changes, expansions and further (new) features. Yeah, that sounds nice, but it’s not always the case. You get the idea described in an email and you understand it, you know how to develop it, and as the deadline approaches the frequency of emails from your client grows. ‘Let’s put more spacing here…’, ‘Let’s make it thinner’, ‘..let’s make it more yellow’, etc. You get to deal with small screw-ups, forgetting to finish something bigger and more important. And yes, the user (or the client) is not the one that will be happy with your caching code that runs behind the scenes, no matter how cleverly developed it is. The only one that will be happy and satisfied with such a feature in the first versions is you alone. Until the client understands the importance of having a great architecture behind the application, don’t expect him to be happy about it, or to pay less attention to the bouncy animation you implemented in 10 minutes. That is why it’s important to have something more than a version of the application for some other platform, browser or system. It is really important to tell the client that the wireframes are not enough to develop everything that’s hidden in his head. What’s hidden has to come out and be transformed into a functionality list that will be followed during the development. It might sound like the perfect scenario, which has never happened to me anyway, but at least that is what I believe is the right way of software development. Once you have the specification for the software, lock it! ‘Make the spacing same as here…’ Lock the list, put the key in your throat and swallow. 
If the development process takes 4 weeks, do not unlock this list until the final week (well, ok, then magic happens and you’re fscked!). Try to answer diplomatically any question that starts with ‘can…’ during those first 3/4 of the development time. ‘Can we make it bigger…’ – ‘yes, we can, but after the main development is finished’. Others would go further and be arrogant: ‘Can we make the app more optimized?’ – ‘No you cannot, but I can in the bug-fixing period’. Whatever your attitude is, just demand a lock on the feature list while you’re developing the backbone. ‘…well, the app looks too small on iPad!’ Do you ask whether the app you’re about to start developing should support different hardware platforms? ‘Mobile’ is not the only case where you should ask this; it applies to any type of development. Have you ever got a ‘remark’ that the software you created was great, but it’d be much better if Internet Explorer were covered, or Mac? Or the latest iPad? Well, if yes, it’s your fault. Not defining the app’s scope is a common mistake made by many, and again the fault lies only with the guy developing the project. ‘But it’s slow!!’ Fragmentation. A really important issue when it comes to Android. Oh boy, the clients will find the crappiest devices out there. And they will tell you your work sucks because ‘this view is overlaying that view’ (and if you also come across those with a negative attitude to the technology you develop for, it gets even worse). Yes, Android has so many flavors, versions, subversions, custom ROMs, and it runs on so many devices in Europe, Asia, America… it’s, more or less, a cataclysm. Many Android developers know how painful it is to cover Android versions from e.g. Gingerbread (>=2.3) onwards. Why? First, because many old devices got updated to 2.3, and because many parts of Android’s API suck when compared to those in the latest SDKs. 
There are technical examples that I would not like to mention (for the sake of readability) where one block of code works like ‘this’ on Gingerbread and like ‘????’ on Jelly Bean. It’s crazy. And then you have the expectation that the massive memory-consuming application you develop will work on every device out there, starting from the first Android device up to the Nexus 4. Nope. You cannot do that. Because StarCraft 2 would not work on my P2 from 1999. That’s why. Limit the set of devices and then limit the hardware specifications by CPU and RAM of the device. Trust me, you’ll save yourself lots of frustration, lost nerves and white hair while developing for Android. ‘I can rename the file to header.9.png if you want?’ Whether it’s a music file in a format not supported on the platform you develop for, or a graphical resource (icon, background, color), always get the assets for your platform from the client. If not, well, charge for creating them. It’s not a big deal for any developer to create a 9-patch background themselves. But that is not what you’re supposed to do, and I know it’s no fun to do it. But then you have hundreds of files that were used for previous projects and it’s expected that the same ones will work with your project. Yes, I know the pain, Android developers, when you’re supposed to port that app to Android and you’re given the non-proportional, weird and twisted graphics without their states (pressed/clicked/normal), with the expectation that you’ll make them work since, well, ‘they work on iOS, so what’s the big deal?’. Throw that away. Ask for Android-specific assets. Ask for assets for the platform you develop for. And yes, the same applies to iOS developers that get Android resources from their clients. Just ask for assets which are applicable to iOS projects. ‘People would say, what looks good when shown in IE?’ One thing you have to respect: the tools other people make money with. 
I am really attached to my laptop, and whenever someone says bad things about it being a small, lousy and slow model, it makes me mad. Well, ok, mad inside. Because that is the tool I make my money with, and so far it works just fine. I don’t mock your spreadsheet tables, so stop mocking the platform you’re ordering software for. Whether it’s iOS, Android, Windows Phone or any of the many desktop operating systems. It’s not the platform that makes the software bad. It’s bad planning, bad implementation, a bad graphical designer, a bad idea. Bad professionals. ‘Do it yourself if you can do better’ is the best answer to conclusions like ‘crappy Windows Phone’ or ‘ugly Android’ or ‘bloated Ubuntu’. Much easier now everything is out. I am not saying that developers should be arrogant and rude (well, most of them are, whatever I say). I am just trying to explain that developers, just like any other professionals, should stand by what they are doing, because they know it best. Otherwise other people would not put money on them. And if they know it best, they should set part of the rules of the game.   Reference: The frustrations of the development from our JCG partner Aleksandar Balalovski at the 2Dwarfs blog. ...

Increasing heap size – beware of the Cobra Effect

The term ‘Cobra effect’ stems from an anecdote set at the time of British rule of colonial India. The British government was concerned about the number of venomous cobra snakes. The government therefore offered a reward for every dead snake. Initially this was a successful strategy, as large numbers of snakes were killed for the reward. Eventually, however, Indians began to breed cobras for the income. When this was realized the reward was canceled, but the cobra breeders set the snakes free and the wild cobras consequently multiplied. The apparent solution to the problem made the situation even worse.       So how is Java heap size related to colonial India and venomous snakes? Bear with me and I’ll guide you through the analogy using a story from real life as a reference. You have created an amazing application. So amazing that it becomes truly popular, and the sheer amount of traffic to your new service starts to bring your application to its knees. Digging through the performance metrics, you decide that the amount of heap available for your application will soon become a bottleneck. So you take the time to launch new infrastructure with six times the original heap. You test your application to verify that it works. You then launch it on the new infrastructure. And immediately complaints start flowing in – your application has become less responsive than with your original tiny 2GB heap. Some of your users face delays of minutes when waiting for your application to respond. What has just happened? There can be numerous reasons, of course. But let’s focus on the most likely suspect – the heap size change. This has several possible side effects, like extended cache warmup times, problems with fragmentation, etc. But from the symptoms experienced you are most likely facing latency problems in your application during full GC runs. 
What this means is – as Java is a garbage-collected language – your heap is regularly being garbage collected by JVM-internal processes. And as one might expect – if the janitor has a larger room to clean, it tends to take more time. The very same applies to cleaning unused objects from memory. When running applications on small heaps (below 4GB) you often do not need to think about GC internals. But when increasing heap sizes to tens of gigabytes, you should definitely be aware of the potential stop-the-world pauses induced by the full GC. The very same pauses also existed with small heap sizes, but their length was significantly shorter – pauses that now last for more than a minute might originally have spanned only a few hundred milliseconds. So what can you do in cases when you really need more heap for your application? The first option would be to consider scaling horizontally instead of vertically. What this means for our current case is – if your application is either stateless or easily partitionable, then just add more small nodes and balance the load between them. In this case you could stick with 32-bit architectures, which also impose a smaller memory footprint. If horizontal scaling is not possible, then you should focus on your GC configuration. If latency is what you are after, then you should forget about the throughput-oriented stop-the-world GCs and start looking for alternatives. Which you will soon find to be limited to the Concurrent Mark and Sweep (CMS) or Garbage-First (G1) collectors. The sad news is that your best choice between those two collector types and other heap configuration parameters can only be found by experimenting. 
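When experimenting, the standard JMX management beans give a quick, rough read on what each collector is costing. A minimal sketch (nothing here is specific to any one collector; the class name is my own):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Sketch: printing cumulative collection counts and total pause time
// for every garbage collector active in the running JVM.
public class GcStats {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc :
                ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms total%n",
                    gc.getName(),
                    gc.getCollectionCount(),   // -1 if undefined
                    gc.getCollectionTime());   // approximate elapsed ms
        }
    }
}
```

Run the same load against each candidate collector configuration and compare the numbers; for per-pause latency rather than totals, GC logging gives finer detail.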
So do not make choices just by reading something; go out there and try it out with your actual production load. But be aware of their limitations as well – both of those collectors impose a throughput overhead on your application – G1 in particular tends to show worse throughput numbers than the stop-the-world alternatives. And when the CMS garbage collector is not fast enough to finish its operation before the tenured generation is full, it falls back to the standard stop-the-world GC. So you can still face pauses of 30 seconds or more for heaps of 16 GB and beyond. If you cannot scale horizontally or fail to achieve the required latency results with the garbage collectors shipping with Oracle’s JVM, then you might also look into the Zing JVM built by Azul Systems. One of the features making Zing stand out is its pauseless garbage collector (C4), which might be exactly what you are looking for. Full disclosure though – we haven’t yet tried C4 in practice. But it does sound cool. Your last option is something for the true hardcore guys out there. You can allocate memory outside the heap. Those allocations obviously aren’t visible to the garbage collector and thus will not be collected. It might sound scary, but since Java 1.4 we have had access to the java.nio.ByteBuffer class, which provides an allocateDirect() method for off-heap memory allocations. This allows us to create very large data structures without bumping into multi-second GC pauses. This solution is not too uncommon – many BigMemory implementations use ByteBuffers under the hood, Terracotta BigMemory and Apache DirectMemory for example. To conclude – even when making changes backed by good intentions, be aware of both the alternatives and the consequences. Just like the government of India back in the day publishing rewards for dead cobras.   Reference: Increasing heap size – beware of the Cobra from our JCG partner Nikita Salnikov Tarnovski at the Plumbr Blog blog. ...
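For the off-heap option, a minimal sketch of allocateDirect() usage (the buffer size and class name are arbitrary example values, not from the article):

```java
import java.nio.ByteBuffer;

// Sketch: storing data off-heap with a direct ByteBuffer. Direct
// buffers are allocated outside the Java heap, so the bytes they
// hold are never scanned or moved by the garbage collector.
public class OffHeapExample {
    public static void main(String[] args) {
        ByteBuffer offHeap = ByteBuffer.allocateDirect(64 * 1024 * 1024);

        offHeap.putLong(0, 42L);          // write at absolute index 0
        long value = offHeap.getLong(0);  // read it back

        System.out.println(offHeap.isDirect()); // true
        System.out.println(value);              // 42
    }
}
```

The trade-off is that you take over bookkeeping yourself: serialization into bytes, index management, and capacity limits all become your problem, which is exactly what the BigMemory libraries mentioned above wrap for you.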

The Java Advent Calendar

The Java Advent Calendar is a winter-festivities-themed blog featuring (at least) one Java-related post per day between the 1st and 24th of December. The concept is rooted in the tradition of the advent calendar – getting a small gift each day while waiting for Christmas – but it is not a religious endeavor. We want to give people who have something interesting to say related to Java an outlet and a motivation to do so (similar initiatives exist for other domains in IT). The initiative was born in a split second and now we have just crossed our half-way mark – we posted the 13th article. We cover a wide variety of topics, from JMX through computer vision, rounding off with Java sound. But this is not all! Stay tuned for a lot more interesting articles, including a pair of posts about the first two languages on the JVM (besides Java) which are still in use today (you will be able to win many bets with your friends with these two)! I would like to take this opportunity to thank all the people who volunteered to post: without You this wouldn’t be possible! I would also like to thank the people and sites (like Java Code Geeks) who help promote the initiative: Thank You! Finally I would like to ask for your support, Dear Reader: if you like the idea, please spread the word. The posts are all CC-BY 3.0 licensed and you can share them freely as long as you specify their origin. We are present on most social networks, so please like, +1 and RT us. Also, if you have an idea for a post you would like to write, contact me, we might be able to squeeze it in! Happy winter festivities everyone! Attila Balazs ...

How to Create Extensible Java Applications

Many applications benefit from being open to extension. This post describes two ways to implement such extensibility in Java. Extensible Applications Extensible applications are applications whose functionality can be extended without having to recompile them, and sometimes even without having to restart them. This may happen by simply adding a jar to the classpath, or by a more involved installation procedure.    One example of an extensible application is the Eclipse IDE. It allows extensions, called plug-ins, to be installed so that new functionality becomes available. For instance, you could install a Source Code Management (SCM) plug-in to work with your favorite SCM. As another example, imagine an implementation of the XACML specification for authorization. The “X” in XACML stands for “eXtensible” and the specification defines a number of extension points, like attribute and category IDs, combining algorithms, functions, and Policy Information Points. A good XACML implementation will allow you to extend the product by providing a module that implements the extension point. Service Provider Interface Oracle’s solution for creating extensible applications is the Service Provider Interface (SPI). In this approach, an extension point is defined by an interface: package com.company.application;public interface MyService { // ... } You can find all extensions for such an extension point by using the ServiceLoader class: public class Client {public void useService() { Iterator<MyService> services = ServiceLoader.load( MyService.class).iterator(); while (services.hasNext()) { MyService service = services.next(); // ... use service ... } }} An extension for this extension point can be any class that implements that interface: package com.company.application.impl;public class MyServiceImpl implements MyService { // ... } The implementation class must be publicly available and have a public no-arg constructor. 
However, that’s not enough for the ServiceLoader class to find it. You must also create a file named after the fully qualified name of the extension point interface in META-INF/services. In our example, that would be: META-INF/services/com.company.application.MyService This file must be UTF-8 encoded, or ServiceLoader will not be able to read it. Each line of this file should contain the fully qualified name of one extension implementing the extension point, for instance: com.company.application.impl.MyServiceImpl OSGi Services The SPI approach described above only works when the extension point files are on the classpath. In an OSGi environment, this is not the case. Luckily, OSGi has its own solution to the extensibility problem: OSGi services. With Declarative Services, OSGi services are easy to implement, especially when using the annotations of the Apache Felix Service Component Runtime (SCR): @Service @Component public class MyServiceImpl implements MyService { // ... } With OSGi and SCR, it is also very easy to use a service: @Component public class Client {@Reference private MyService myService;protected void bindMyService(MyService bound) { myService = bound; }protected void unbindMyService(MyService bound) { if (myService == bound) { myService = null; } }public void useService() { // ... use myService ... }} Best of Both Worlds So which of the two options should you choose? It depends on your situation, of course. When you’re in an OSGi environment, the choice should obviously be OSGi services. If you’re not in an OSGi environment, you can’t use those, so you’re left with SPI. But what if you’re writing a framework or library and you don’t know whether your code will be used in an OSGi or classpath-based environment? You will want to serve as many users of your library as possible, so the best would be to support both models. This can be done if you’re careful. 
Note that adding a Declarative Services service component file like OSGI-INF/myServiceComponent.xml to your jar (which is what the SCR annotations end up doing when they are processed) will only work in an OSGi environment, but is harmless outside OSGi. Likewise, the SPI service file will work in a traditional classpath environment, but is harmless in OSGi. So the two approaches are actually mutually exclusive and in any given environment, only one of the two approaches will find anything. Therefore, you can write code that uses both approaches. It’s a bit of duplication, but it allows your code to work in both types of environments, so you can have your cake and eat it too.   Reference: How to Create Extensible Java Applications from our JCG partner Remon Sinnema at the Secure Software Development blog. ...
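Putting the SPI half together, the lookup side reduces to a few lines. A sketch using the article’s example interface name (no provider file is bundled here, so the loop finds nothing until one is registered on the classpath):

```java
import java.util.ServiceLoader;

// Sketch of the ServiceLoader lookup described above. With no
// META-INF/services/... provider file on the classpath, the loop
// simply iterates zero times instead of failing.
public class SpiSketch {
    public interface MyService {
        String name();
    }

    public static void main(String[] args) {
        // ServiceLoader reads the META-INF/services file named after
        // the interface and instantiates each implementation listed.
        for (MyService service : ServiceLoader.load(MyService.class)) {
            System.out.println("Found provider: " + service.name());
        }
    }
}
```

The graceful empty-iteration behavior is what makes the dual SPI/OSGi packaging safe: in an OSGi container the services file is simply never found, and the OSGi service registry takes over.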

Of Hacking Enums and Modifying ‘final static’ Fields

In this newsletter, originally published in The Java Specialists’ Newsletter Issue 161, we examine how it is possible to create enum instances in the Sun JDK, by using the reflection classes from the sun.reflect package. This will obviously only work for Sun’s JDK. If you need to do this on another JVM, you’re on your own. This all started with an email from Ken Dobson of Edinburgh, which pointed me in the direction of the sun.reflect.ConstructorAccessor, which he claimed could be used to construct enum instances. My previous approach (newsletter #141) did not work in Java 6.       I was curious why Ken wanted to construct enums. Here is how he wanted to use it: public enum HumanState { HAPPY, SAD }public class Human { public void sing(HumanState state) { switch (state) { case HAPPY: singHappySong(); break; case SAD: singDirge(); break; default: new IllegalStateException("Invalid State: " + state); } }private void singHappySong() { System.out.println("When you're happy and you know it ..."); }private void singDirge() { System.out.println("Don't cry for me Argentina, ..."); } } The above code needs a unit test. Did you spot the mistake? If you did not, go over the code again with a fine-tooth comb to try to find it. When I first saw this, I did not spot the mistake either. When we make bugs like this, the first thing we should do is produce a unit test that shows it. However, in this case we cannot cause the default case to happen, because the HumanState only has the HAPPY and SAD enums. Ken’s discovery allowed us to make an instance of an enum by using the ConstructorAccessor class from the sun.reflect package. 
It would involve something like: Constructor cstr = clazz.getDeclaredConstructor( String.class, int.class ); ReflectionFactory reflection = ReflectionFactory.getReflectionFactory(); Enum e = reflection.newConstructorAccessor(cstr).newInstance("BLA",3); However, if we just do that, we end up with an ArrayIndexOutOfBoundsException, which makes sense when we see how the Java compiler converts the switch statement into byte code. Taking the above Human class, here is what the decompiled code looks like (thanks to Pavel Kouznetsov’s JAD): public class Human { public void sing(HumanState state) { static class _cls1 { static final int $SwitchMap$HumanState[] = new int[HumanState.values().length]; static { try { $SwitchMap$HumanState[HumanState.HAPPY.ordinal()] = 1; } catch(NoSuchFieldError ex) { } try { $SwitchMap$HumanState[HumanState.SAD.ordinal()] = 2; } catch(NoSuchFieldError ex) { } } }switch(_cls1.$SwitchMap$HumanState[state.ordinal()]) { case 1: singHappySong(); break; case 2: singDirge(); break; default: new IllegalStateException("Invalid State: " + state); break; } } private void singHappySong() { System.out.println("When you're happy and you know it ..."); } private void singDirge() { System.out.println("Don't cry for me Argentina, ..."); } } You can see immediately why we would get an ArrayIndexOutOfBoundsException, thanks to the inner class _cls1. My first attempt at fixing this problem did not result in a decent solution. I tried to modify the $VALUES array inside the HumanState enum. However, I just bounced off Java’s protective code. You can modify final fields, as long as they are non-static. This restriction seemed artificial to me, so I set off on a quest to discover the holy grail of static final fields. Again, it was hidden in the chamber of sun.reflect. Setting ‘final static’ Fields Several things are needed in order to set a final static field. First off, we need to get the Field object using normal reflection. 
If we passed this to the FieldAccessor, we would just bounce off the security code, since we are dealing with a static final field. Secondly, we change the modifiers field value inside the Field object instance to not be final. Thirdly, we pass the doctored field to the FieldAccessor in the sun.reflect package and use this to set it. Here is my ReflectionHelper class, which we can use to set final static fields via reflection: import sun.reflect.*; import java.lang.reflect.*;public class ReflectionHelper { private static final String MODIFIERS_FIELD = "modifiers";private static final ReflectionFactory reflection = ReflectionFactory.getReflectionFactory();public static void setStaticFinalField( Field field, Object value) throws NoSuchFieldException, IllegalAccessException { // we mark the field to be public field.setAccessible(true); // next we change the modifier in the Field instance to // not be final anymore, thus tricking reflection into // letting us modify the static final field Field modifiersField = Field.class.getDeclaredField(MODIFIERS_FIELD); modifiersField.setAccessible(true); int modifiers = modifiersField.getInt(field); // blank out the final bit in the modifiers modifiers &= ~Modifier.FINAL; modifiersField.setInt(field, modifiers); FieldAccessor fa = reflection.newFieldAccessor( field, false ); fa.set(null, value); } } With this ReflectionHelper, I could thus set the $VALUES array inside the enum to contain my new enum. This worked, except that I had to do this before the Human class was loaded for the first time. This would introduce a race condition into our test cases. By themselves each test would work, but collectively they could fail. Not a good scenario! Rewiring Enum Switches The next idea was to rewire the actual switch statement’s $SwitchMap$HumanState field. It would be fairly easy to find this field inside the anonymous inner class. All you need is the prefix $SwitchMap$ followed by the enum class name. 
If the enum is switched several times in one class, then the inner class is only created once. One of the other solutions that I wrote yesterday did a check on whether our switch statement was dealing with all the possible cases. This would be useful in discovering bugs when a new type is introduced into the system. I discarded that particular solution, but you should be able to easily recreate that based on the EnumBuster that I will show you later. The Memento Design Pattern I recently rewrote my Design Patterns Course (warning, the website might not have the up-to-date structure up yet – please enquire for more information), to take into account the changes in Java, to throw away some outdated patterns and to introduce some that I had excluded previously. One of the ‘new’ patterns was the Memento, often used with undo functionality. I thought it would be a good pattern to use to undo the damage done to the enum in our great efforts to test our impossible case. Publishing a Specialists’ newsletter gives me certain liberties. I do not have to explain every line that I write. So, without further ado, here is my EnumBuster class, which allows you to make enums, add them to the existing values[], delete enums from the array, whilst at the same time maintaining the switch statement of any class that you specify. 
import sun.reflect.*;import java.lang.reflect.*; import java.util.*;public class EnumBuster<E extends Enum<E>> { private static final Class[] EMPTY_CLASS_ARRAY = new Class[0]; private static final Object[] EMPTY_OBJECT_ARRAY = new Object[0];private static final String VALUES_FIELD = "$VALUES"; private static final String ORDINAL_FIELD = "ordinal";private final ReflectionFactory reflection = ReflectionFactory.getReflectionFactory();private final Class<E> clazz;private final Collection<Field> switchFields;private final Deque<Memento> undoStack = new LinkedList<Memento>();/** * Construct an EnumBuster for the given enum class and keep * the switch statements of the classes specified in * switchUsers in sync with the enum values. */ public EnumBuster(Class<E> clazz, Class... switchUsers) { try { this.clazz = clazz; switchFields = findRelatedSwitchFields(switchUsers); } catch (Exception e) { throw new IllegalArgumentException( "Could not create the class", e); } }/** * Make a new enum instance, without adding it to the values * array and using the default ordinal of 0. */ public E make(String value) { return make(value, 0, EMPTY_CLASS_ARRAY, EMPTY_OBJECT_ARRAY); }/** * Make a new enum instance with the given ordinal. */ public E make(String value, int ordinal) { return make(value, ordinal, EMPTY_CLASS_ARRAY, EMPTY_OBJECT_ARRAY); }/** * Make a new enum instance with the given value, ordinal and * additional parameters. The additionalTypes is used to match * the constructor accurately. */ public E make(String value, int ordinal, Class[] additionalTypes, Object[] additional) { try { undoStack.push(new Memento()); ConstructorAccessor ca = findConstructorAccessor( additionalTypes, clazz); return constructEnum(clazz, ca, value, ordinal, additional); } catch (Exception e) { throw new IllegalArgumentException( "Could not create enum", e); } }/** * This method adds the given enum into the array * inside the enum class. 
If the enum already * contains that particular value, then the value * is overwritten with our enum. Otherwise it is * added at the end of the array. * * In addition, if there is a constant field in the * enum class pointing to an enum with our value, * then we replace that with our enum instance. * * The ordinal is either set to the existing position * or to the last value. * * Warning: This should probably never be called, * since it can cause permanent changes to the enum * values. Use only in extreme conditions. * * @param e the enum to add */ public void addByValue(E e) { try { undoStack.push(new Memento()); Field valuesField = findValuesField();// we get the current Enum[] E[] values = values(); for (int i = 0; i < values.length; i++) { E value = values[i]; if (value.name().equals(e.name())) { setOrdinal(e, value.ordinal()); values[i] = e; replaceConstant(e); return; } }// we did not find it in the existing array, thus // append it to the array E[] newValues = Arrays.copyOf(values, values.length + 1); newValues[newValues.length - 1] = e; ReflectionHelper.setStaticFinalField( valuesField, newValues);int ordinal = newValues.length - 1; setOrdinal(e, ordinal); addSwitchCase(); } catch (Exception ex) { throw new IllegalArgumentException( "Could not set the enum", ex); } }/** * We delete the enum from the values array and set the * constant pointer to null. * * @param e the enum to delete from the type. 
 * @return true if the enum was found and deleted;
 *         false otherwise
 */
public boolean deleteByValue(E e) {
  if (e == null) throw new NullPointerException();
  try {
    undoStack.push(new Memento());
    // we get the current E[]
    E[] values = values();
    for (int i = 0; i < values.length; i++) {
      E value = values[i];
      if (value.name().equals(e.name())) {
        E[] newValues =
            Arrays.copyOf(values, values.length - 1);
        System.arraycopy(values, i + 1,
            newValues, i, values.length - i - 1);
        for (int j = i; j < newValues.length; j++) {
          setOrdinal(newValues[j], j);
        }
        Field valuesField = findValuesField();
        ReflectionHelper.setStaticFinalField(
            valuesField, newValues);
        removeSwitchCase(i);
        blankOutConstant(e);
        return true;
      }
    }
  } catch (Exception ex) {
    throw new IllegalArgumentException(
        "Could not set the enum", ex);
  }
  return false;
}

/**
 * Undo the state right back to the beginning when the
 * EnumBuster was created.
 */
public void restore() {
  while (undo()) {
    //
  }
}

/**
 * Undo the previous operation.
 */
public boolean undo() {
  try {
    Memento memento = undoStack.poll();
    if (memento == null) return false;
    memento.undo();
    return true;
  } catch (Exception e) {
    throw new IllegalStateException("Could not undo", e);
  }
}

private ConstructorAccessor findConstructorAccessor(
    Class[] additionalParameterTypes,
    Class<E> clazz) throws NoSuchMethodException {
  Class[] parameterTypes =
      new Class[additionalParameterTypes.length + 2];
  parameterTypes[0] = String.class;
  parameterTypes[1] = int.class;
  System.arraycopy(
      additionalParameterTypes, 0,
      parameterTypes, 2, additionalParameterTypes.length);
  Constructor<E> cstr = clazz.getDeclaredConstructor(
      parameterTypes);
  return reflection.newConstructorAccessor(cstr);
}

private E constructEnum(Class<E> clazz,
                        ConstructorAccessor ca,
                        String value, int ordinal,
                        Object[] additional)
    throws Exception {
  Object[] parms = new Object[additional.length + 2];
  parms[0] = value;
  parms[1] = ordinal;
  System.arraycopy(
      additional, 0, parms, 2, additional.length);
  return clazz.cast(ca.newInstance(parms));
}

/**
 * The only time we ever add a new enum is at the end.
 * Thus all we need to do is expand the switch map arrays
 * by one empty slot.
 */
private void addSwitchCase() {
  try {
    for (Field switchField : switchFields) {
      int[] switches = (int[]) switchField.get(null);
      switches = Arrays.copyOf(
          switches, switches.length + 1);
      ReflectionHelper.setStaticFinalField(
          switchField, switches);
    }
  } catch (Exception e) {
    throw new IllegalArgumentException(
        "Could not fix switch", e);
  }
}

private void replaceConstant(E e)
    throws IllegalAccessException, NoSuchFieldException {
  Field[] fields = clazz.getDeclaredFields();
  for (Field field : fields) {
    if (field.getName().equals(e.name())) {
      ReflectionHelper.setStaticFinalField(
          field, e);
    }
  }
}

private void blankOutConstant(E e)
    throws IllegalAccessException, NoSuchFieldException {
  Field[] fields = clazz.getDeclaredFields();
  for (Field field : fields) {
    if (field.getName().equals(e.name())) {
      ReflectionHelper.setStaticFinalField(
          field, null);
    }
  }
}

private void setOrdinal(E e, int ordinal)
    throws NoSuchFieldException, IllegalAccessException {
  Field ordinalField = Enum.class.getDeclaredField(
      ORDINAL_FIELD);
  ordinalField.setAccessible(true);
  ordinalField.set(e, ordinal);
}

/**
 * Method to find the values field, set it to be accessible,
 * and return it.
 *
 * @return the values array field for the enum.
 * @throws NoSuchFieldException if the field could not be found
 */
private Field findValuesField()
    throws NoSuchFieldException {
  // first we find the static final array that holds
  // the values in the enum class
  Field valuesField = clazz.getDeclaredField(
      VALUES_FIELD);
  // we mark it to be public
  valuesField.setAccessible(true);
  return valuesField;
}

private Collection<Field> findRelatedSwitchFields(
    Class[] switchUsers) {
  Collection<Field> result = new ArrayList<Field>();
  try {
    for (Class switchUser : switchUsers) {
      Class[] clazzes = switchUser.getDeclaredClasses();
      for (Class suspect : clazzes) {
        Field[] fields = suspect.getDeclaredFields();
        for (Field field : fields) {
          if (field.getName().startsWith("$SwitchMap$"
              + clazz.getSimpleName())) {
            field.setAccessible(true);
            result.add(field);
          }
        }
      }
    }
  } catch (Exception e) {
    throw new IllegalArgumentException(
        "Could not fix switch", e);
  }
  return result;
}

private void removeSwitchCase(int ordinal) {
  try {
    for (Field switchField : switchFields) {
      int[] switches = (int[]) switchField.get(null);
      int[] newSwitches = Arrays.copyOf(
          switches, switches.length - 1);
      System.arraycopy(switches, ordinal + 1,
          newSwitches, ordinal,
          switches.length - ordinal - 1);
      ReflectionHelper.setStaticFinalField(
          switchField, newSwitches);
    }
  } catch (Exception e) {
    throw new IllegalArgumentException(
        "Could not fix switch", e);
  }
}

@SuppressWarnings("unchecked")
private E[] values()
    throws NoSuchFieldException, IllegalAccessException {
  Field valuesField = findValuesField();
  return (E[]) valuesField.get(null);
}

private class Memento {
  private final E[] values;
  private final Map<Field, int[]> savedSwitchFieldValues =
      new HashMap<Field, int[]>();

  private Memento() throws IllegalAccessException {
    try {
      values = values().clone();
      for (Field switchField : switchFields) {
        int[] switchArray = (int[]) switchField.get(null);
        savedSwitchFieldValues.put(switchField,
            switchArray.clone());
      }
    } catch (Exception e) {
      throw new IllegalArgumentException(
          "Could not create the class", e);
    }
  }

  private void undo()
      throws NoSuchFieldException, IllegalAccessException {
    Field valuesField = findValuesField();
    ReflectionHelper.setStaticFinalField(valuesField, values);

    for (int i = 0; i < values.length; i++) {
      setOrdinal(values[i], i);
    }

    // reset all of the constants defined inside the enum
    Map<String, E> valuesMap = new HashMap<String, E>();
    for (E e : values) {
      valuesMap.put(e.name(), e);
    }
    Field[] constantEnumFields = clazz.getDeclaredFields();
    for (Field constantEnumField : constantEnumFields) {
      E en = valuesMap.get(constantEnumField.getName());
      if (en != null) {
        ReflectionHelper.setStaticFinalField(
            constantEnumField, en);
      }
    }

    for (Map.Entry<Field, int[]> entry :
        savedSwitchFieldValues.entrySet()) {
      Field field = entry.getKey();
      int[] mappings = entry.getValue();
      ReflectionHelper.setStaticFinalField(field, mappings);
    }
  }
}
}

The class is quite long and probably still has some bugs. I wrote it en route from San Francisco to New York. Here is how we could use it to test our Human class:

import java.util.Arrays;

import junit.framework.TestCase;

public class HumanTest extends TestCase {
  public void testSingingAddingEnum() {
    EnumBuster<HumanState> buster =
        new EnumBuster<HumanState>(HumanState.class,
            Human.class);

    try {
      Human heinz = new Human();
      heinz.sing(HumanState.HAPPY);
      heinz.sing(HumanState.SAD);

      HumanState MELLOW = buster.make("MELLOW");
      buster.addByValue(MELLOW);
      System.out.println(Arrays.toString(HumanState.values()));

      try {
        heinz.sing(MELLOW);
        fail("Should have caused an IllegalStateException");
      } catch (IllegalStateException success) { }
    } finally {
      System.out.println("Restoring HumanState");
      buster.restore();
      System.out.println(Arrays.toString(HumanState.values()));
    }
  }
}

This unit test now shows the mistake in our Human.java file, shown earlier: we forgot to add the throw keyword!

When you're happy and you know it ...
Don't cry for me Argentina, ...
[HAPPY, SAD, MELLOW]
Restoring HumanState
[HAPPY, SAD]

AssertionFailedError: Should have caused an IllegalStateException
  at HumanTest.testSingingAddingEnum(HumanTest.java:23)

The EnumBuster class can do more than that. We can use it to delete enums that we don’t want. If we specify which classes contain the relevant switch statements, those will be maintained at the same time. Plus, we can undo right back to the initial state. Lots of functionality! One last test case before I sign off, in which we add the test class itself to the list of switch classes to maintain:

import junit.framework.TestCase;

public class EnumSwitchTest extends TestCase {
  public void testSingingDeletingEnum() {
    EnumBuster<HumanState> buster =
        new EnumBuster<HumanState>(HumanState.class,
            EnumSwitchTest.class);
    try {
      for (HumanState state : HumanState.values()) {
        switch (state) {
          case HAPPY:
          case SAD:
            break;
          default:
            fail("Unknown state");
        }
      }

      buster.deleteByValue(HumanState.HAPPY);
      for (HumanState state : HumanState.values()) {
        switch (state) {
          case SAD:
            break;
          case HAPPY:
          default:
            fail("Unknown state");
        }
      }

      buster.undo();
      buster.deleteByValue(HumanState.SAD);
      for (HumanState state : HumanState.values()) {
        switch (state) {
          case HAPPY:
            break;
          case SAD:
          default:
            fail("Unknown state");
        }
      }

      buster.deleteByValue(HumanState.HAPPY);
      for (HumanState state : HumanState.values()) {
        switch (state) {
          case HAPPY:
          case SAD:
          default:
            fail("Unknown state");
        }
      }
    } finally {
      buster.restore();
    }
  }
}

The EnumBuster even maintains the constants: if you remove an enum constant from values(), it will blank out the final static field; if you add it back, it will set the field to the new value. It was thoroughly entertaining to use the ideas by Ken Dobson to play with reflection in a way that I did not know was possible. (Any Sun engineers reading this, please don’t plug these holes in future versions of Java!)

Kind regards

Heinz

JavaSpecialists offers all of the courses onsite at your company.
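An aside on the listing above: the detour through sun.reflect.ConstructorAccessor in findConstructorAccessor is no affectation. Plain Constructor.newInstance checks the ENUM modifier bit and flatly refuses to instantiate enum types, even after setAccessible(true) — which is exactly why EnumBuster has to drop down to JDK internals (internals that newer Java versions may encapsulate or remove). A minimal sketch of the refusal, using only standard Java; the class and enum names here are invented for the demo:

```java
import java.lang.reflect.Constructor;

public class EnumConstructorDemo {
    enum Mood { HAPPY, SAD }

    public static void main(String[] args) throws Exception {
        // Every enum constructor secretly takes (String name, int ordinal)
        // before its own declared parameters.
        Constructor<Mood> cstr =
            Mood.class.getDeclaredConstructor(String.class, int.class);
        cstr.setAccessible(true);
        try {
            cstr.newInstance("MELLOW", 2);
            System.out.println("reflective enum creation succeeded?!");
        } catch (IllegalArgumentException refused) {
            // Constructor.newInstance rejects enum classes outright.
            System.out.println("refused: " + refused.getMessage());
        }
    }
}
```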
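A related aside: it is worth spelling out why findValuesField digs out the private array named by VALUES_FIELD (javac's synthetic $VALUES array — a compiler implementation detail) rather than simply calling values(). The reason is that values() hands back a defensive clone on every call, so mutating the returned array changes nothing in the enum class. A tiny sketch, with names invented for the demo:

```java
public class ValuesCloneDemo {
    enum Mood { HAPPY, SAD }

    public static void main(String[] args) {
        // values() returns a fresh copy of the internal array on
        // every call ...
        Mood[] copy = Mood.values();
        copy[0] = Mood.SAD;  // scribble on the copy only

        // ... so the enum class itself never notices.
        System.out.println(Mood.values()[0]);  // prints HAPPY
    }
}
```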
More information …

Be sure to check out our new course on Java concurrency. Please contact me for more information.

About Dr Heinz M. Kabutz

I have been writing for the Java specialist community since 2000. It’s been fun. It’s even more fun when you share this writing with someone you feel might enjoy it. And they can get it fresh each month if they head for www.javaspecialists.eu and add themselves to the list.

Meta: this post is part of the Java Advent Calendar and is licensed under the Creative Commons 3.0 Attribution license. If you like it, please spread the word by sharing, tweeting, FB, G+ and so on! Want to write for the blog? We are looking for contributors to fill all 24 slots and would love to have your contribution! Contact Attila Balazs to contribute!

Reference: Of Hacking Enums and Modifying “final static” Fields from our JCG partner Attila-Mihaly Balazs at the Java Advent Calendar blog.
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd