
Testing client side of RESTful services

Developing an application that uses a RESTful web API may imply developing both the server and the client side. Writing integration tests for the server side can be as easy as using Arquillian to start up the server and REST-assured to test that the services work as expected. The problem is how to test the client side. In this post we are going to see how to test the client side without using mocks. In brief, to test the client part what we need is a local server which can return recorded JSON responses. The rest-client-driver is a library which simulates a RESTful service. You can set expectations on the HTTP requests you want to receive during a test, so it is exactly what we need for our Java client side. Note that this project is really helpful for writing tests when we are developing RESTful web clients that connect to services developed by third parties like the Flickr REST API, Jira REST API, GitHub … First thing to do is add the rest-client-driver dependency:

<dependency>
    <groupId>com.github.rest-driver</groupId>
    <artifactId>rest-client-driver</artifactId>
    <version>1.1.27</version>
    <scope>test</scope>
</dependency>

Next we are going to create a very simple Jersey client which simply invokes a GET on the required URI.

public class GithubClient {

    private static final int HTTP_STATUS_CODE_OK = 200;

    private String githubBaseUri;

    public GithubClient(String githubBaseUri) {
        this.githubBaseUri = githubBaseUri;
    }

    public String invokeGetMethod(String resourceName) {
        Client client = Client.create();
        WebResource webResource = client.resource(githubBaseUri + resourceName);
        ClientResponse response = webResource.type("application/json")
                .accept("application/json").get(ClientResponse.class);
        int statusCode = response.getStatus();
        if (statusCode != HTTP_STATUS_CODE_OK) {
            throw new IllegalStateException("Error code " + statusCode);
        }
        return response.getEntity(String.class);
    }
}

And now we want to test that invokeGetMethod really gets the required resource.
Let’s suppose that in production code this method will be responsible for getting all issue names from a project registered on GitHub. Now we can start to write the test:

@Rule
public ClientDriverRule driver = new ClientDriverRule();

@Test
public void issues_from_project_should_be_retrieved() {
    driver.addExpectation(
        onRequestTo("/repos/lordofthejars/nosqlunit/issues")
            .withMethod(Method.GET),
        giveResponse(GET_RESPONSE));

    GithubClient githubClient = new GithubClient(driver.getBaseUrl());
    String issues = githubClient.invokeGetMethod("/repos/lordofthejars/nosqlunit/issues");
    assertThat(issues, is(GET_RESPONSE));
}

We use the ClientDriverRule @Rule annotation to add the client driver to a test, and then expectations are recorded using the methods provided by the RestClientDriver class. See how we set the base URL using driver.getBaseUrl(). With rest-client-driver we can also record an HTTP status response using the giveEmptyResponse method:

@Test(expected = IllegalStateException.class)
public void http_errors_should_throw_an_exception() {
    driver.addExpectation(
        onRequestTo("/repos/lordofthejars/nosqlunit/issues")
            .withMethod(Method.GET),
        giveEmptyResponse().withStatus(401));

    GithubClient githubClient = new GithubClient(driver.getBaseUrl());
    githubClient.invokeGetMethod("/repos/lordofthejars/nosqlunit/issues");
}

And obviously we can record a PUT action:

driver.addExpectation(
    onRequestTo("/repos/lordofthejars/nosqlunit/issues")
        .withMethod(Method.PUT).withBody(PUT_MESSAGE, "application/json"),
    giveEmptyResponse().withStatus(204));

Note that in this example we specify that our request must contain the given message body in order to respond with a 204 status code. This is a very simple example, but keep in mind that it also works with libraries like Gson or Jackson. The rest-driver project also comes with a module that can be used to assert server responses (like the REST-assured project), but this topic will be addressed in another post.
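Under the hood, a client driver is essentially a short-lived local HTTP server that plays back recorded responses. As a rough illustration of the idea (this is not rest-client-driver's API), the following pure-JDK sketch uses com.sun.net.httpserver to serve a canned JSON body to a plain GET; the path and body are made up for the example:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.InetSocketAddress;
import java.net.URL;

public class LocalStubServerDemo {

    // Start a throwaway local server that plays back a recorded JSON body,
    // then run a plain GET against it -- the essence of what a client driver does.
    static String fetchCanned() throws Exception {
        final String canned = "[{\"title\":\"issue-1\"}]";

        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/repos/lordofthejars/nosqlunit/issues", exchange -> {
            byte[] body = canned.getBytes("UTF-8");
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        try {
            // The "client under test": a plain GET against the stub's base URL
            URL url = new URL("http://localhost:" + server.getAddress().getPort()
                    + "/repos/lordofthejars/nosqlunit/issues");
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(url.openStream(), "UTF-8"))) {
                return in.readLine();
            }
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fetchCanned());
    }
}
```

A library like rest-client-driver adds the expectation-matching and JUnit lifecycle on top of exactly this kind of local server.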
Reference: Testing client side of RESTful services from our JCG partner Alex Soto at the One Jar To Rule Them All blog....

Is Java Dead or Invincible?

Writer Isaac Asimov once said that ‘the only constant is change’. That isn’t just a phrase in the software industry, it is an absolute fact. Once there was a day when CORBA was king, but it was usurped by Web Services. Even within the world of Web Services, it used to be all about SOAP, but now it’s REST-style services which are much more popular today. Now, some things will obviously hang around a bit longer than others. Relational databases have been around 40 years and aren’t going to be kicked out by NoSQL just yet. The HTTP protocol has been at version 1.1 since 1999 and has helped us use a thing called the Internet. As for Java, well, that has been a pretty popular computer programming language for the last decade and a half. According to Dutch research firm Tiobe, in terms of overall popularity Java ranked 5th in 1997, 1st in 2007 and 2nd in September 2012. At the time of writing there are over 2,000 Java programming books on Amazon in English, and there are almost 300,000 threads on Stack Overflow related to Java. However, as George Orwell once said: ‘Whoever is winning at the moment will always seem to be invincible’. But is Java invincible, or beginning to die? That’s the question being asked more and more now. In my humble opinion, the challenges to Java can be split into three categories: the rise of alternative languages; scalability / multi-core processors; and the return of the fat client. Let’s elaborate…

The rise of alternative languages

Alternative languages can be split into two groups: those that run on the JVM (Scala, Groovy etc.) and those that don’t (Python, Ruby). One interesting thing is that the first group is pretty large. The languages that run on the JVM aren’t mutually exclusive to Java and in a sense strengthen it, reminding us what a remarkable piece of software engineering the JVM is.
Development teams can get that extra bit of expressiveness in a niche language like Groovy, but can still call out to Java when they need some cool Java library or just need that extra bit of performance. Remember, the advances in Groovy 2.0 sped it up, but it is still not as fast as Java. As for the features some of these languages provide that are not in Java – well, that is the case today, but it won’t always be the case. Take a look at the roadmap for Java 8 and the features it will include. Just like Java EE 5 and 6 took ideas from Spring / Seam, the Java language in its 8th major release will be taking ideas from other languages. For example, literal functions will be facilitated by lambdas. Java 8 lambdas will have support for type inference, and because they are just literals it will be possible to pass them around (and return them) just like a String literal or any anonymous Object. That means instead of having to write an implementation of Comparator to pass to the Collections sort utility to sort a list of Strings, in Java 8 we will just do:

Collections.sort(list, (s1, s2) -> s1.length() - s2.length());

So, the alternative JVM languages don’t exactly kick Java out of the party. It is still there, but in a party that has a better selection of music played, and in a party where the guests encourage their host to be a better host.

Scaling on multi-core platforms

As for multi-core and the JVM – we all know that with a JVM running on a single core it was possible to spawn threads in the very first release of Java. But these threads weren’t executing in parallel; the CPU switched between them very quickly to create the impression that they were running in parallel. JStack may tell you that 50 threads have state ‘runnable’ on your single-core machine, but this just means they are either running or eligible to run. With multi-core CPUs it is possible to get true parallelism. The JVM decides when to execute threads in parallel. So what’s the deal here?
Firstly, even though concurrency and threads were a feature of Java from the very beginning, the language support was limited, meaning development teams were writing a lot of their own thread management code – which could get ugly very quickly. This was alleviated greatly in JDK 1.5 with the arrival of a range of thread management features in the java.util.concurrent package. Secondly, to get better parallelism something else was needed. This came in Java 7 with Doug Lea’s Fork / Join framework, which uses clever techniques such as work stealing and double-ended queues (deques) to increase parallelism. However, even with this framework, decomposing (and re-arranging) data is still a task that needs to be done by the programmer. Functional programming gives us another option to perform computations on data sets in parallel. In Scala, for example, you just pass the function you wish to operate on the data and tell Scala you want the computation to be parallelised:

outputAnswer((1 to 5).par.foreach(i => longComputation))

And guess what? The same will be available in Java 8:

Arrays.asList(1, 2, 3, 4, 5).parallelStream().forEach(i -> heavyComputation());

Since scalability and performance are architectural cousins, it is worth stating that in many experiments Java still outperforms other languages. The excellent Computer Language Benchmarks Game shows Java outperforming many languages. It hammers the likes of Perl, PHP, Python 3 and Erlang in many tests, beats Clojure and C# in nearly all tests, and is only just behind C++ in the performance results. Now, performance tests can’t cover everything, and a context will always have some bias which favours one language over another, but going by these tests it is not as if Java is a slow coach.

The return of the fat client

Since the advent of AJAX, Doug Crockford telling people how to use JavaScript, and the rise of an excellent selection of JavaScript libraries, the fat client is truly back.
Close your eyes and imagine what a cool single-page web application such as Gmail would look and feel like if it was just a thin-client web framework based on Spring MVC, JSF or Struts – you just cannot beat the performance of a well-designed fat client. One saving grace is that JavaScript is a lot more difficult to be good at than some people think. It takes a lot more thinking to really understand closures, modules and the various JavaScript best practices than it does to know your way around a web framework like Spring MVC or Struts. In addition, building a single-page web application (again, such as Gmail) doesn’t just require excellent JavaScript understanding, it requires understanding of how the web works. For example, browsers don’t put Ajax requests in the browser history. So you gotta do something smart with fragment identifiers if you want the back and forward buttons to be usable and meaningful for the user. There is probably some room here for a hybrid approach which uses both a web framework and JavaScript, and of course some JavaScript libraries. This gives developers a structure to build an application, and then the opportunity to use JavaScript, jQuery or whatever cool library takes their fancy for important parts of the application that need to be jazzed up. In a true fat-web-client approach, there should be no HTML served from the server (that means no JSPs); the only thing coming back from the server is data (in the form of JSON). However, using a hybrid approach you can make the transition from thin to fat easier, and you can still put your JavaScript libraries on a CDN – you just won’t get all the advantages of a full fat-web-client approach.

Summary

In summary, Java has had some bad moments. AWT was a rush job, Swing had performance problems, the early iterations of EJB were cumbersome, and JSF was problematic in comparison to other frameworks such as Struts and Spring MVC.
But, even today, extremely innovative projects such as Hadoop are built using Java. It still has massive support from the open source community. This support has not only helped Java but also shown Java some of its problems and things it needs to get better at. Java has shown it has the ability to evolve further, and while other languages challenge it, I don’t think the game is over just yet. It goes without saying that a lot of the future of Java will depend on Oracle, but let’s hope that whatever happens, the winner will be technology.

Related Links
- Yammer and their migration to Scala
- James Gosling talking about the state and future of Java at a Google tech talk
- Article from Oracle describing Fork and Join in Java 7
- Eric Bruno: Building Java Multi-Core applications
- Edgardo Hernandez: Parallel processing in Java
- IEEE top ten programming languages
- JDK 8 downloads
- Java Code Geeks article on Fork Join
- Good Fork Join article by Edward Harned
- Fork / Join paper from Doug Lea
- Fork / Join Java updates information from Doug Lea
- Scala Java Myths – great article by Urs Peter and Sander van der Berg

Reference: Is Java Dead or Invincible? from our JCG partner Alex Staveley at the Dublin’s Tech Blog blog....

JCG Flashback – 2011 – W38

Hello guys, Time for another post in the JCG Flashback series. These posts will take you back in time and list the hot articles on Java Code Geeks from one year ago. So, let’s see what was popular back then:

5. Configuration Management in Java EE – This article discusses configuration management in a Java EE environment and examines the various approaches to it.
4. Java Concurrency Tutorial – CountDownLatch – Concurrency utilities in Java again; this is a tutorial on CountDownLatch. Learn how to use it in order to coordinate code execution among multiple threads.
3. Database schema navigation in Java – This is a tutorial on jOOQ (a fluent API for typesafe SQL query construction and execution) and how to use it for database schema navigation.
2. Measuring Code Complexity – This article talks about technical debt, code complexity and the various tools used to measure it.
1. Don’t rewrite Your Application – An article discussing the all-time classic question of whether you should rewrite a legacy application or carry it along, perhaps applying a bit of refactoring!

That’s all guys. Stay tuned for more, here at Java Code Geeks. And don’t forget to share! Cheers, Ilias ...

Spring Testing Support and Context caching

Spring provides comprehensive support for unit and integration testing – through annotations to load up a Spring application context and through integration with unit testing frameworks like JUnit and TestNG. Since loading up a large application context for every test takes time, Spring intelligently caches the application context for a test suite – typically when we execute the tests of a project, say through Ant or Maven, a suite is created encompassing all the tests in the project. There are a few points to note with caching, which is what I intend to cover here; this is not likely to be comprehensive but is based on some situations I have encountered:

1. Caching is based on the locations of the Spring application context files. Consider a sample Spring configuration file:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:context="http://www.springframework.org/schema/context"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:p="http://www.springframework.org/schema/p"
    xsi:schemaLocation="
        http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd">

    <bean id="user1" class="org.bk.lmt.domain.TaskUser" p:username="user1" p:fullname="testUser1" />
    <bean name="user2" class="org.bk.lmt.domain.TaskUser" p:username="user2" p:fullname="testUser" />
    <bean class="org.bk.contextcaching.DelayBean"/>
</beans>

And a sample test to load up this context file and verify something:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { "contexttest.xml" })
public class Test1 {
    @Autowired Map<String, TaskUser> usersMap;

    @Test
    public void testGetAUser() {
        TaskUser user = usersMap.get("user1");
        assertThat(user.getFullname(), is("testUser1"));
    }
}

I have deliberately added a bean (DelayBean) which takes about 2 seconds to instantiate, to simulate Spring application contexts which are slow to load up. If I now run a small test suite with two tests, both using the same application context, the behavior is that the first test takes about 2 seconds to run through, but the second test runs through quickly because of context caching. If there were a third test using a different application context, this test would again take time to run through, as the new application context has to be loaded up:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { "contexttest2.xml" })
public class Test3 { ... }

2. Caching of application contexts respects the active profile under which a test is run – essentially the profile is also part of the internal key that Spring uses to cache the context. So if two tests use the exact same application context but a different profile is active for each of them, the cached application context will not be used for the second test:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { "contexttest.xml" })
@ActiveProfiles("dev1")
public class Test1 { ....

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { "contexttest.xml" })
@ActiveProfiles("dev2")
public class Test2 { ....

3. Caching of application contexts applies even with the new @Configuration style of defining an application context and using it in tests:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = {TestConfiguration.class})
public class Test1 { ...

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = {TestConfiguration.class})
public class Test2 { ....

One implication of caching is that if a test class modifies the state of a bean, then another class in the test suite which uses the cached application context will end up seeing the modified bean instead of the bean the way it was defined in the application context. For example, consider two tests, both of which modify a bean in the context but assert on the state the way it is defined in the application context – here one of the tests would end up failing (based on the order in which JUnit executes the tests):

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = {TestConfiguration.class})
public class Test1 {
    @Autowired Map<String, TaskUser> usersMap;

    @Test
    public void testGetAUser1() {
        TaskUser user = usersMap.get("user1");
        assertThat(user.getFullname(), is("testUser1"));
        user.setFullname("New Name");
    }

    @Test
    public void testGetAUser2() {
        TaskUser user = usersMap.get("user1");
        assertThat(user.getFullname(), is("testUser1"));
        user.setFullname("New Name");
    }
}

The fix is to instruct Spring test support that the application context is now dirty and needs to be reloaded for other tests, and this is done with the @DirtiesContext annotation, which can be specified at the test class level or the test method level:

@Test
@DirtiesContext
public void testGetAUser2() { ...

Happy coding and don’t forget to share! Reference: Spring Testing Support and Context caching from our JCG partner Biju Kunjummen at the all and sundry blog....
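As a closing aside to the caching rules above: conceptually the cached contexts behave like entries in a map whose key combines the context locations (or classes) and the active profiles. This pure-Java sketch (not Spring's actual implementation; names are illustrative) shows why a different active profile forces a second, slow context load:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ContextCacheSketch {

    // Key = context locations + active profiles; value = the "expensive" context object
    private final Map<List<String>, Object> cache = new HashMap<>();
    private int loads = 0;

    Object getContext(List<String> locations, List<String> profiles) {
        List<String> key = new ArrayList<>(locations);
        key.addAll(profiles);
        // Only build (load) a context when no entry exists for this exact key
        return cache.computeIfAbsent(key, k -> { loads++; return new Object(); });
    }

    int contextLoads() {
        return loads;
    }

    public static void main(String[] args) {
        ContextCacheSketch sketch = new ContextCacheSketch();
        Object a = sketch.getContext(List.of("contexttest.xml"), List.of("dev1"));
        Object b = sketch.getContext(List.of("contexttest.xml"), List.of("dev1"));
        Object c = sketch.getContext(List.of("contexttest.xml"), List.of("dev2"));
        System.out.println(a == b); // true: same key, cached context reused
        System.out.println(a == c); // false: different profile forces a new context
        System.out.println(sketch.contextLoads()); // 2
    }
}
```

@DirtiesContext, in these terms, simply evicts the entry for the current key so the next test rebuilds it.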

Spring MVC 3 Template and Apache Tiles

An efficient design consideration for any web application is the use of a template engine (or tool), and with Spring’s “pluggable” nature it is much easier to integrate template mechanisms such as Apache Tiles. In this simple post I will give you a brief intro to the basics of using Tiles as a template engine for your web application!

Get it ready:
- Web application setup
- Set up Maven and import the Spring MVC libraries plus Apache Tiles
- Configuration file for Tiles
- Use it!

1st: Web layout and application setup – Get your web application framework ready. For this example I used Spring 3 MVC with all the minimal components readily injected. Download it here. The project is Eclipse-ready, so you can just import it and load it in your STS (Spring Tool Suite) workspace.

2nd: Set up Maven and generate sources – STS already has Maven plugin support. First put a Maven nature on the project by right-clicking on the project > Configure > Convert to Maven Project.

3rd: POM configuration – Load Tiles in the pom.xml. You need to include the following dependencies to add the Apache Tiles libraries to the project:

<!-- For Tiles -->
<dependency>
    <groupId>org.apache.tiles</groupId>
    <artifactId>tiles-core</artifactId>
    <version>2.2.2</version>
    <type>jar</type>
    <scope>compile</scope>
</dependency>
<dependency>
    <groupId>org.apache.tiles</groupId>
    <artifactId>tiles-template</artifactId>
    <version>2.2.2</version>
    <type>jar</type>
    <scope>compile</scope>
</dependency>
<dependency>
    <groupId>org.apache.tiles</groupId>
    <artifactId>tiles-jsp</artifactId>
    <version>2.2.2</version>
    <type>jar</type>
    <scope>compile</scope>
</dependency>
<dependency>
    <groupId>org.apache.tiles</groupId>
    <artifactId>tiles-servlet</artifactId>
    <version>2.2.2</version>
    <type>jar</type>
    <scope>compile</scope>
</dependency>

4th: XML configuration for class-loaded beans – Make sure to set up the Tiles XML and call it either directly or from another XML bean configuration file.

5th: Templates – Create the templates.
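A Tiles definitions file for this step might look like the sketch below; apart from mainTemplate.jsp, the body-position attribute and jsp/userregistration.jsp, which are mentioned in the post, all names and paths are illustrative assumptions:

```xml
<!DOCTYPE tiles-definitions PUBLIC
    "-//Apache Software Foundation//DTD Tiles Configuration 2.1//EN"
    "http://tiles.apache.org/dtds/tiles-config_2_1.dtd">
<tiles-definitions>
    <!-- The page layout; body-position is the slot each page fills in -->
    <definition name="mainTemplate" template="/WEB-INF/jsp/mainTemplate.jsp">
        <put-attribute name="body-position" value="" />
    </definition>
    <!-- The page that will be called, with the body we defined -->
    <definition name="registerUser" extends="mainTemplate">
        <put-attribute name="body-position" value="/jsp/userregistration.jsp" />
    </definition>
</tiles-definitions>
```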
tiles-definition – Define the page using the template (mainTemplate.jsp). mainTemplate.jsp is the page layout – put the definition attributes there. registerUser is the page that will be called; the body-position attribute is replaced by the body we defined: jsp/userregistration.jsp.

6th: Configure the database – Go to data-access-config.xml in your META-INF folder. SQL script:

delimiter $$
CREATE DATABASE `MDCDB` /*!40100 DEFAULT CHARACTER SET latin1 */$$

delimiter $$
CREATE TABLE `MDC_USERS` (
  `ID` int(11) unsigned zerofill NOT NULL AUTO_INCREMENT,
  `NAME` varchar(45) DEFAULT NULL,
  PRIMARY KEY (`ID`)
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=latin1$$

Run the application!

With the quality and quantity of application development tools, using templates is not new. Creating them is now practically mandatory, as it will really help the development team create quality UIs faster and better. It also allows developers and designers to work in parallel: designers using a theming API, say jQuery, and developers creating the backbone and logic of the application – using EJBs – make the definition of “ease of development” more apparent. Download my sample and open it in your STS (Spring Tool Suite) here. Make sure you have the Hibernate and Maven plugins installed. Reference: Spring MVC 3 with Template using Apache Tiles from our JCG partner Alvin Reyes at the Alvin “Jay” Reyes Blog blog....

Chain Of Responsibility Design Pattern Example

Avoid coupling the sender of a request to the receiver by giving more than one object a chance to handle the request. Chain the receiving objects and pass the request along the chain until an object handles it. The main intention in Chain of Responsibility is to decouple the origin of the request from the handling of the request, such that the origin of the request need not worry about who handles its request or how, as long as it gets the expected outcome. By decoupling the origin of the request from the request handler, we make sure that both can change easily and new request handlers can be added without the origin of the request, i.e. the client, being aware of the changes. In this pattern we create a chain of objects such that each object holds a reference to another object, which we call its successor, and these objects are responsible for handling the request from the client. All the objects which are part of the chain are created from classes which conform to a common interface, thereby the client only needs to be aware of the interface and not necessarily the types of its implementations. The client assigns the request to the first object in the chain, and the chain is created in such a way that there is at least one object which can handle the request, or else the client is made aware of the fact that its request couldn’t be handled. With this brief introduction I would like to put forth a very simple example to illustrate this pattern. In this example we create a chain of file parsers such that, depending on the format of the file being passed to the parser, the parser has to decide whether it’s going to parse the file or pass the request to its successor parser to take action. The parsers we chain are: simple text file parser, JSON file parser, CSV file parser and XML file parser. The parsing logic in each of these parsers doesn’t parse any file; instead it just prints out a message stating who is handling the request for which file.
We then populate file names of different formats into a list and then iterate through them, passing each file name to the first parser in the chain. Let’s define the Parser class; first let me show the class diagram for the Parser class. The Java code for the same is:

public class Parser {
    private Parser successor;

    public void parse(String fileName) {
        if (getSuccessor() != null) {
            getSuccessor().parse(fileName);
        } else {
            System.out.println("Unable to find the correct parser for the file: " + fileName);
        }
    }

    protected boolean canHandleFile(String fileName, String format) {
        return (fileName == null) || (fileName.endsWith(format));
    }

    Parser getSuccessor() {
        return successor;
    }

    void setSuccessor(Parser successor) {
        this.successor = successor;
    }
}

We now create different handlers for parsing the different file formats – simple text file, JSON file, CSV file, XML file – and these extend the Parser class and override the parse method. I have kept the implementation of the different parsers simple: these methods evaluate whether the file has the format they are looking for. If a particular handler is unable to process the request, i.e. the file format is not what it is looking for, then the parent method handles such requests. The handler method in the parent class just invokes the same method on the successor handler.
The simple text parser:

public class TextParser extends Parser {

    public TextParser(Parser successor) {
        this.setSuccessor(successor);
    }

    @Override
    public void parse(String fileName) {
        if (canHandleFile(fileName, ".txt")) {
            System.out.println("A text parser is handling the file: " + fileName);
        } else {
            super.parse(fileName);
        }
    }
}

The JSON parser:

public class JsonParser extends Parser {

    public JsonParser(Parser successor) {
        this.setSuccessor(successor);
    }

    @Override
    public void parse(String fileName) {
        if (canHandleFile(fileName, ".json")) {
            System.out.println("A JSON parser is handling the file: " + fileName);
        } else {
            super.parse(fileName);
        }
    }
}

The CSV parser:

public class CsvParser extends Parser {

    public CsvParser(Parser successor) {
        this.setSuccessor(successor);
    }

    @Override
    public void parse(String fileName) {
        if (canHandleFile(fileName, ".csv")) {
            System.out.println("A CSV parser is handling the file: " + fileName);
        } else {
            super.parse(fileName);
        }
    }
}

The XML parser:

public class XmlParser extends Parser {
    @Override
    public void parse(String fileName) {
        if (canHandleFile(fileName, ".xml")) {
            System.out.println("A XML parser is handling the file: " + fileName);
        } else {
            super.parse(fileName);
        }
    }
}

Now that we have all the handlers set up, we need to create the chain of handlers. In this example the chain we create is: TextParser -> JsonParser -> CsvParser -> XmlParser. And if XmlParser is unable to handle the request, then the Parser class prints out a message stating that the request was not handled. Let’s see the code for the client class, which creates a list of file names and then creates the chain I just described:

import java.util.ArrayList;
import java.util.List;

public class ChainOfResponsibilityDemo {

    public static void main(String[] args) {
        // List of file names to parse.
        List<String> fileList = populateFiles();

        // No successor for this handler because it is the last in the chain.
        Parser xmlParser = new XmlParser();
        // XmlParser is the successor of CsvParser.
        Parser csvParser = new CsvParser(xmlParser);
        // CsvParser is the successor of JsonParser.
        Parser jsonParser = new JsonParser(csvParser);
        // JsonParser is the successor of TextParser.
        // TextParser is the start of the chain.
        Parser textParser = new TextParser(jsonParser);

        // Pass each file name to the first handler in the chain.
        for (String fileName : fileList) {
            textParser.parse(fileName);
        }
    }

    private static List<String> populateFiles() {
        List<String> fileList = new ArrayList<>();
        fileList.add("someFile.txt");
        fileList.add("otherFile.json");
        fileList.add("xmlFile.xml");
        fileList.add("csvFile.csv");
        fileList.add("csvFile.doc");
        return fileList;
    }
}

In the file name list above I have intentionally added a file name for which there is no handler. Running the above code gives us the output:

A text parser is handling the file: someFile.txt
A JSON parser is handling the file: otherFile.json
A XML parser is handling the file: xmlFile.xml
A CSV parser is handling the file: csvFile.csv
Unable to find the correct parser for the file: csvFile.doc

Happy coding and don’t forget to share! Reference: Simple example to illustrate Chain Of Responsibility Design Pattern from our JCG partner Mohamed Sanaulla at the Experiences Unlimited blog....

Use Cases for Iterative Development

Almost everything I’ve read about use cases focuses on describing what needs to be added to your product. Agile development says “get it working first, make it better second.” That means changing the way the software enables a user to do something they can already do. How do you manage requirements for incremental improvement?

Iterative Development and Incremental Improvement

Iterative development is the process by which you release software in iterations – small, incrementally better versions of your software. People usually think about this solely in terms of enabling the most valuable capabilities first, then the next most valuable capability. That’s fine when your product is new. Eventually (or quickly!) you will reach the point when the next most valuable improvement is not to add a new capability, but rather to make an existing capability more valuable.

Everything is an Upgrade

In a couple of earlier articles about organizing migration projects and requirements for migration projects, I wrote about how migration is a bit of a misnomer. Everything is a migration. My argument in those earlier articles is that the problems you’re solving with your “new” solution are not new. Your customers are currently solving them in different ways. The relevant question for migration projects is how much your users are changing the way they solve their problems. [As a slight segue, failing to appreciate this distinction can lead you to a project that is based on a faulty problem statement.] When you are delivering incremental improvements to your product, through iterative development, you are upgrading your software. You are migrating your users from their old solution to their new solution. It just so happens that you are replacing yourself, as you migrate them from the old version to the new one. A viable strategy, which appeals to me personally, is to continuously innovate, disrupting your market before someone else does.
With this approach, you are intentionally reinventing your solution, making it difficult for someone else to out-innovate you. This is a great way to approach developing your roadmap strategically. Your market will change. Rapidly. Do you want to react to your competitors, or keep them on their heels while you stay on the balls of your feet? The reverberations of the “new” Twitter still haven’t settled, and many of Twitter’s competitors are on their heels – some of them only now realizing that Twitter has always been a competitor and not just “a platform.” The new Twitter doesn’t really change anything about how people use Twitter, except make it (markedly) better.

Avoiding Featuritis

When is it more valuable to improve what your software already enables, versus enabling users to accomplish more with your software? Kathy Sierra introduced us to the concept of featuritis, the idea that too much is too much. In an article on viral product management, I proposed that by making your software better, you can tap into an altruistic mechanism by which people will promote your product. As an example, I proposed improving the usability of your software as a means to cross this viral tipping point. One of the key ideas is that making it better for users makes your product better for your company.

Managing “Improvement” Requirements

Now that we all agree that there is value in making your software better [comment below if you disagree], the question becomes “How?!” Use cases can be used as the keystone of your requirements arch. Sometimes user stories are better than use cases, but use cases are perfectly agile. When you’re starting a project, you want to apply an outside-in approach to defining the scope of that project – or more particularly, to define the scope of problems you are solving in your roadmap. This article will talk in the language of use cases, but identical concepts and approaches apply when using user stories as well.
When you have prioritized a use case as “the next most valuable capability to introduce,” you add it to the backlog for your sprint. The implementation team then provides feedback that the use case is “too big” and you have to slice it up. It is important that you don’t split the use case in such a way that you only solve half the problem. You also shouldn’t deliver a “bad” product. In fact, the ideal way to introduce a new capability is to satisfice, and not introduce the “perfect” solution in your first iteration. Make your first solution “good enough.” Part of the art is defining “good enough,” but rest assured, it is not the same as “the best you can possibly do.” Summarizing some key elements from above demonstrates the eventual reality you will face:

- Every problem* your users face is already being solved; you are just providing a better solution.
- There is a diminishing, and eventually negative, return on adding more capabilities to your product.
- Your first release of a given capability will only be “good enough.” That leaves room for improvement.

*Yes, you can pedantically define “the problem” as “the existing solution is not good enough” – but you still end up in the same place, so why bother?

Eventually, the value of improving something you’ve already released will be greater than the value of releasing something new. What do you do? You implement the same use case again. If your product is Software as a Service (SaaS), you absolutely should be thinking about things this way. Becoming better is a continuous improvement objective that is implicit in the economics of SaaS products.

Revisiting Use Cases

Not to be confused with re-use of use cases, revisiting a use case is specifically revisiting – with the goal of improving – the implementation that is already in place within your product.
This is a special case of a migration project – you’re migrating your users from your old solution to your new and improved solution. You may be migrating to an identical process, perhaps with faster performance than your previous release was capable of. Or you may be improving the procedure (improving usability, interaction design, visual design, etc.) without affecting the process. Generally, a “make it better” use case improvement effort will have no more than a minor process impact. Your users are still doing the same thing; they are just enjoying it more, or doing it more effectively. When slicing use cases to make them fit within a single sprint, you may design them for a single user persona in the first release, knowing that you won’t meet the expectations of a different user persona until the next release. In this case, your non-functional requirements, constraints, or acceptance criteria will be different. Your use case may also be different (in the details) while appearing to be the same. This common scenario, too, can be addressed with the same approach. Find that old use case (from several iterations ago) and put it back in the backlog. Remember, agile is about conversation, not artifacts. If you need to add an explanation to the use case, because your team fixates on the fact that it is “already working,” add a qualifier that it needs to be “better.” Then have a conversation, and explain how it needs to be better.

Is Design Part of Implementation?

Some teams organize such that designers (both architects designing code and designers designing interfaces are being addressed here) are part of the implementation team. Other companies treat designers as stakeholders – providing guidance and input to the product managers and product owners. When a “new design” is being requested from outside the team, include that design guidance as part of the input to the implementation team – through artifact and / or clarification.
When a “new design” is being requested of the team (the designer is part of the team), express the new acceptance criteria to the team. Note that these need to be measurable requirements if they are to be considered good requirements.

Summary

There will come a time when the most valuable thing you can do is improve the user experience for a capability your product already embodies. When that time comes, use the same use case again, updated with the new constraints, non-functional requirements, and acceptance criteria. Reference: Use Cases for Iterative Development from our JCG partner Scott Sehlhorst at the Business Analysis | Product Management | Software Requirements blog....
java-duke-logo

Transparent JFrame using JNA

In Make JFrame transparent I had shown a way of making frames transparent using the AWTUtilities class. Using that class results in an access-restriction compile-time error; the resolution in Eclipse is also shown in that post. Here is a version using Java natives instead. I have used the Java Native Access (JNA) library to call native functions to get the job done.

What is Java Native Access (JNA)? JNA provides Java programs easy access to native shared libraries (DLLs on Windows) without writing anything but Java code – no JNI or native code is required. JNA allows you to call directly into native functions using natural Java method invocation.

The Code

import javax.swing.JFrame;
import javax.swing.JSlider;
import javax.swing.SwingUtilities;
import javax.swing.event.ChangeEvent;
import javax.swing.event.ChangeListener;

import com.sun.jna.platform.WindowUtils;

public class TransparentFrame extends JFrame {

    public TransparentFrame() {
        setTitle("Transparent Frame");
        setSize(400, 400);
        setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        JSlider slider = new JSlider(JSlider.HORIZONTAL, 30, 100, 100);
        slider.addChangeListener(new ChangeListener() {
            @Override
            public void stateChanged(ChangeEvent e) {
                JSlider slider = (JSlider) e.getSource();
                if (!slider.getValueIsAdjusting()) {
                    WindowUtils.setWindowAlpha(TransparentFrame.this, slider.getValue() / 100f);
                }
            }
        });
        add(slider);
        setVisible(true);
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(new Runnable() {
            @Override
            public void run() {
                new TransparentFrame();
            }
        });
    }
}

Here the WindowUtils class is provided in the JNA platform jar (platform.jar). The setWindowAlpha method of WindowUtils is used to make a window transparent. The first argument of this method is your frame/window and the second argument is the alpha value.
This class also has a method called setWindowTransparent, which can also be used to make a window transparent.

Dependencies

You will need the following two jars to run this program (both jar files are available to download on GitHub for JNA):

- jna.jar
- platform.jar

To run the above code on Windows, you will need to set the “sun.java2d.noddraw” system property before calling the WindowUtils function:

System.setProperty("sun.java2d.noddraw", "true");

Output

Additional Notes

I have tested this code on the following machines:

- Windows XP Service Pack 3 (32 bit)
- Windows 7 (32 bit)
- CentOS 5 (32 bit)

If you test it on other machines, or have code for other machines using JNA for the same functionality, feel free to share it as a comment on this post. Happy coding and don’t forget to share! Reference: Transparent JFrame using JNA from our JCG partner Harsh Raval at the harryjoy blog....
spring-logo

Spring 3.1 Caching and Config

I’ve recently been blogging about Spring 3.1 and its new caching annotations, @Cacheable and @CacheEvict. As with all Spring features, you need to do a certain amount of setup and, as usual, this is done with Spring’s XML configuration file. In the case of caching, turning on @Cacheable and @CacheEvict couldn’t be simpler, as all you need to do is add the following to your Spring config file:

<cache:annotation-driven />

…together with the appropriate schema definition in your beans XML element declaration:

<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:p="http://www.springframework.org/schema/p"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:cache="http://www.springframework.org/schema/cache"
    xmlns:context="http://www.springframework.org/schema/context"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/context
        http://www.springframework.org/schema/context/spring-context-3.1.xsd
        http://www.springframework.org/schema/cache
        http://www.springframework.org/schema/cache/spring-cache.xsd">

…with the salient lines being:

xmlns:cache="http://www.springframework.org/schema/cache"

…and:

http://www.springframework.org/schema/cache
http://www.springframework.org/schema/cache/spring-cache.xsd

However, that’s not the end of the story, as you also need to specify a cache manager and a caching implementation. The good news is that if you’re familiar with the setup of other Spring components, such as the database transaction manager, then there are no surprises in how this is done. A cache manager is any class that implements Spring’s org.springframework.cache.CacheManager interface. It’s responsible for managing one or more cache implementations, where the cache implementation instance(s) are responsible for actually caching your data. The XML sample below is taken from the example code used in my last two blogs.
<bean id="cacheManager" class="org.springframework.cache.support.SimpleCacheManager">
    <property name="caches">
        <set>
            <bean class="org.springframework.cache.concurrent.ConcurrentMapCacheFactoryBean"
                p:name="employee"/>
            <!-- TODO Add other cache instances in here -->
        </set>
    </property>
</bean>

In the above configuration, I’m using Spring’s SimpleCacheManager to manage an instance of their ConcurrentMapCacheFactoryBean with a cache implementation named “employee”. One important point to note is that your cache manager MUST have a bean id of cacheManager. If you get this wrong then you’ll get the following exception:

org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.cache.interceptor.CacheInterceptor#0': Cannot resolve reference to bean 'cacheManager' while setting bean property 'cacheManager'; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'cacheManager' is defined
    at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:328)
    at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:106)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1360)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1118)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:517)
    :
    : trace details removed for clarity
    :
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
Caused by: org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'cacheManager' is defined
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBeanDefinition(DefaultListableBeanFactory.java:553)
    at org.springframework.beans.factory.support.AbstractBeanFactory.getMergedLocalBeanDefinition(AbstractBeanFactory.java:1095)
    at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:277)
    at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:193)
    at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:322)

As I said above, in my simple configuration the whole affair is orchestrated by the SimpleCacheManager. This, according to the documentation, is normally “Useful for testing or simple caching declarations”. Although you could write your own CacheManager implementation, the Guys at Spring have provided other cache managers for different situations:

- SimpleCacheManager – see above.
- NoOpCacheManager – used for testing, in that it doesn’t actually cache anything – although be careful here, as testing your code without caching may trip you up when you turn caching on.
- CompositeCacheManager – allows the use of multiple cache managers in a single application.
- EhCacheCacheManager – a cache manager that wraps an ehCache instance. See http://ehcache.org

Selecting which cache manager to use in any given environment seems like a really good use for Spring Profiles.
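To make the cache-manager / cache split above more concrete, here is a stdlib-only sketch of the behaviour a ConcurrentMap-backed cache gives you: a hit returns the stored value, a miss invokes the loader once, and an evict forces the next call to reload. This is purely illustrative – it is not Spring’s implementation, and the class and method names are made up for the example:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Illustrative only: mimics what a ConcurrentMap-backed cache does for
// @Cacheable (get-or-compute) and @CacheEvict (remove). Not Spring code.
public class MapCacheSketch {

    // One named cache per entry, e.g. "employee" -> its key/value map.
    private final Map<String, Map<Object, Object>> caches = new ConcurrentHashMap<>();

    // Roughly what a call to an @Cacheable("employee") method amounts to:
    // return the cached value for the key, computing it only on a miss.
    public Object getOrCompute(String cacheName, Object key, Function<Object, Object> loader) {
        return caches.computeIfAbsent(cacheName, name -> new ConcurrentHashMap<>())
                     .computeIfAbsent(key, loader);
    }

    // Roughly what @CacheEvict does: drop the entry so it is reloaded next time.
    public void evict(String cacheName, Object key) {
        Map<Object, Object> cache = caches.get(cacheName);
        if (cache != null) {
            cache.remove(key);
        }
    }
}
```

Conceptually, swapping this map for an ehCache instance is all that changing the cache manager does – the annotations on your service code stay the same.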
See: Using Spring Profiles in XML Config and Using Spring Profiles and Java Configuration.

And that just about wraps things up, although just for completeness, below is the complete configuration file used in my previous two blogs:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:p="http://www.springframework.org/schema/p"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:cache="http://www.springframework.org/schema/cache"
    xmlns:context="http://www.springframework.org/schema/context"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/context
        http://www.springframework.org/schema/context/spring-context-3.1.xsd
        http://www.springframework.org/schema/cache
        http://www.springframework.org/schema/cache/spring-cache.xsd">

    <!-- Switch on the Caching -->
    <cache:annotation-driven />

    <!-- Do the component scan path -->
    <context:component-scan base-package="caching" />

    <!-- simple cache manager -->
    <bean id="cacheManager" class="org.springframework.cache.support.SimpleCacheManager">
        <property name="caches">
            <set>
                <bean class="org.springframework.cache.concurrent.ConcurrentMapCacheFactoryBean"
                    p:name="employee"/>
                <!-- TODO Add other cache instances in here -->
            </set>
        </property>
    </bean>

</beans>

As Lieutenant Columbo is fond of saying, “And just one more thing, you know what bothers me about this case…” – well, there are several things that bother me about cache managers, for example:

- What do the Guys at Spring mean by “Useful for testing or simple caching declarations” when talking about the SimpleCacheManager? Just exactly when should you use it in anger rather than for testing?
- Would it ever be advisable to write your own CacheManager implementation, or even a Cache implementation?
- What exactly are the advantages of using the EhCacheCacheManager?
- How often would you really need CompositeCacheManager?

All of which I may be looking into in the future… Happy coding and don’t forget to share! Reference: Spring 3.1 Caching and Config from our JCG partner Roger Hughes at the Captain Debug’s Blog blog....
software-development-2-logo

Do Web 2.0 Companies Really Have The Best Technical Talent?

There are a lot of cool companies with products on the web that millions of people are using. I wondered whether I should label them “web 2.0”, “silicon valley”, “cool startups”, or something else, but I think it’s clear which ones I’m writing about. The assumption is that these companies attract the best technical talent. And even despite my criticism of their interview process, I still used to think that the best developers and software engineers do indeed go to these popular web companies. But due to a couple of observations, I’m no longer certain. These companies have been making junior-developer mistakes in many areas, and I can’t imagine that a “top talent” could allow this to happen.

What raised my concerns is security. And it’s not computer-science, heavy-cryptography, algorithm-design type of security. It’s the simple web security that every developer out there should actually know – how to store passwords. I was astonished to learn that Yahoo, LinkedIn, and recently Pandora were revealed not to be hashing and salting passwords, which led to password leaks. (Note: the case with Pandora is a bit less trivial, as noted in the HN comments.) It’s the bare minimum thing you should do, and any sane developer who sees that the company stores passwords in plaintext, or uses bare MD5 hashes, or doesn’t add salt, should just go and fix that (with management approval, yes). “Legacy” is not an issue here – you can hash & salt the existing plaintext passwords, and you can add salt to an existing hash the next time a user logs in (by getting the actual password on login in order to generate a new hash). But many companies haven’t done that. I would be ashamed to work on a project that doesn’t follow these practices, widely known for years, especially if it has millions of users.

But enough with security. Let’s talk about API design. API design is hard, but it is manageable by top engineers. And yet, there are many instances of unstable, not-well-designed APIs. Facebook, for example. They are improving it now, but it used to be horrible. The Android core APIs look (or used to look, in the first versions) as if written by a freshman (just a simple example from my Android experience: cursor.getInt(cursor.getColumnIndex(CallLog.Calls.TYPE)); many methods with more than 5 arguments; etc.). Related to APIs – Salesforce XSDs sometimes cannot be parsed because they are invalid; we have been fixing their XSDs in order to communicate with them. And these are the things that we see on the surface. I’ve heard “horror” stories, like writing web projects in C++, and the other day I took a look at the code of reddit, which (even though I’m not a Python developer) struck me with some really odd stuff (won’t go into details). I guess many people have heard or seen a lot of “wtf” moments that a “best developer” just wouldn’t produce.

So is it really the case that these silicon valley / web 2.0 companies have the best developers, or are they just regular companies with average developers doing stupid things? There are certainly some great developers in these companies who do “magic” and “insane” stuff, but apart from the stars, are the rest of the developers also “the best”? I’m no longer sure this is the case. Don’t forget to share! Reference: Do Web 2.0 Companies Really Have The Best Technical Talent? from our JCG partner Bozhidar Bozhanov at the Bozho’s tech blog blog....
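To put some code behind the password-storage point above: the “bare minimum” amounts to a random per-user salt plus a deliberately slow key-derivation function. Here is a sketch using only the JDK (Java 8+); the class name and the PBKDF2 parameters are illustrative, not a production tuning recommendation:

```java
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

// Illustrative sketch: salted, deliberately slow password hashing
// instead of plaintext or bare MD5. Names/parameters made up for the example.
public class PasswordStorageSketch {

    private static final SecureRandom RANDOM = new SecureRandom();

    // Generate a fresh random salt for each user; store it alongside the hash.
    public static String newSalt() {
        byte[] salt = new byte[16];
        RANDOM.nextBytes(salt);
        return Base64.getEncoder().encodeToString(salt);
    }

    // Derive a hash from password + salt via PBKDF2 (many iterations on purpose,
    // so brute-forcing a leaked database is expensive).
    public static String hash(String password, String salt) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password.toCharArray(),
                Base64.getDecoder().decode(salt), 10000, 256);
        byte[] derived = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1")
                .generateSecret(spec).getEncoded();
        return Base64.getEncoder().encodeToString(derived);
    }
}
```

The “legacy” upgrade path from the article follows directly: hash & salt the stored plaintext values now, or re-hash with a fresh salt the next time each user logs in and presents the real password.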
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.