Java Examples & Code Snippets by Java Code Geeks – Official Launch

Hi all,

Here at Java Code Geeks we are striving to create the ultimate Java resource center for Java developers. In that direction, during the past few months we have made partnerships, we have set up a Java and Android tutorials page and we have created open source software. But we did not stop there. We are now proud to announce the official launch of our Java Examples & Code Snippets dedicated site. There you will find a wealth of Java snippets that will help you understand basic Java concepts, use the JDK API and kick start your applications by leveraging existing Java technologies.

The main categories currently are:
- Java Basics
- Core Java
- Enterprise Java
- Desktop Java
- Android

Our goal was to make the snippets as easy to use as possible. For this reason, the vast majority of them can be used as standalone applications that showcase how to use the particular API. The snippets are ready to go: just copy and paste, run the application and see the results. We hope that this effort will be a great aid to the community and we are really glad to have helped create it. We would be delighted if you helped spread the word, allowing more and more developers to come in contact with our content. Don't forget to share!

Java Examples & Code Snippets

Happy coding everyone!
Cheers,
The Java Code Geeks team...

Musing on mis-usings: ‘Powerful use, Damaging misuse’

There’s an old phrase attributed to the former British Prime Minister Benjamin Disraeli which states there are three types of lies: “lies, damn lies and statistics”. The insinuation is that statistics are so easy to make up that they are unreliable. However, statistics are used extensively in empirical science, so surely they have some merit? In fact, they have a lot of merit. But only when they are used correctly. The problem is that they are easy to misuse. And when misused, misinformation happens, which in turn does more harm than good.

There are strong parallels to this narrative in the world of software engineering. Object-oriented languages introduced the notion of inheritance, a clever idea to promote code reuse. However, inheritance – when misused – can easily lead to complex hierarchies and can make it difficult to change objects. The misuse of inheritance can wreak havoc, and since all it takes to use inheritance (in Java) is to be able to spell the word “extends”, it’s very easy to wreak such havoc if you don’t know what you are doing.

A similar story can be told with polymorphism and with design patterns. We all know the case of someone hell bent on using a pattern and thinking more about the pattern than the problem they are trying to solve. Even if they understand the difference between a Bridge and an Adapter, it is still quite possible that some part of the architecture may be over-engineered. Perhaps it’s worth bearing in mind that every single one of the GoF design patterns is already in the JDK, so if you really want one in your architecture you don’t have to look very far – otherwise, only use a pattern when it makes sense to use it.

This ‘Powerful use, damaging misuse’ anti-pattern is ubiquitous in Java systems. Servlet Filters are a very handy feature for manipulating requests and responses, but that’s all they are meant to do.
There is nothing in the language to stop a developer from treating a Filter as a classical object, adding public APIs and business logic to it. Of course the Filter is never meant to be used this way, and when it is, trouble inevitably happens. But the key point is that it’s easy for a developer to take such a powerful feature, misuse it and damage architectures. ‘Powerful use, damaging misuse’ happens very easily with Aspects, even with Exceptions (we have all seen cases where exceptions were thrown when it would have made more sense to just return a boolean) and with many other features. When it is so easy to make mistakes, inevitably they will happen. The Java compiler isn’t going to say – ‘wait a sec, do you really understand this concept?’ – and code style tools aren’t sophisticated enough to spot misuse of advanced concepts. In addition, no company has the time to get its most senior person to review every line of code. And even the most senior engineer will make mistakes.

Now, much of what has been written here is obvious and has already been well documented. Powerful features generally have to be well understood to be properly used. The question I think worth asking is whether there is any powerful feature or engineering concept in a Java-centric architecture which is not so easy to misuse. I suggest there is at least one, namely: encapsulation. Firstly, let’s consider what things would look like if encapsulation didn’t exist. Everything would be public or global (as in JavaScript). As soon as access scope narrows, encapsulation is happening, which is usually a good thing. Is it possible to make an architecture worse by encapsulating behaviour? Well, it’s damn hard to think of a case where it could. If you make a method private, it may seem harder to unit test. But is it really? It’s always easy to unit test the method which calls it, which will be in the same class and logical unit. There’s a lesson to be learnt here.
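That last point about private methods can be made concrete with a small sketch (the class and method names below are invented for illustration): the private helper never needs its own test, because any test of the public method that calls it exercises it too.

```java
// Hypothetical example: the private helper is fully covered by
// tests of the public method that calls it, so narrowing its
// access scope costs us nothing.
public class PriceCalculator {

    // Public API - the only thing callers (and tests) need to know about.
    public double finalPrice(double netPrice) {
        return netPrice + vatOn(netPrice);
    }

    // Private helper: not directly reachable from a test, but every
    // call to finalPrice() goes through it - same logical unit.
    private double vatOn(double netPrice) {
        return netPrice * 0.21;
    }

    public static void main(String[] args) {
        PriceCalculator calc = new PriceCalculator();
        System.out.println(calc.finalPrice(100.0)); // exercises vatOn() too
    }
}
```

A test asserting on finalPrice() pins down vatOn()'s behaviour just as well as a direct test would, without widening the method's visibility.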
As soon as you design anything which something else uses – whether it be a core component in your architecture, a utility library class or a REST API you are going to tell the world about – ask yourself:
- How easy is it for people to misuse this? Is it at the risky levels of inheritance or the safer levels of encapsulation?
- What are the consequences of misuse?
- And what can you do to minimise misuse and its consequences?

Aim to increase ‘powerful use’ and minimise ‘damaging misuse’!

Reference: Musing on mis-usings: ‘Powerful use, Damaging misuse’ from our JCG partner Alex Staveley at the Dublin’s Tech Blog blog.

Related Articles:
- Java 7 Feature Overview
- Java SE 7, 8, 9 – Moving Java Forward
- Recycling objects to improve performance
- Java Secret: Loading and unloading static fields
- Java Best Practices Series
- Laws of Software Design...

Multitenancy in Google AppEngine (GAE)

Multitenancy is a topic that has been discussed for many years, and there are many excellent references readily available, so I will just present a brief introduction. Multitenancy is a software architecture where a single instance of the software runs on a server, serving multiple client organizations (tenants). With a multitenant architecture, an application can be designed to virtually partition its data and configuration (business logic), and each client organization works with a customized virtual application instance. It suits SaaS (Software as a Service) cloud computing very well; however, it can be very complex to implement. The architect must be aware of security, access control, etc. Multitenancy can exist in several different flavors:

Multitenancy in Deployment:
- Fully isolated business logic (dedicated server, customized business process)
- Virtualized application servers (dedicated application server, single VM per app server)
- Shared virtual servers (dedicated application server on a shared VM)
- Shared application servers (threads and sessions)

Multitenancy and Data:
- Dedicated physical server (DB resides on isolated physical hosts)
- Shared virtualized host (separate DBs on virtual machines)
- Database on a shared host (separate DB on the same physical host)
- Dedicated schema within a shared database (same DB, dedicated schema/table)
- Shared tables (same DB and schema, segregated by keys – rows)

Before jumping into the APIs, it is important to understand how Google’s internal data storage solution works. Introducing Google’s BigTable technology: it is the storage solution for Google’s own applications such as Search, Google Analytics, GMail, AppEngine, etc.

BigTable is NOT:
- a database
- a horizontally sharded data store
- a distributed hash table

It IS: a sparse, distributed, persistent, multidimensional sorted map. In basic terms, it is a hash of hashes (a map of maps, or a dict of dicts).
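That “map of maps” description can be made concrete with a toy sketch. This is purely an illustration of the data shape (row → column → timestamp → value), not the real BigTable API; the class and key format are invented:

```java
import java.util.NavigableMap;
import java.util.TreeMap;

// Toy model of the BigTable data shape: a sparse, sorted,
// multidimensional map of row -> column -> timestamp -> value.
// Illustration only - not the real BigTable API.
public class ToyBigTable {
    private final NavigableMap<String, NavigableMap<String, NavigableMap<Long, String>>> rows =
            new TreeMap<>();

    public void put(String rowKey, String column, long timestamp, String value) {
        rows.computeIfAbsent(rowKey, r -> new TreeMap<>())
            .computeIfAbsent(column, c -> new TreeMap<>())
            .put(timestamp, value);
    }

    // Returns the newest version of a cell, mirroring timestamped storage.
    public String get(String rowKey, String column) {
        NavigableMap<Long, String> versions =
                rows.getOrDefault(rowKey, new TreeMap<>())
                    .getOrDefault(column, new TreeMap<>());
        return versions.isEmpty() ? null : versions.lastEntry().getValue();
    }

    public static void main(String[] args) {
        ToyBigTable table = new ToyBigTable();
        table.put("app1|User|42", "name", 1L, "Alice");
        table.put("app1|User|42", "name", 2L, "Alicia"); // newer version
        System.out.println(table.get("app1|User|42", "name")); // prints Alicia
    }
}
```

The sorted outer map also hints at why records in one entity group can be scanned contiguously: rows with a common key prefix sit next to each other.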
AppEngine data is in one “table” distributed across multiple computers. Every entity has a Key by which it is uniquely identified (Parent + Child + ID), but there is also metadata that tells which GAE application (appId) an entity belongs to. BigTable distributes its data in a format called tablets, which are basically slices of the data. These tablets live on different servers in the cloud. To index into a specific record (record and entity mean pretty much the same thing) you use a string of up to 64KB, called a Key. This key has information about the specific row and column value you want to read from. It also contains a timestamp to allow multiple versions of your data to be stored. In addition, records for a specific entity group are located contiguously, which facilitates scanning for records.

Now we can dive into how Google implements multitenancy. Implemented in release 1.3.6 of App Engine, the Namespace API (see resources) is designed to be very customizable, with hooks into your code that you can control, so you can set up multitenancy tailored to your application’s needs. The API works with all of the relevant App Engine APIs (Datastore, Memcache, Blobstore, and Task Queues). In GAE terms, namespace == tenant. At the storage level of the datastore, a namespace is just like an app-id: each namespace essentially looks to the datastore like another view into the application’s data. Hence, queries cannot span namespaces (at least for now) and key ranges are different per namespace. Once an entity is created, its namespace does not change, so doing a namespace_manager.set(…) will have no effect on its key. Similarly, once a query is created, its namespace is set. The same goes for memcache_service() and all other GAE APIs. Hence it’s important to know which objects have which namespaces.
In my mind, since all of a GAE user’s data lives in BigTable, it helps to visualize a GAE Key object as:

    Application ID | Ancestor Keys | Kind Name | Key Name or ID

All these values provide an address to locate your application’s data. Similarly, you can imagine the multitenant key as:

    Application ID | Namespace | Ancestor Keys | Kind Name | Key Name or ID

Now let’s briefly discuss the API (Python):
- get_namespace(): takes no arguments. Returns the current namespace, or an empty string if the namespace is unset.
- set_namespace(namespace): sets the namespace for the current HTTP request. A value of None unsets the default namespace; otherwise the value must match ([0-9A-Za-z._-]{0,100}).
- validate_namespace(value, exception=BadValueError): raises BadValueError if the namespace string being evaluated does not match ([0-9A-Za-z._-]{0,100}).

Here is a quick datastore example:

    tid = getTenant()
    namespace = namespace_manager.get_namespace()
    try:
        namespace_manager.set_namespace('tenant-' + str(tid))
        # Any datastore operations done here
        user = User('Luis', 'Atencio')
        user.put()
    finally:
        # Restore the saved namespace
        namespace_manager.set_namespace(namespace)

The important thing to notice here is the pattern that GAE provides; it is exactly the same for the Java APIs. The finally block is immensely important, as it restores the namespace to what it was originally (before the request). Omitting the finally block will cause the namespace to remain set for the duration of the request. That means that any API access, whether it is a datastore query or a Memcache retrieval, will use the namespace previously set.
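The Java flavour of this API lives in com.google.appengine.api.NamespaceManager. Outside the SDK, the same save/set/finally-restore discipline can be sketched with a ThreadLocal stand-in; the class below is an invented illustration of the pattern, not the GAE API itself:

```java
// Stand-in for GAE's NamespaceManager, illustrating the
// save/set/finally-restore pattern from the Python example above.
// Invented for illustration - not the real GAE class.
public class FakeNamespaceManager {
    private static final ThreadLocal<String> current =
            ThreadLocal.withInitial(() -> "");

    public static String get() { return current.get(); }
    public static void set(String ns) { current.set(ns); }

    public static void main(String[] args) {
        int tid = 7; // hypothetical tenant id
        String saved = FakeNamespaceManager.get();
        try {
            FakeNamespaceManager.set("tenant-" + tid);
            // ... datastore/memcache operations would run here,
            // all implicitly scoped to "tenant-7" ...
            System.out.println(FakeNamespaceManager.get()); // prints tenant-7
        } finally {
            // Restore the saved namespace; omitting this "leaks" the
            // namespace into the rest of the request, as described above.
            FakeNamespaceManager.set(saved);
        }
        System.out.println(FakeNamespaceManager.get().isEmpty()); // prints true
    }
}
```

The try/finally shape is the whole point: every API call between set and restore picks up the tenant scope implicitly, which is exactly why forgetting the restore is so dangerous.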
Furthermore, to query for all the namespaces created, GAE provides some metadata queries:

    from google.appengine.ext.db.metadata import Namespace

    q = Namespace.all()
    if start_ns:
        q.filter('__key__ >=', Namespace.key_for_namespace(start_ns))
    if end_ns:
        q.filter('__key__ <=', Namespace.key_for_namespace(end_ns))
    results = q.fetch(limit)
    # Reduce the namespace objects into a list of namespace names
    tenants = map(lambda ns: ns.namespace_name, results)
    return tenants

Resources:
- http://www.youtube.com/watch?v=zRwPSFpLX8I&feature=related
- BigTable: https://docs.google.com/viewer?url=http%3A%2F%2Flabs.google.com%2Fpapers%2Fbigtable-osdi06.pdf
- http://www.youtube.com/watch?v=tx5gdoNpcZM
- http://msdn.microsoft.com/en-us/library/aa479086.aspx#mlttntda_topic2
- http://www.slideshare.net/pnicolas/cloudmultitenancy
- http://code.google.com/appengine/articles/life_of_write.html

Reference: Multitenancy in Google AppEngine (GAE) from our JCG partner Luis Atencio at the Reflective Thought blog.

Related Articles:
- Google App Engine Java Capabilities and Namespaces API
- Google App Engine: Host application in your own domain
- App Engine Java Development with Netbeans
- Android App Engine Integration...

Spring Pitfalls: Transactional tests considered harmful

One of Spring’s killer features is in-container integration testing. While EJB lacked this functionality for many years (Java EE 6 finally addresses this, however I haven’t, ekhem, tested it), Spring from the very beginning allowed you to test the full stack, starting from the web tier, through services, all the way down to the database.

The database is the problematic part. First you need to use an in-memory, self-contained database like H2 to decouple your tests from an external database. Spring helps with this to a great degree, especially now with profiles and embedded database support. The second problem is more subtle. While a typical Spring application is almost completely stateless (for better or worse), the database is inherently stateful. This complicates integration testing, since the very first principle of writing tests is that they should be independent of each other and repeatable. If one test writes something to the database, another test may fail; also, the same test may fail on a subsequent call due to database changes.

Obviously Spring handles this problem as well, with a very neat trick: prior to running every test, Spring starts a new transaction. The whole test (including its setup and tear down) runs within the same transaction, which is… rolled back at the end. This means all the changes made during the test are visible in the database just as if they were persisted. However, the rollback after every test wipes out all the changes and the next test works on a clean and fresh database. Brilliant!

Unfortunately, this is not yet another article about the advantages of Spring integration testing. I think I have written hundreds if not thousands of such tests, and I truly appreciate the transparent support the Spring framework gives. But I have also come across numerous quirks and inconsistencies introduced by this comfortable feature.
To make matters worse, very often these so-called transactional tests are actually hiding errors, convincing the developer that the software works while it fails after deployment! Here is a non-exhaustive but eye-opening collection of issues:

    @Test
    public void shouldThrowLazyInitializationExceptionWhenFetchingLazyOneToManyRelationship() throws Exception {
        //given
        final Book someBook = findAnyExistingBook();

        //when
        try {
            someBook.reviews().size();
            fail();
        } catch (LazyInitializationException e) {
            //then
        }
    }

This is a known issue with Hibernate and Spring integration testing. Book is a database entity with a one-to-many, lazy by default, relationship to Reviews. findAnyExistingBook() simply reads a test book from a transactional service. Now a bit of theory: as long as an entity is bound to a session (an EntityManager if using JPA), it can lazily and transparently load relationships. In our case that means: as long as it is within the scope of a transaction. The moment an entity leaves a transaction, it becomes detached. At this lifecycle stage an entity is no longer attached to a session/EntityManager (which has been committed and closed already), and any attempt to fetch lazy properties throws the dreadful LazyInitializationException. This behaviour is actually standardized in JPA (except the exception class itself, which is vendor specific).

In our case we are calling .reviews() (a Scala-style “getter”; we will translate our test case to ScalaTest soon as well) and expecting to see the Hibernate exception. However, the exception is not thrown and the application keeps going. That’s because the whole test is running within a transaction and the Book entity never gets out of transactional scope. Lazy loading always works in Spring integration tests. To be fair, we will never see tests like this in real life (unless you are testing to make sure that a given collection is lazy – unlikely). In real life we are testing business logic which just works in tests.
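The attached/detached mechanics behind this can be sketched in plain Java. The classes below (ToySession, LazyReviews) are invented stand-ins, not Hibernate internals: a lazy collection can only populate itself while its “session” is open, and throws once the owner is “detached”:

```java
import java.util.Arrays;
import java.util.List;

// Toy illustration of the attached/detached lifecycle. Invented
// names - this is the idea behind lazy loading, not Hibernate's code.
public class LazyLoadingDemo {

    static class ToySession {
        private boolean open = true;
        void close() { open = false; }
        boolean isOpen() { return open; }
    }

    static class LazyReviews {
        private final ToySession session;
        private List<String> loaded; // null until first access

        LazyReviews(ToySession session) { this.session = session; }

        List<String> get() {
            if (loaded == null) {
                if (!session.isOpen()) {
                    // stands in for Hibernate's LazyInitializationException
                    throw new IllegalStateException("session closed - entity is detached");
                }
                loaded = Arrays.asList("Great!", "Meh"); // "fetch" from DB
            }
            return loaded;
        }
    }

    public static void main(String[] args) {
        ToySession session = new ToySession();
        LazyReviews reviews = new LazyReviews(session);
        session.close(); // the entity becomes "detached" before first access
        try {
            reviews.get().size();
        } catch (IllegalStateException e) {
            System.out.println("threw: " + e.getMessage());
        }
    }
}
```

In a transactional test the session stays open for the whole test method, so get() always succeeds and the exception path is never exercised – precisely the false sense of safety described above.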
However, after deploying we start experiencing LazyInitializationException. But we tested it! Not only did Spring integration testing support hide the problem, it also encourages the developer to throw in OpenSessionInViewFilter or OpenEntityManagerInViewFilter. In other words: our test not only didn’t discover a bug in our code, it also significantly worsened our overall architecture and performance. Not what I would expect. My typical workflow these days while implementing some end-to-end feature is to write back-end tests, implement the back-end including the REST API, and when everything runs smoothly, deploy it and proceed with the GUI. The latter is written entirely in AJAX/JavaScript, so I only need to deploy once and replace cheap client-side files often. At this stage I don’t want to go back to the server to fix undiscovered bugs.

Suppressing LazyInitializationException is among the best known problems with Spring integration testing. But this is just the tip of the iceberg. Here is a bit more complex example (it uses JPA again, but this problem manifests itself with plain JDBC and any other persistence technology as well):

    @Test
    public void externalThreadShouldSeeChangesMadeInMainThread() throws Exception {
        //given
        final Book someBook = findAnyExistingBook();
        someBook.setAuthor("Bruce Wayne");
        bookService.save(someBook);

        //when
        final Future<Book> future = executorService.submit(new Callable<Book>() {
            @Override
            public Book call() throws Exception {
                return bookService.findBy(someBook.id()).get();
            }
        });

        //then
        assertThat(future.get().getAuthor()).isEqualTo("Bruce Wayne");
    }

In the first step we are loading some book from the database, modifying the author and saving the entity afterwards. Then we load the same entity by id in another thread. The entity is already saved, so it is guaranteed that the thread should see the changes. This is not the case, however, which is proved by the failing assertion in the last step. What happened?
We have just observed the “I” in the ACID transaction properties. Changes made by the test thread are not visible to other threads/connections until the transaction is committed. But we know the test transaction never commits! This small showcase demonstrates how hard it is to write multi-threaded integration tests with transactional support. I learnt this the hard way a few weeks ago when I wanted to integration-test the Quartz scheduler with JDBCJobStore enabled. No matter how hard I tried, the jobs were never fired. It turned out that I was scheduling a job in a Spring-managed test within the scope of a Spring transaction. Because the transaction was never committed, the external scheduler and worker threads couldn’t see the new job record in the database. And how many hours have you spent debugging such issues?

Talking about debugging, the same problem pops up when troubleshooting database-related test failures. I can add this simple H2 web console (browse to localhost:8082) bean to my test configuration:

    @Bean(initMethod = "start", destroyMethod = "stop")
    def h2WebServer() = Server.createWebServer("-webDaemon", "-webAllowOthers")

But I will never see changes made by my test while stepping through it. I cannot run a query manually to find out why it returns wrong results. Also, I cannot modify the data on-the-fly for a faster turn-around while troubleshooting. My database lives in a different dimension. Please read the next test carefully, it’s not long:

    @Test
    public void shouldNotSaveAndLoadChangesMadeToNotManagedEntity() throws Exception {
        //given
        final Book unManagedBook = findAnyExistingBook();
        unManagedBook.setAuthor("Clark Kent");

        //when
        final Book loadedBook = bookService.findBy(unManagedBook.id()).get();

        //then
        assertThat(loadedBook.getAuthor()).isNotEqualTo("Clark Kent");
    }

We are loading a book and modifying its author without explicitly persisting it. Then we are loading it again from the database and making sure that the change was not persisted.
Guess what, somehow we have updated the object! If you are an experienced JPA/Hibernate user, you know exactly how that could happen. Remember when I was describing attached/detached entities above? While an entity is still attached to the underlying EntityManager/session, it has other powers as well. The JPA provider is obligated to track changes made to such entities and automatically propagate them to the database before the entity becomes detached (so-called dirty checking). This means that the idiomatic way to work with JPA entity modifications is to load an object from the database, perform the necessary changes using setters and… that’s it. When the changes are flushed, JPA will discover the entity was modified and issue an UPDATE for you. No merge()/update() needed; a cute object abstraction.

This works as long as the entity is managed. Changes made to a detached entity are silently ignored because the JPA provider knows nothing about such entities. Now the best part – you almost never know whether your entity is attached or not, because transaction management is transparent and almost invisible. This means that it is way too easy to modify a POJO instance only in memory while still believing that the changes are persistent – and vice versa! Can we test it? Of course, we just did – and failed. In our test above the transaction spans the whole test method, so every entity is managed. Also, due to the Hibernate L1 cache, we get the exact same book instance back, even though no database update has yet been issued. This is another case where transactional tests hide problems rather than revealing them (see the LazyInitializationException example). Changes are propagated to the database as expected in the test, but silently ignored after deployment…

BTW, did I mention that all the tests so far pass once you get rid of the @Transactional annotation over the test case class? Have a look, the sources as always are available. This one is exciting.
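Before getting to it, the dirty-checking mechanism just described can be made concrete with a toy “unit of work” (invented names, not Hibernate’s implementation): the context snapshots each entity it loads and diffs the snapshots at flush time, so an object it never loaded is invisible to it:

```java
import java.util.IdentityHashMap;
import java.util.Map;

// Toy unit-of-work illustrating dirty checking: only entities the
// context loaded are diffed against their snapshot at flush time.
// Invented sketch - not the real Hibernate mechanism.
public class ToyPersistenceContext {

    static class Book {
        String author;
        Book(String author) { this.author = author; }
    }

    // snapshot of field values taken when the entity became managed
    private final Map<Book, String> snapshots = new IdentityHashMap<>();

    // "loading" attaches the entity and records its snapshot
    Book load(String author) {
        Book book = new Book(author);
        snapshots.put(book, book.author);
        return book;
    }

    // flush: diff managed entities against their snapshots;
    // each difference would become an automatic UPDATE
    Map<Book, String> flush() {
        Map<Book, String> updates = new IdentityHashMap<>();
        snapshots.forEach((book, old) -> {
            if (!book.author.equals(old)) {
                updates.put(book, book.author);
            }
        });
        return updates;
    }

    public static void main(String[] args) {
        ToyPersistenceContext ctx = new ToyPersistenceContext();
        Book managed = ctx.load("Bruce Wayne");
        managed.author = "Clark Kent";          // setter only, no save()
        System.out.println(ctx.flush().size()); // prints 1 - automatic UPDATE

        Book detached = new Book("Lois Lane");  // never registered
        detached.author = "Lex Luthor";         // silently ignored at flush
    }
}
```

The same setter call produces an UPDATE or nothing at all depending purely on whether the context knows the instance – which is exactly why it is so dangerous that attachment is invisible in the calling code.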
I have a transactional deleteAndThrow(book) business method that deletes the given book and throws an OppsException. Here is my test that passes, “proving” the code is correct:

    @Test
    public void shouldDeleteEntityAndThrowAnException() throws Exception {
        //given
        final Book someBook = findAnyExistingBook();

        try {
            //when
            bookService.deleteAndThrow(someBook);
            fail();
        } catch (OppsException e) {
            //then
            final Option<Book> deletedBook = bookService.findBy(someBook.id());
            assertThat(deletedBook.isEmpty()).isTrue();
        }
    }

Scala’s Option<Book> is returned (have you noticed how nicely Java code interacts with services and entities written in Scala?) instead of null. deletedBook.isEmpty() yielding true means that no result was found. So it seems like our code is correct: the entity was deleted and the exception thrown. Yes, you are correct, it fails silently after deployment again! This time the Hibernate L1 cache knows that this particular instance of the book was deleted, so it returns null even before flushing the changes to the database. However, the OppsException thrown from the service rolls back the transaction, discarding the DELETE! But the test passes, only because Spring manages this tiny extra transaction and the assertion happens within that transaction. Milliseconds later the transaction rolls back, resurrecting the deleted entity. Obviously the solution was to add a noRollbackFor attribute for OppsException (this is an actual bug I found in my code after dropping transactional tests in favour of the other solution, which is yet to be explained). But that is not the point. The point is – can you really afford to write and maintain tests that generate false positives, convincing you that your application is working when it’s not? Oh, and did I mention that transactional tests are actually leaking here and there and won’t prevent you from modifying the test database? The second test below fails – can you see why?
    @Test
    public void shouldStoreReviewInSecondThread() throws Exception {
        final Book someBook = findAnyExistingBook();
        executorService.submit(new Callable<Review>() {
            @Override
            public Review call() throws Exception {
                return reviewDao.save(new Review("Unicorn", "Excellent!!!1!", someBook));
            }
        }).get();
    }

    @Test
    public void shouldNotSeeReviewStoredInPreviousTest() throws Exception {
        //given

        //when
        final Iterable<Review> reviews = reviewDao.findAll();

        //then
        assertThat(reviews).isEmpty();
    }

Once again threading gets in the way. It gets even more interesting when you try to clean up after an external transaction in a background thread that obviously was committed. The natural place would be to delete the created Review in an @After method. But @After is executed within the same test transaction, so the clean up will be… rolled back.

Of course I am not here to complain and grumble about my favourite application stack’s weaknesses. I am here to give solutions and hints. Our goal is to get rid of transactional tests altogether and only depend on application transactions. This will help us avoid all the aforementioned problems. Obviously we cannot drop the test independence and repeatability features: each test has to work on the same database to be reliable. First, we will translate the JUnit test to ScalaTest. In order to have Spring dependency-injection support, we need this tiny trait:

    trait SpringRule extends Suite with BeforeAndAfterAll { this: AbstractSuite =>
      override protected def beforeAll() {
        new TestContextManager(this.getClass).prepareTestInstance(this)
        super.beforeAll();
      }
    }

Now it’s about time to reveal my idea (if you are impatient, the full source code is here). It’s far from ingenious or unique, but I think it deserves some attention. Instead of running everything in one huge transaction and rolling it back, just let the tested code start and commit transactions wherever and whenever it needs to, as configured.
This means that the data is actually written to the database and persistence works exactly the same as it would after deployment. Where’s the catch? We must somehow clean up the mess after each test… It turns out it’s not that complicated. Just take a dump of a clean database and import it after each test! The dump contains all the tables, constraints and records present right after deployment and application start-up, but before the first test run. It’s like taking a backup and restoring from it! Look how simple it is with H2:

    trait DbResetRule extends Suite with BeforeAndAfterEach with BeforeAndAfterAll { this: SpringRule =>
      @Resource val dataSource: DataSource = null

      val dbScriptFile = File.createTempFile(classOf[DbResetRule].getSimpleName + "-", ".sql")

      override protected def beforeAll() {
        new JdbcTemplate(dataSource).execute("SCRIPT NOPASSWORDS DROP TO '" + dbScriptFile.getPath + "'")
        dbScriptFile.deleteOnExit()
        super.beforeAll()
      }

      override protected def afterEach() {
        super.afterEach()
        new JdbcTemplate(dataSource).execute("RUNSCRIPT FROM '" + dbScriptFile.getPath + "'")
      }
    }

    trait DbResetSpringRule extends DbResetRule with SpringRule

The SQL dump (see the H2 SCRIPT command) is taken once and exported to a temporary file. Then the SQL script file is executed after each test. Believe it or not, that’s it! Our test is no longer transactional (so all the Hibernate and multi-threading corner cases are discovered and tested), while we haven’t sacrificed the ease of transactional-test setup (no extra clean up needed). Also, I can finally look at the database contents while debugging!
Here is one of the previous tests in action:

    @RunWith(classOf[JUnitRunner])
    @ContextConfiguration(classes = Array[Class[_]](classOf[SpringConfiguration]))
    class BookServiceTest extends FunSuite with ShouldMatchers with BeforeAndAfterAll with DbResetSpringRule {

      @Resource val bookService: BookService = null

      private def findAnyExistingBook() = bookService.listBooks(new PageRequest(0, 1)).getContent.head

      test("should delete entity and throw an exception") {
        val someBook = findAnyExistingBook()
        intercept[OppsException] {
          bookService.deleteAndThrow(someBook)
        }
        bookService findBy someBook.id should be (None)
      }
    }

Keep in mind that this is not a library/utility, but an idea. For your project you might choose a slightly different approach and tools, but the general idea still applies: let your code run in the exact same environment as after deployment and clean up the mess from a backup afterwards. You can achieve the exact same results with JUnit, HSQLDB or whatever you prefer. Of course you can add some clever optimizations as well – mark or discover tests that do not modify the database, choose faster dump/import approaches, etc.

To be honest, there are some downsides as well; here are a few off the top of my head:
- Performance: While it is not obvious that this approach is significantly slower than rolling back transactions all the time (some databases are particularly slow at rollbacks), it is safe to assume so. Of course in-memory databases might have some unexpected performance characteristics, but be prepared for a slowdown. However, I haven’t observed a huge difference (maybe around 10%) per 100 tests in a small project.
- Concurrency: You can no longer run tests concurrently. Changes made by one thread (test) are visible to others, making test execution unpredictable. This becomes even more painful with regard to the aforementioned performance problems.

That would be it. If you are interested, give this approach a chance.
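The dump-and-restore discipline generalises beyond H2. As a toy illustration (a plain in-memory map standing in for the database, instead of the SCRIPT/RUNSCRIPT commands above):

```java
import java.util.HashMap;
import java.util.Map;

// Toy version of the dump/restore discipline: snapshot the clean
// state once, let each "test" commit real changes, restore afterwards.
// The map stands in for the database - illustration only.
public class SnapshotRestore {
    public static void main(String[] args) {
        Map<Long, String> database = new HashMap<>();
        database.put(1L, "Existing book");       // state right after deployment

        // beforeAll: take the dump once (the SCRIPT ... TO step)
        Map<Long, String> dump = new HashMap<>(database);

        // a test commits real, visible changes - no wrapping transaction
        database.put(2L, "Review by Unicorn");
        System.out.println(database.size());     // prints 2

        // afterEach: restore from the dump (the RUNSCRIPT FROM step)
        database.clear();
        database.putAll(dump);
        System.out.println(database.size());     // prints 1 - clean again
    }
}
```

Changes are genuinely committed while the test runs (so other threads and external tools can see them), and repeatability comes from the restore step rather than from a rollback.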
It may take some time to adapt your existing test base, but discovering even one hidden bug is worth it, don’t you think? And also be aware of other Spring pitfalls.

Reference: Spring pitfalls: transactional tests considered harmful from our JCG partner Tomasz Nurkiewicz at the NoBlogDefFound Blog.

Related Articles:
- Spring Pitfalls: Proxying
- Spring Declarative Transactions Example
- The evolution of Spring dependency injection techniques
- Domain Driven Design with Spring and AspectJ
- Spring 3 Testing with JUnit 4 – ContextConfiguration and AbstractTransactionalJUnit4SpringContextTests
- Aspect Oriented Programming with Spring AOP
- Java Tutorials and Android Tutorials list...

Best Of The Week – 2011 – W49

Hello guys,

Time for the “Best Of The Week” links for the week that just passed. Here are some links that drew Java Code Geeks’ attention:

* Java 7: Project Coin in code examples: This article provides a short description of the new features included in Java 7 (Project Coin), accompanied with code examples on how to use them. Also check out Manipulating Files in Java 7 and Java 7 Feature Overview.

* Apache Geronimo 3 is Java EE 6 Full Profile Certified: Apache Geronimo 3.0-beta-1 is now fully Java EE 6 certified. Geronimo joins the ranks of GlassFish 3 as an open source server that has passed both the Java EE 6.0 Full Profile and Web Profile certification tests. Geronimo 3 (with an updated kernel based on OSGi technology) is available in 6 distributions and supports a bunch of Java EE 6 technologies.

* Android User Interface Design: Creating a Numeric Keypad with GridLayout: A nice tutorial on how to leverage GridLayout, the new UI layout introduced in Android 4.0 (aka Ice Cream Sandwich). GridLayout is more flexible than the TableLayout control, e.g. its cells can span rows, unlike with TableLayout. Its flexibility comes from the fact that it really helps to line up objects along the virtual grid lines created while building a view with GridLayout.

* Unit Testing in Java: A Sleeping Snail: This article shows how to deal with a “sleeping snail”, i.e. a test that’s sluggish and takes too long to run because it relies on Thread#sleep and arbitrarily long waits to allow threads to execute before continuing with the workflow under test. Using a CountDownLatch we are able to make the test thread immediately aware of when the worker threads have completed their work.

* Software Developers Hate Worthless Tasks: A nice article that states a pretty well known idea, that software developers loathe worthless, tedious tasks and that they lose productivity when faced with tasks they perceive as unimportant work.
The countermeasure for that is using some of the latest developments like convention over configuration, use of less boilerplate code or leveraging scripts for tedious tasks. * Google Guava – Futures: This tutorial explores the capabilities of Google Guava regarding Futures and asynchronous processing. Also check out Google Guava Libraries Essentials. * Yammer Moving from Scala to Java: This article provides some details on Yammer’s choice to move from Scala back to Java since, according to them, the friction and complexity that comes with using Scala instead of Java isn’t offset by enough productivity benefit. Some issues regarding Scala’s performance are also mentioned, along with examples that showcase performance in various scenarios. * Sleeping Under Your Desk Doesn’t Make You A Success: In this article the author argues that in order to be successful, for example in a startup, you don’t have to sacrifice your personal life. He also warns about the dangers of burning out. Very interesting read. * Stress/Load-Testing of Asynchronous HTTP/REST Services with JMeter: This tutorial explains how to perform load testing of asynchronous HTTP/REST based services with JMeter. The example includes uploading a file and then polling (thus the async nature) in order to retrieve the resource once it is available. * The Importance of Database Testing: Nice article stating the importance of DB testing and pointing out common mistakes like not testing at all, not testing the DB schema, testing without using the production engine, not testing the creation scripts, not testing foreign keys, default values and constraints, having colliding tests etc. 
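The CountDownLatch approach mentioned in the “sleeping snail” link above can be sketched as follows; the class and method names here are illustrative, not taken from the linked article:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class WorkerSync {

    /**
     * Starts the given number of worker threads and blocks until all of them
     * have finished, instead of guessing a Thread.sleep() duration.
     * Returns the number of completed units of work.
     */
    public static int runWorkers(int workers) throws InterruptedException {
        final CountDownLatch done = new CountDownLatch(workers);
        final AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < workers; i++) {
            new Thread(new Runnable() {
                public void run() {
                    completed.incrementAndGet(); // the "real work" would happen here
                    done.countDown();            // signal that this worker is finished
                }
            }).start();
        }
        // The test thread wakes up as soon as the last worker counts down,
        // with a timeout as a safety net instead of an arbitrary sleep.
        if (!done.await(5, TimeUnit.SECONDS)) {
            throw new IllegalStateException("workers did not finish in time");
        }
        return completed.get();
    }
}
```

The key point is that `await` returns the moment the last worker calls `countDown`, so the test never sleeps longer than the work actually takes.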
That’s all for this week. Stay tuned for more, here at Java Code Geeks. Cheers, Ilias Related Articles:Best Of The Week – 2011 – W48 Best Of The Week – 2011 – W47 Best Of The Week – 2011 – W46 Best Of The Week – 2011 – W45 Best Of The Week – 2011 – W44 Best Of The Week – 2011 – W43 Best Of The Week – 2011 – W42 Best Of The Week – 2011 – W41 Best Of The Week – 2011 – W40 Best Of The Week – 2011 – W39...

REST Service Discoverability with Spring, part 5

This is the fifth of a series of articles about setting up a secure RESTful Web Service using Spring 3.1 and Spring Security 3.1 with Java based configuration. The previous article introduced the concept of Discoverability for the RESTful service, HATEOAS and followed with some practical scenarios driven by tests. This article will focus on the actual implementation of discoverability and satisfying the HATEOAS constraint in the REST Service using Spring 3.1. Decouple Discoverability through events Discoverability as a separate aspect or concern of the web layer should be decoupled from the controller handling the HTTP request. In order to do so, the Controller will fire off events for all the actions that require additional manipulation of the HTTP response: @RequestMapping( value = "admin/foo/{id}", method = RequestMethod.GET ) @ResponseBody public Foo get( @PathVariable( "id" ) Long id, HttpServletRequest request, HttpServletResponse response ){ Foo resourceById = RestPreconditions.checkNotNull( this.service.getById( id ) ); this.eventPublisher.publishEvent( new SingleResourceRetrieved( this, request, response ) ); return resourceById; } @RequestMapping( value = "admin/foo", method = RequestMethod.POST ) @ResponseStatus( HttpStatus.CREATED ) public void create( @RequestBody Foo resource, HttpServletRequest request, HttpServletResponse response ){ RestPreconditions.checkNotNullFromRequest( resource ); Long idOfCreatedResource = this.service.create( resource ); this.eventPublisher.publishEvent( new ResourceCreated( this, request, response, idOfCreatedResource ) ); }These events can then be handled by any number of decoupled listeners, each focusing on its own particular case and each moving towards satisfying the overall HATEOAS constraint. Also, the listeners should be the last objects in the call stack and no direct access to them is necessary; as such they are not public. 
Make the URI of a newly created resource discoverable As discussed in the previous post, the operation of creating a new resource should return the URI of that resource in the Location HTTP header of the response: @Component class ResourceCreatedDiscoverabilityListener implements ApplicationListener< ResourceCreated >{ @Override public void onApplicationEvent( ResourceCreated resourceCreatedEvent ){ Preconditions.checkNotNull( resourceCreatedEvent ); HttpServletRequest request = resourceCreatedEvent.getRequest(); HttpServletResponse response = resourceCreatedEvent.getResponse(); long idOfNewResource = resourceCreatedEvent.getIdOfNewResource(); this.addLinkHeaderOnResourceCreation( request, response, idOfNewResource ); } void addLinkHeaderOnResourceCreation ( HttpServletRequest request, HttpServletResponse response, long idOfNewResource ){ String requestUrl = request.getRequestURL().toString(); URI uri = new UriTemplate( "{requestUrl}/{idOfNewResource}" ) .expand( requestUrl, idOfNewResource ); response.setHeader( HttpHeaders.LOCATION, uri.toASCIIString() ); } }Unfortunately, dealing with the low level request and response objects is inevitable even in Spring 3.1, because first class support for specifying the Location is still in the works. 
GET of a single resource Retrieving a single resource should allow the client to discover the URI to get all resources of that particular type: @Component class SingleResourceRetrievedDiscoverabilityListener implements ApplicationListener< SingleResourceRetrieved >{ @Override public void onApplicationEvent( SingleResourceRetrieved resourceRetrievedEvent ){ Preconditions.checkNotNull( resourceRetrievedEvent ); HttpServletRequest request = resourceRetrievedEvent.getRequest(); HttpServletResponse response = resourceRetrievedEvent.getResponse(); this.addLinkHeaderOnSingleResourceRetrieval( request, response ); } void addLinkHeaderOnSingleResourceRetrieval ( HttpServletRequest request, HttpServletResponse response ){ StringBuffer requestURL = request.getRequestURL(); int positionOfLastSlash = requestURL.lastIndexOf( "/" ); String uriForResourceCreation = requestURL.substring( 0, positionOfLastSlash ); String linkHeaderValue = RESTURLUtil.createLinkHeader( uriForResourceCreation, "collection" ); response.addHeader( LINK_HEADER, linkHeaderValue ); } }Note that the semantics of the link relation make use of the “collection” relation type, specified and used in several microformats, but not yet standardized. The Link header is one of the most used HTTP headers for the purposes of discoverability. Because of this, some simple utilities are needed to ease the creation of its values on the server and to avoid introducing a third party library. Discoverability at the root The root is the entry point in the RESTful web service – it is what the client comes into contact with when consuming the API for the first time. If the HATEOAS constraint is to be considered and implemented throughout, then this is the place to start. The fact that most of the main URIs of the system have to be discoverable from the root shouldn’t come as much of a surprise by this point. 
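The Link-header building utility used in the listings is not shown in the article; a minimal sketch of such a helper, with the class name assumed, could look like this:

```java
public final class LinkHeaderUtil {

    private LinkHeaderUtil() {
        // utility class, no instances
    }

    /**
     * Builds a single RFC 5988 style Link header value,
     * e.g. <http://host/admin/foo>; rel="collection"
     */
    public static String createLinkHeader(String uri, String rel) {
        return "<" + uri + ">; rel=\"" + rel + "\"";
    }
}
```

On the server side this would be used as `response.addHeader( "Link", LinkHeaderUtil.createLinkHeader( uri, "collection" ) )`, keeping the header format in one place.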
This is a sample controller method to provide discoverability at the root: @RequestMapping( value = "admin", method = RequestMethod.GET ) @ResponseStatus( value = HttpStatus.NO_CONTENT ) public void adminRoot( HttpServletRequest request, final HttpServletResponse response ){ String rootUri = request.getRequestURL().toString(); URI fooUri = new UriTemplate( "{rootUri}/{resource}" ).expand( rootUri, "foo" ); String linkToFoo = RESTURIUtil.createLinkHeader( fooUri.toASCIIString(), REL_COLLECTION ); response.addHeader( HttpConstants.LINK_HEADER, linkToFoo ); }This is of course an illustration of the concept, to be read in the context of the proof of concept RESTful service of the series. In a more complex system there would be many more links, each with its own semantics defined by the type of link relation. Discoverability is not about changing URIs One of the more common pitfalls related to discoverability is the misunderstanding that, since the URIs are now discoverable, they can be subject to change. This is however simply not the case, and for good reason: first, this is not how the web works – clients will bookmark the URIs and will expect them to work in the future. Second, the client shouldn’t have to navigate through the API to get to a certain state that could have been reached directly. Instead, all URIs of the RESTful web service should be considered cool URIs, and cool URIs don’t change. Instead, versioning of the API can be used to solve the problem of a URI reorganization. Caveats of Discoverability As some of the discussions around the previous articles state, the first goal of discoverability is to make minimal or no use of documentation and have the client learn and understand how to use the API via the responses it gets. In fact, this shouldn’t be regarded as such a far fetched ideal – it is how we consume every new web page – without any documentation. 
So, if the concept is more problematic in the context of REST, then it must be a matter of technical implementation, not a question of whether or not it’s possible. That being said, technically, we are still far from a fully working solution – the specification and framework support are still evolving, and because of that, some compromises may have to be made; these are nevertheless compromises and should be regarded as such. Conclusion This article covered the implementation of some of the traits of discoverability in the context of a RESTful Service with Spring MVC and touched on the concept of discoverability at the root. In the next articles I will focus on custom link relations and the Atom Publishing Protocol. In the meantime, check out the github project. Reference: REST Service Discoverability with Spring, part 5 from our JCG partner Eugen Paraschiv at the baeldung blog. Related Articles :Bootstrapping a web application with Spring 3.1 and Java based Configuration, part 1 Building a RESTful Web Service with Spring 3.1 and Java based Configuration, part 2 Securing a RESTful Web Service with Spring Security 3.1, part 3 RESTful Web Service Discoverability, part 4 Basic and Digest authentication for a RESTful Service with Spring Security 3.1, part 6 Spring & Quartz Integration with Custom Annotation Spring MVC Interceptors Example Swapping out Spring Bean Configuration at Runtime...

RESTful Web Service Discoverability, part 4

This is the fourth of a series of articles about setting up a secure RESTful Web Service using Spring 3.1 and Spring Security 3.1 with Java based configuration. The article will focus on Discoverability of the REST API, HATEOAS and practical scenarios driven by tests. Introducing REST Discoverability Discoverability of an API is a topic that doesn’t get the attention it deserves, and as a consequence very few APIs get it right. It is also something that, if done right, can make the API not only RESTful and usable but also elegant. To understand discoverability, one needs to understand the constraint that is Hypermedia As The Engine Of Application State (HATEOAS); this constraint of a RESTful API is about full discoverability of actions/transitions on a Resource from Hypermedia (Hypertext really), as the only driver of application state. If interaction is to be driven by the API through the conversation itself, concretely via Hypertext, then there can be no documentation, as that would coerce the client to make assumptions that are in fact outside of the context of the API. Also, continuing this logical train of thought, the only way an API can indeed be considered RESTful is if it is fully discoverable from the root and with no prior knowledge – meaning the client should be able to navigate the API by doing a GET on the root. Moving forward, all state changes are driven by the client using the available and discoverable transitions that the REST API provides in representations (hence Representational State Transfer). In conclusion, the server should be descriptive enough to instruct the client how to use the API via Hypertext only, which, in the case of a HTTP conversation, may be the Link header. Concrete Discoverability Scenarios (Driven by tests) So what does it mean for a REST service to be discoverable? Throughout this section, we will test individual traits of discoverability using JUnit, rest-assured and Hamcrest. 
Since the REST Service has been secured in part 3 of the series, each test needs to authenticate before consuming the API. Some utilities for parsing the Link header of the response are also necessary. Discover the valid HTTP methods When a RESTful Web Service is consumed with an invalid HTTP method, the response should be a 405 METHOD NOT ALLOWED; in addition, it should also help the client discover the valid HTTP methods that are allowed for that particular Resource, using the Allow HTTP Header in the response: @Test public void whenInvalidPOSTIsSentToValidURIOfResource_thenAllowHeaderListsTheAllowedActions(){ // Given final String uriOfExistingResource = this.restTemplate.createResource(); // When Response res = this.givenAuthenticated().post( uriOfExistingResource ); // Then String allowHeader = res.getHeader( HttpHeaders.ALLOW ); assertThat( allowHeader, AnyOf.<String> anyOf( containsString("GET"), containsString("PUT"), containsString("DELETE") ) ); }Discover the URI of newly created Resource The operation of creating a new Resource should always include the URI of the newly created resource in the response, using the Location HTTP Header. 
If the client does a GET on that URI, the resource should be available: @Test public void whenResourceIsCreated_thenURIOfTheNewlyCreatedResourceIsDiscoverable(){ // When Foo unpersistedResource = new Foo( randomAlphabetic( 6 ) ); Response createResponse = this.givenAuthenticated().contentType( MIME_JSON ) .body( unpersistedResource ).post( this.paths.getFooURL() ); final String uriOfNewlyCreatedResource = createResponse .getHeader( HttpHeaders.LOCATION ); // Then Response response = this.givenAuthenticated() .header( HttpHeaders.ACCEPT, MIME_JSON ).get( uriOfNewlyCreatedResource ); Foo resourceFromServer = response.body().as( Foo.class ); assertThat( unpersistedResource, equalTo( resourceFromServer ) ); }The test follows a simple scenario: a new Foo resource is created and the HTTP response is used to discover the URI where the Resource is now accessible. The test then goes one step further and does a GET on that URI to retrieve the resource and compares it to the original, to make sure that it has been correctly persisted. Discover the URI to GET All Resources of that type When we GET a particular Foo instance, we should be able to discover what we can do next: we can list all the available Foo resources. 
Thus, the operation of retrieving a resource should always include in its response the URI from which to get all the resources of that type, again making use of the Link header: @Test public void whenResourceIsRetrieved_thenURIToGetAllResourcesIsDiscoverable(){ // Given String uriOfExistingResource = this.restTemplate.createResource(); // When Response getResponse = this.givenAuthenticated().get( uriOfExistingResource ); // Then String uriToAllResources = HTTPLinkHeaderUtils.extractURIByRel ( getResponse.getHeader( "Link" ), "collection" ); Response getAllResponse = this.givenAuthenticated().get( uriToAllResources ); assertThat( getAllResponse.getStatusCode(), is( 200 ) ); }The test tackles the thorny subject of Link Relations in REST: the URI to retrieve all resources uses the rel=”collection” semantics. This type of link relation has not yet been standardized, but is already in use by several microformats and proposed for standardization. Usage of non-standard link relations opens up the discussion about microformats and richer semantics in RESTful web services. Other potential discoverable URIs and microformats Other URIs could potentially be discovered via the Link header, but there is only so much the existing types of link relations allow without moving to a richer semantic markup such as defining custom link relations, the Atom Publishing Protocol or microformats, which will be the topic of another article. For example, it would be good if the client could discover the URI to create new resources when doing a GET on a specific resource; unfortunately there is no link relation to model create semantics. Luckily it is standard practice that the URI for creation is the same as the URI to GET all resources of that type, with the only difference being the POST HTTP method. 
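The HTTPLinkHeaderUtils.extractURIByRel helper used in the test above is not listed in the article; a minimal sketch of such a parser, with the class name assumed and ignoring edge cases such as commas inside URIs, might be:

```java
public final class LinkHeaderParser {

    private LinkHeaderParser() {
        // utility class, no instances
    }

    /**
     * Extracts the URI with the given rel type from a Link header value such as
     * <http://host/foo>; rel="collection", <http://host/bar>; rel="next"
     * Returns null when no link with that rel is present.
     */
    public static String extractURIByRel(String linkHeader, String rel) {
        if (linkHeader == null) {
            return null;
        }
        for (String link : linkHeader.split(",")) {
            String[] parts = link.split(";");
            if (parts.length < 2) {
                continue; // malformed entry, skip it
            }
            String relPart = parts[1].trim();
            if (relPart.equals("rel=\"" + rel + "\"")) {
                String uriPart = parts[0].trim();
                return uriPart.substring(1, uriPart.length() - 1); // strip < and >
            }
        }
        return null;
    }
}
```

Splitting naively on commas and semicolons is good enough for a test utility against your own service; a production-grade parser would follow the full RFC 5988 grammar.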
Conclusion This article covered some of the traits of discoverability in the context of a RESTful web service, discussing HTTP method discovery, the relation between create and get, discovery of the URI to get all resources, etc. In the next articles I will focus on discovering the API starting with the root, pagination, custom link relations, the Atom Publishing Protocol and the actual implementation of Discoverability in the REST service with Spring. In the meantime, check out the github project. Reference: RESTful Web Service Discoverability, part 4 from our JCG partner Eugen Paraschiv at the baeldung blog. Related Articles :Bootstrapping a web application with Spring 3.1 and Java based Configuration, part 1 Building a RESTful Web Service with Spring 3.1 and Java based Configuration, part 2 Securing a RESTful Web Service with Spring Security 3.1, part 3 REST Service Discoverability with Spring, part 5 Basic and Digest authentication for a RESTful Service with Spring Security 3.1, part 6 Spring & Quartz Integration with Custom Annotation Spring MVC Interceptors Example Swapping out Spring Bean Configuration at Runtime...

Ignoring Self-Signed Certificates in Java

A problem that I’ve hit a few times in my career is that we sometimes want to allow self-signed certificates for development or testing purposes. A quick Google search shows the trouble that countless Java developers have run into over the years. Depending on the exact certificate issue, you may get an error like one of the following, though I’m almost positive there are other manifestations: java.security.cert.CertificateException: Untrusted Server Certificate Chain, or javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target. Getting around this often requires modifying JDK trust store files, which can be painful and often you end up running into trouble. On top of that, every developer on your team will have to do the same thing, and in every new environment you’ll have the same issues recurring. Fortunately, there is a way to deal with the problem in a generic way that won’t put any burden on your developers. We’re going to focus on vanilla HttpURLConnection type connections since it is the most general and should still help you understand the direction to take with other libraries. If you are using Apache HttpClient, see here. Warning: Know what you are doing! Be aware of what using this code means: it means you don’t care at all about host verification and are using SSL just to encrypt communications. You are not preventing man-in-the-middle attacks or verifying you are connected to the host you think you are. This generally comes down to a few valid cases: you are operating in a locked down LAN environment, you are not susceptible to having your requests intercepted by an attacker (or if you are, you have bigger issues), or you are in a test or development environment where securing communication isn’t important. If this matches your needs, then go ahead and proceed. 
Otherwise, maybe think twice about what you are trying to accomplish. Solution: Modifying Trust Managers Now that we’re past that disclaimer, we can solve the actual problem at hand. Java allows us to control the objects responsible for verifying a host and certificate for a HttpsURLConnection. This can be done globally but I’m sure those of you with experience will cringe at the thought of making such a sweeping change. Luckily we can also do it on a per-request basis, and since examples of this are hard to find on the web, I’ve provided the code below. This approach is nice since you don’t need to mess with swapping out SSLSocketFactory implementations globally. Feel free to grab it and use it in your project. package com.mycompany.http;import java.net.*; import javax.net.ssl.*; import java.security.*; import java.security.cert.*;public class TrustModifier { private static final TrustingHostnameVerifier TRUSTING_HOSTNAME_VERIFIER = new TrustingHostnameVerifier(); private static SSLSocketFactory factory;/** Call this with any HttpURLConnection, and it will modify the trust settings if it is an HTTPS connection. 
*/ public static void relaxHostChecking(HttpURLConnection conn) throws KeyManagementException, NoSuchAlgorithmException, KeyStoreException {if (conn instanceof HttpsURLConnection) { HttpsURLConnection httpsConnection = (HttpsURLConnection) conn; SSLSocketFactory factory = prepFactory(httpsConnection); httpsConnection.setSSLSocketFactory(factory); httpsConnection.setHostnameVerifier(TRUSTING_HOSTNAME_VERIFIER); } }static synchronized SSLSocketFactory prepFactory(HttpsURLConnection httpsConnection) throws NoSuchAlgorithmException, KeyStoreException, KeyManagementException {if (factory == null) { SSLContext ctx = SSLContext.getInstance("TLS"); ctx.init(null, new TrustManager[]{ new AlwaysTrustManager() }, null); factory = ctx.getSocketFactory(); } return factory; }private static final class TrustingHostnameVerifier implements HostnameVerifier { public boolean verify(String hostname, SSLSession session) { return true; } }private static class AlwaysTrustManager implements X509TrustManager { public void checkClientTrusted(X509Certificate[] arg0, String arg1) throws CertificateException { } public void checkServerTrusted(X509Certificate[] arg0, String arg1) throws CertificateException { } public X509Certificate[] getAcceptedIssuers() { return null; } }}Usage To use the above code, just call the relaxHostChecking() method before you open the stream: URL someUrl = ... // may be HTTPS or HTTP HttpURLConnection connection = (HttpURLConnection) someUrl.openConnection(); TrustModifier.relaxHostChecking(connection); // here's where the magic happens// Now do your work! // This connection will now live happily with expired or self-signed certificates connection.setDoOutput(true); OutputStream out = connection.getOutputStream(); ...There you have it, a complete example of a localized approach to supporting self-signed certificates. This does not affect the rest of your application which will continue to have strict host checking semantics. 
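One possible way to gate the relaxation behind a configuration setting is a simple system-property switch; the class and property names below are assumptions for illustration, not from the original article:

```java
public final class TrustConfig {

    // Hypothetical property name; pick whatever fits your project's conventions.
    private static final String RELAXED_CHECKING_PROPERTY = "dev.ssl.relaxedChecking";

    private TrustConfig() {
        // utility class, no instances
    }

    /** True only when the JVM was started with -Ddev.ssl.relaxedChecking=true. */
    public static boolean relaxedCheckingEnabled() {
        return Boolean.getBoolean(RELAXED_CHECKING_PROPERTY);
    }
}
```

The call from the listing above would then be guarded with `if (TrustConfig.relaxedCheckingEnabled()) { TrustModifier.relaxHostChecking(connection); }`, so production launches, which never set the property, keep strict checking by default.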
This example could be extended to use a configuration setting to determine whether relaxed host checking should be used, and I recommend you do so if using this code is primarily a way to facilitate development with self-signed certificates. Reference: Ignoring Self-Signed Certificates in Java from our JCG partners at Carfey Software Blog. Related Articles :Securing a RESTful Web Service with Spring Security 3.1 Securing GWT apps with Spring Security JBoss 4.2.x Spring 3 JPA Hibernate Tutorial Debugging a Production Server – Eclipse and JBoss showcase Java EE6 CDI, Named Components and Qualifiers Java Tutorials and Android Tutorials list...

Iterationless Development – the latest New New Thing

Thanks to the Lean Startup movement, Iterationless Development and Continuous Deployment have become the New New Thing in software development methods. Apparently this has gone so far that “there are venture firms in Silicon Valley that won’t even fund a company unless they employ Lean startup methodologies”. Although most of us don’t work in a Web 2.0 social media startup, or anything like one, it’s important to cut through the hype and see what we can learn from these ideas. One of the most comprehensive descriptions I’ve seen so far of Iterationless Development is a (good, but buzzword-heavy) presentation by Erik Huddleston that explains how development is done at Dachis Group, which builds online social communities. The development team’s backlog is updated on a just-in-time basis, and includes customer business requirements (defined as minimum features), feedback from Operations (data from analytics and results of Devops retrospectives), and minimally required technical architecture. Work is managed using Kanban WIP limits and queues. Developers create tests for each change or fix up front. Every check-in kicks off automated tests and static analysis checks for complexity and code duplication as part of Continuous Integration. If it passes these steps, the change is promoted to a test environment, and the code must then be reviewed for architectural oversight (they use Atlassian’s Crucible online code review tool to do this). Once all of the associated change sets have been reviewed, the code changes are deployed to staging for acceptance testing and review by product management, before being promoted to production. All production changes (code change sets, environment changes and database migration sets) are packaged into Chef recipes and progressively rolled out online. It’s a disciplined and well-structured approach that depends a lot on automation and a good tool set. 
Death to Time Boxing What makes Iterationless Development different is obviously the lack of time boxing – instead of being structured in sprints or spikes, work is done in a continuous flow. According to Huddleston, iterationless Kanban is “here to stay” and is “much more productive than artificial time boxing”. In a separate blog post, he talks about the death of iterations. While he agrees that iterations have benefits – providing a fixed and consistent routine for the team to follow, a forcing function to drive work to conclusion (nothing focuses the mind like a deadline), and logical points for the team to synch up with the rest of the business – Huddleston asserts that working in time boxes is unnatural and unnecessary. That the artificial and arbitrary boundaries defined by time boxes force people to compromise on solutions, and force them to cut corners in order to meet deadlines. I agree that time boxes are arbitrary – but no more arbitrary than a work day or work week, or a month or a financial quarter; all cycles that businesses follow. In business we are always working towards a deadline, whether it is hard and real or soft and arbitrary. This is how work gets done. And this doesn’t change if we are working in time boxes or without them. In iterationless Kanban, the pressure to meet periodic time box deadlines is replaced with the constant pressure to deliver work as fast as possible, to meet individual task deadlines. Rapid cycling in short time boxes is hard enough on teams over a long period of time. Continuous, interrupt-driven development with a tight focus on optimizing cycle time is even harder. The dials are set to on and they stay that way. Kanban makes this easy, giving the team, and the customer and management, the tools to continuously visualize work in progress, identify bottlenecks and delays and squeeze out waste – to maximize efficiency. This is a manufacturing process model remember. 
The emphasis on tactical optimization and fast-feedback loops, and the “myopic focus on eliminating waste” is just that – short-sighted and difficult to sustain. With time boxes there are at least built-in synch points, chances for the team to review and reset, so that people can reflect on what they have done, look for ways to improve, look ahead at what they need to do next, and then build up again to an optimal pace. This isn’t waste. Cycling up and down is important and necessary to keep people from getting burnt out and to give them a chance to think and to get better at what they do. Risk is managed in the same tactical, short-sighted way. Teams working on one issue at a time end up managing risk one issue at a time, relying heavily on automated testing and in-stream controls like code reviews. This is good, but not good enough for many environments: security and reliability risks need to be managed in a more comprehensive, systemic way. Even integrating feedback from Ops isn’t enough to find and prevent deep problems. Working in Agile time boxes is already trading technical risks for speed and efficiency. Iterationless Development and Continuous Deployment, focused on eliminating waste and on accelerating cycle time, push these tradeoffs even further, into the danger zone. Huddleston is also critical of “boxcaring” – batching different pieces of work together in a time box – because it interferes with simple prioritization and introduces unnecessary delays. But batching work together that makes sense to do together can be a useful way to reduce risk and cost. Take a simple example. The team is working on feature 1a. Once it’s done, they move on to feature 1b, then 1c. All of this work requires changing the same parts of code, the same or similar testing and reviews, and has a similar impact on operations. By batching this work together, you might deliver it more slowly, but you can reduce waste and minimize risk by delivering it once, rather than three times. 
Iterationless Development Makes Sense… Iterationless Development using Kanban as a control structure is an effective way to deal with excessive pressure and uncertainty – like in an early-stage startup, or a firefighting support team. It’s good for rapid innovation and experimental prototyping, building on continuous feedback from customers and from Operations – situations where speed and responsiveness to the business and customers is critical, more important than minimizing technical and operational risks. It formalizes the way that most successful web startups work – come up with a cool idea, build a prototype as quickly as possible, and then put it out and find out what customers actually want before you run out of cash. But it’s not a one-size-fits-all solution to software development problems. All software development methods are compromises – imperfect attempts at managing risks and uncertainty. Sequential or serial development methods attempt to specify and fix the solution space upfront, and then manage to this fixed scope. Iterative, time-boxed development helps teams deal with uncertainty by breaking business needs down into small, concrete problems and delivering a working solution in regular steps. And iterationless, continuous-flow allows teams to rapidly test ideas and alternatives, when the problem isn’t clear and nobody is sure yet what direction to go in. There’s no one right answer. What approach you follow depends on what your priorities and circumstances are, and what kind of problems and risks you need to solve today. Reference: Iterationless Development – the latest New New Thing from our JCG partner Jim Bird at the “Building Real Software” blog. Related Articles :Save money from Agile Development Standups – take them or leave them Agile software development recommendations for users and new adopters Breaking Down an Agile process Backlog Not doing Code Reviews? What’s your excuse? 
Even Backlogs Need Grooming Java Tutorials and Android Tutorials list...

Decompiling Mega Vendors behaviour and future strategics (Microsoft, IBM, Oracle and SAP)

IT News has an excellent article about the latest Gartner Symposium, where Gartner analyst Dennis Gaughan gave a broad overview of the strategic direction of the world’s largest application vendors: IBM, Microsoft, Oracle and SAP. Below are some key notes and related “quotes” from Mr. Dennis Gaughan. IBM: IBM wants to manage you. Quoting Mr. Gaughan: “The number one question is: how do you avoid being managed by IBM? Their account managers are very good at influencing strategy.” IBM is taking a risk by moving into applications, as the acquisition of Sterling Commerce implies. “IBM was always selling software tools and hardware, but would shy away from being in the software applications business.” “We are starting to see IBM compete directly with SAP and Oracle around business applications – especially with Sterling Commerce or business analytics and optimisation.” IBM generates enormous revenue from using and selling SAP and Oracle products, and now as it moves into applications, it must find a way not to cut off this revenue stream. “The challenge IBM has is that they generate enormous revenue from partnering with Oracle and SAP,” he noted. “IBM makes US$15 to US$20 billion a year alone in implementing, hosting, and managing SAP landscapes for its customers. They have to sort out how to compete and cooperate to not impact that significant revenue stream.” MICROSOFT: Microsoft’s crown jewels are its monopolies: Windows and Office. “This is a platform story…” “They are a platform company first and foremost. They continually ask themselves, how do we drive platform and how do we protect our cash cows, Office and Windows? When approaching Microsoft, consider that Windows is the core – and Microsoft will do everything to protect that core.” Its core is being attacked by Apple on the OS and Google on Office, mail and collaboration, and Microsoft will probably lose market share in the coming years. “Google is going after Office, mail and collaboration,” he said. 
"These are Microsoft's crown jewels." (Strangely enough, he mentioned neither any Linux vendor, although some Linux distributions have become very user friendly, nor OpenOffice, which has become very good.)

ORACLE

Oracle products do not integrate well. "Just because I am buying multiple products from Oracle, it doesn't mean it will come pre-integrated. Oracle's marketing phrase is 'engineered systems to work together'. But it should be 'engineered systems to buy together'. It's a bundling on paper, not in the architecture. Total integration across the portfolio... won't provide software license growth for investors. Profitability influences and colours every aspect of Oracle's product strategy. Oracle offers a collection of products that might be best of breed, but in most cases the integration burden will always remain with you." That leads us to the next point.

One of Oracle's main revenue streams is maintenance. "Maintenance is king for Oracle – they generate a tonne of money on maintenance revenue. Maintenance accounts for 92 percent of Oracle's profitability – and they won't do anything to disrupt that maintenance stream. So if you are considering Fusion Apps, you need to continually look to their roadmaps to consider longer-term strategy for the products you have already invested in. Unfortunately, Oracle isn't always the best at disclosing long-term roadmaps, and maintenance isn't a guarantee of future investment." Which also leads us to the next point.

Oracle does not reveal long-term product roadmaps, for fear that future products will cannibalize its current ones. Nevertheless, Oracle's strategy works for highly profitable, high-volume business customers.

SAP

"SAP is one of Oracle's largest database resellers – and is naturally interested in lessening its reliance on Oracle." In-memory technology, although very performant, is also very expensive.
Moreover, "SAP haven't sorted out the pricing," which "has given customers pause."

Strict licensing policies: "SAP has interesting licensing terms for getting data in and out of the system for use in other applications."

There are no further opportunities for upgrading customers from R/3 to Business Suite, which will have a major impact on SAP's revenue. "New revenue from that strategy is largely over, and that will impact total license revenue." So SAP maintenance licensing will probably become even stricter and heavier in the future.

Sources: The truth about IBM, Microsoft, Oracle and SAP; What Microsoft, Oracle, IBM, And SAP Don't Tell Customers

Reference: Decompiling Mega Vendors' behaviour and future strategies (Microsoft, IBM, Oracle and SAP) from our JCG partner Spyros Doulgeridis at the "Spyros' Log" blog.

Related Articles: Big Company vs. Small Company, Java SE 7, 8, 9 – Moving Java Forward, Java EE Past, Present, & Cloud 7, Official Java 7 for Mac OS X – Status, Java Tutorials and Android Tutorials list
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy