
The Cloud Winners and Losers?

The cloud is revolutionising IT. However, there are two sides to every story: the winners and the losers. Who are they going to be, and why? If you can’t wait, here are the losers: HP, Oracle, Dell, SAP, RedHat, Infosys, VMWare, EMC, Cisco, etc. Survivors: IBM, Accenture, Intel, Apple, etc. Winners: Amazon, Salesforce, Google, CSC, Workday, Canonical, Metaswitch, Microsoft, ARM, ODMs. Now the question is why, and is this list written in stone?

What has cloud changed?

If you are working in a hardware business (storage, networking, etc. included), then cloud computing is a value destroyer. Your organisation assumes that small, medium and large enterprises have and always will run their own data centre. As such, you have been blown out of the water by the fact that cloud has changed this fundamental rule. All of a sudden Amazon, Google and Facebook go and buy specialised webscale hardware from your suppliers, the ODMs. Facebook open sources hardware, networking, rack and data centre designs, making it possible for anybody to compete with you. Cloud is all about scale out and open source, hence commodity storage, software-defined networks and network virtualisation functions are converting your portfolio into commodity products.

If you are an enterprise software vendor, then you always assumed that companies would buy an instance of your product, customise it and manage it themselves. You did not expect that software could be offered as a service and that one platform could offer individual solutions to millions of enterprises. You also did not expect that software could be sold by the hour instead of licensed forever. If you are an outsourcing company, then you assume that companies that have invested in customising Siebel will want you to run it forever and not move to Salesforce.

Reviewing the losers

HP’s Cloud Strategy

HP has been living from printers and hardware. Meg rightfully has taken the decision to separate the cash cow, stop subsidising other less profitable divisions and let it be milked till it dies. The other group will focus on Cloud, Big Data, etc. However, HP Cloud is more expensive and slower moving than any of the big three, so economies of scale will push it into niche areas or make it die. HP’s OpenStack is a product that came 2-3 years late to a market that, as we will see later, is about to be commoditised. HP’s Big Data strategy? Overpay for Vertica and Autonomy and focus your marketing around the lawsuits with former owners, not any unique selling proposition. Also, Big Data can only be sold if you have an open source solution that people can test. Big Data customers are small startups that have quickly become large dotcoms. Most enterprises would not know what to do with Hadoop even if they could download it for free [YES, you can actually download it for free!!!].

Oracle’s Cloud Strategy

Oracle has been denying Cloud existed until their most laggard customers started asking questions. Until very recently you could only buy Oracle databases by the hour from Amazon. Oracle has been milking the enterprise software market for years, paying surprise visits to audit your usage of their database and sending you an unexpected bill. Recently they have started to cloud-wash [and Big Data-wash] their software portfolio, but Salesforce and Workday are already too far ahead to catch. A good Christmas book Larry could buy from Amazon would be “The Innovator’s Dilemma“.
Dell’s Cloud Strategy

Go to the main Dell page and you will not find the words Big Data or Cloud. I rest my case.

SAP’s Cloud Strategy

Workday is working hard on making SAP irrelevant. Salesforce overtook Siebel; Workday is likely to do the same with SAP. People don’t want to manage their ERP themselves.

RedHat’s Cloud Strategy [I work for their biggest competitor]

A RedHat salesperson to its customers: there are three versions. Fedora if you need innovation but don’t want support. CentOS if you want free but no security updates. RHEL is expensive and old but comes with support. Compare this to Canonical: there is only one Ubuntu, it is innovative, free to use, and if you want support you can buy it extra. For Cloud, the story is that RedHat is three times cheaper than VMWare and your old stuff can be made to work as long as you want, according to a prescribed recipe. Compare this with an innovator that wants to completely commoditise OpenStack [ten times cheaper] and bring the most innovative and flexible solution [any SDN, any storage, any hypervisor, etc.] that instantly solves your problems [deploy different flavours of OpenStack in minutes without needing any help].

Infosys or any outsourcing company

If the data centre is going away, then the first thing that will go away is that CRM solution we bought in the 90’s from a company that no longer exists.

VMWare

For the company that brought virtualisation into the enterprise, it is hard to admit that by putting a REST API in front of it, you don’t need their solution in each enterprise any more.

EMC

Commodity storage means that scale-out storage can be offered at a fraction of the price of a regular EMC SAN solution. However, the big killer is Amazon’s S3, which can give you unlimited storage in minutes without worries.

Cisco

A Cisco router is an extremely expensive device that is hard to manage and built on top of proprietary hardware, a proprietary OS and proprietary software. What do you think will happen in a world where cheap ASICs plus commodity CPUs, general purpose OSes and many thousands of network apps from an app store become available? Or worse, a network will no longer need many physical boxes because most of it is virtualised.

What does a cloud loser mean?

A cloud loser means that their existing cash cows will be crunched by disruptive innovations. Does this mean that losers will disappear or cannot recuperate? Some might disappear. However, if smart executives in these losing companies were given the freedom to bring to market new solutions that build on top of the new reality, then they might come out stronger. IBM has shown it was able to do so many times. Let’s look at the cloud survivors.

IBM

IBM has shown over and over again that it can reinvent itself. It sold its x86 servers in order to show its employees and the world that the future is no longer there. In the past it bought PWC’s consultancy, which will keep on reinventing new service offerings for customers that are lost in the cloud.

Accenture

Just like PWC’s consultancy arm within IBM, Accenture will have consultants that help people make the transition from data centre to the cloud. Accenture will not be leading the revolution but will be a “me-too” player that can put more people in faster than others.

Intel

x86 is not going to die soon. The cloud just means others will be buying it. Intel will keep on trying to innovate in software and go nowhere [e.g. Intel's Hadoop was going to eat the world], but at least its processors will keep it above water.
Apple

Apple knows what consumers want, but they still need to prove they understand enterprises. Having a locked-in world is fine for consumers, but enterprises don’t like it. Either they come up with a creative solution or the billions will not keep on growing.

What does a cloud survivor mean?

A cloud survivor means that the key cash cows will not be killed by the cloud. It does not give a guarantee that the company will grow. It just means that in this revolution, the eye of the tornado rushed over your neighbour’s house, not yours. You can still have lots of collateral damage…

Amazon

IaaS = Amazon. No further words needed. Amazon will extend Gov Cloud into Health Cloud, Bank Cloud, Energy Cloud, etc. and remove the main laggard’s argument: “for legal & security reasons I can’t move to the cloud”. Amazon currently has 40-50 Anything-as-a-Service offerings; in 36 months they will have 500.

Salesforce

PaaS & SaaS = Salesforce. Salesforce will become more than a CRM on steroids; it will be the world’s business solutions platform. If there is no business solution for it on Salesforce, then it is not a business problem worth solving. They are likely to buy competitors like Workday.

Google

Google is the king of the consumer cloud. Google Apps has taken the SME market by storm. Enterprise cloud is not going anywhere soon, however. Google was too late with IaaS and, unlike its competitors, is not solving on-premise transitional problems. With Kubernetes, Google will re-educate the current star programmers, will over time revolutionise the way software is written and managed, and might win in the long run. Google’s cloud future will be decided in 5-10 years. They invented most of it and showed the world 5 years later in a paper.

CSC

CSC has moved away from being a bodyshop to having several strategically important products for cloud orchestration and big data. They have a long-term future focus, employing cloud visionaries like Simon Wardley, that few others match. You don’t win a cloud war in the next quarter. It took Simon 4 years to take Ubuntu from 0% to 70% on public clouds.

Workday

What Salesforce did to Oracle’s Siebel, Workday is doing to SAP. Companies that have bought into Salesforce will easily switch to Workday in phase 2.

Canonical

Since RedHat is probably reading this blog post, I can’t be explicit. But a company of 600 people that controls up to 70% of the operating systems on public clouds and more than 50% of OpenStack, brings out a new server OS every 6 months, a phone OS in the next months, a desktop every 6 months and a complete cloud solution every 6 months, can convert bare metal into virtual-like cloud resources in minutes, enables anybody to deploy/integrate/scale any software on any cloud or bare-metal server [Intel, IBM Power 8, ARM 64], and is on a mission to completely commoditise cloud infrastructure via open source solutions in 2015, deserves to make it to the list.

Metaswitch

Metaswitch has been developing network software for the big network guys for years. These big network guys would put it in a box and sell it extremely expensively. In a world of commodity hardware, open source and scale out, Clearwater and Calico have catapulted Metaswitch to the list of most innovative telecom suppliers. Telecom providers will be like cloud providers: they will go to the ODM that really knows how things work and will ignore the OEM that just puts a brand on the box. The Cloud still needs WAN networks. Google Fibre will not rule the world in one day.
Telecom operators will have to spend their billions with somebody.

Microsoft

If you are into Windows you will be on Azure, and it will be business as usual for Microsoft.

ARM

In an ODM-dominated world, ARM processors are likely to move from smart phones into networking and into the cloud.

ODMs

Nobody knows them, but they are the ones designing everybody’s hardware. Over time Amazon, Google and Microsoft might make their own hardware, but for the foreseeable future they will keep on buying it “en masse” from ODMs.

What does a cloud winner mean?

Billions and fame for some, large take-overs or IPOs for others. But the cloud war is not over yet. Just because the first battles were won does not mean enemies can’t invent new weapons or join forces. So the war is not over, it is just beginning. History is written today…

Reference: The Cloud Winners and Losers? from our JCG partner Maarten Ectors at the Telruptive blog.

Easy REST endpoints with Apache Camel 2.14

Apache Camel had a new release recently, and some of the new features were blogged about by my colleague Claus Ibsen. You really should check out his blog entry and dig into more detail, but one of the features I was looking forward to trying was the new REST DSL.

So what is this new DSL? Actually, it’s an extension to Camel’s routing DSL, which is a powerful domain language for declaratively describing integration flows and is available in many flavors. It’s pretty awesome, and is a differentiator between integration libraries. If you haven’t seen Camel’s DSL, you should check it out. Have I mentioned that Camel’s DSL is awesome? k.. back to the REST story here..

Before release 2.14, creating REST endpoints meant using camel-cxfrs, which can be difficult to approach for a new user just trying to expose a simple REST endpoint. Actually, it’s a very powerful approach to doing contract-first REST design, but I’ll leave that for the next blog post. However, in a previous post I did dive into using camel-cxfrs for REST endpoints, so you can check it out.

With 2.14, the DSL has been extended to make it easier to create REST endpoints. For example:

    rest("/user").description("User rest service")
        .consumes("application/json").produces("application/json")

        .get("/{id}").description("Find user by id").outType(User.class)
            .to("bean:userService?method=getUser(${header.id})")

        .put().description("Updates or create a user").type(User.class)
            .to("bean:userService?method=updateUser")

        .get("/findAll").description("Find all users").outTypeList(User.class)
            .to("bean:userService?method=listUsers");

In this example, we can see we use the DSL to define REST endpoints, and it’s clear, intuitive and straightforward. All you have to do is set up the REST engine with this line:

    restConfiguration().component("jetty")
        .bindingMode(RestBindingMode.json)
        .dataFormatProperty("prettyPrint", "true")
        .port(8080);

Or this in your Spring context XML:

    <camelContext>
        ...
        <restConfiguration bindingMode="auto" component="jetty" port="8080"/>
        ...
    </camelContext>

The cool part is you can use multiple HTTP/servlet engines with this approach, including a microservices style with embedded Jetty (camel-jetty) or through an existing servlet container (camel-servlet). Take a look at the REST DSL documentation for the complete list of HTTP/servlet components you can use with this DSL.

Lastly, some might ask, what about documenting the REST endpoint? E.g., WADL? Well, luckily, the new REST DSL is integrated out of the box with the awesome Swagger library and REST documenting engine! So you can auto-document your REST endpoints and have the docs/interface/spec generated for you! Take a look at the camel-swagger documentation and the camel-example-servlet-rest-tomcat example that comes with the distribution to see more.

Give it a try, and let us know (Camel mailing list, comments, stackoverflow, somehow!!!) how it works for you.

Reference: Easy REST endpoints with Apache Camel 2.14 from our JCG partner Christian Posta at the Christian Posta – Software Blog blog.
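To make the moving parts a bit more concrete, here is a minimal, self-contained sketch of a RouteBuilder using the new REST DSL. It assumes Camel 2.14 with camel-core and camel-jetty on the classpath; the /hello endpoint and route names are made up for illustration and are not taken from the article:

    import org.apache.camel.builder.RouteBuilder;

    public class HelloRestRoute extends RouteBuilder {

        @Override
        public void configure() throws Exception {
            // set up the embedded jetty engine the REST DSL will bind to
            restConfiguration().component("jetty").port(8080);

            // GET /hello/{name} is routed to a plain Camel route
            rest("/hello")
                .get("/{name}")
                .to("direct:hello");

            // regular Camel routing takes over from here
            from("direct:hello")
                .transform(simple("Hello ${header.name}"));
        }
    }

Starting a CamelContext with this route and requesting http://localhost:8080/hello/world should return "Hello world", illustrating how the REST DSL simply feeds into normal Camel routes.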

Validate Configuration on Startup

Do you remember that time when you spent a whole day trying to fix a problem, only to realize that you had mistyped a configuration setting? Yes. And it was not just one time. Avoiding that is not trivial, as not only you, but also the frameworks that you use, should take care of it. But let me outline my suggestion.

Always validate your configuration on startup of your application. This involves three things:

First, check if your configuration values are correct. Test database connection URLs, file paths, numbers and periods of time. If a directory is missing, a database is unreachable, or you have specified a non-numeric value where a number or period of time is expected, you should know that immediately, rather than after the application has been used for a while.

Second, make sure all required parameters are set. If a property is required, fail if it has not been set, and fail with a meaningful exception, rather than an empty NullPointerException (e.g. throw new IllegalArgumentException("database.url is required")).

Third, check if only allowed values are set in the configuration file. If a configuration is not recognized, fail immediately and report it. This will save you from spending a whole day trying to find out why setting the “request.timeuot” property didn’t have any effect. This is applicable to optional properties that have default values, and comes with the extra step of adding new properties to a predefined list of allowed properties; possibly forgetting to do that will lead to an exception, but that is unlikely to waste more than a minute.

A simple implementation of the last suggestion would look like this:

    Properties properties = loadProperties();
    for (Object key : properties.keySet()) {
        if (!VALID_PROPERTIES.contains(key)) {
            throw new IllegalArgumentException("Property " + key +
                " is not recognized as a valid property. Maybe a typo?");
        }
    }

Implementing the first one is a bit harder, as it needs some logic – in your generic properties-loading mechanism you don’t know if a property is a database connection URL, a folder, or a timeout. So you have to do these checks in the classes that know the purpose of each property. Your database connection handler knows how to work with a database URL, your file storage handler knows what a backup directory is, and so on. This can be combined with the required-property verification. Here, a library like Typesafe config may come in handy, but it won’t solve all problems.

This is not only useful for development, but also for newcomers to the project that try to configure their local server, and most importantly – production, where you can immediately find out if there has been a misconfiguration in this release. Ultimately, the goal is to fail as early as possible if there is any problem with the supplied configuration, rather than spending hours chasing typos, missing values and services that are accidentally not running.

Reference: Validate Configuration on Startup from our JCG partner Bozhidar Bozhanov at the Bozho’s tech blog blog.
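As an illustration of the first two suggestions (correct values and required parameters), here is a hedged sketch of what such startup checks might look like; the property names, default values and the ConfigurationValidator class are made up for the example and are not part of the original post:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.Properties;

    public class ConfigurationValidator {

        public void validate(Properties properties) {
            // required parameter: fail fast with a meaningful message
            String dbUrl = properties.getProperty("database.url");
            if (dbUrl == null || dbUrl.trim().isEmpty()) {
                throw new IllegalArgumentException("database.url is required");
            }

            // numeric value: fail immediately if it cannot be parsed
            String timeout = properties.getProperty("request.timeout", "30");
            try {
                Integer.parseInt(timeout);
            } catch (NumberFormatException e) {
                throw new IllegalArgumentException(
                    "request.timeout must be a number but was: " + timeout, e);
            }

            // file path: fail if the directory does not exist
            Path backupDir = Paths.get(properties.getProperty("backup.dir", "/var/backups/app"));
            if (!Files.isDirectory(backupDir)) {
                throw new IllegalArgumentException(
                    "backup.dir does not point to an existing directory: " + backupDir);
            }
        }
    }

In practice the database-URL and directory checks would live in the classes that own those resources, as the post suggests, but the fail-fast pattern is the same.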

Java Minor Releases Scheme Tweaked Again

In 2013, Oracle announced the Java SE – Change in Version Numbering Scheme. The announcement stated that Limited Update releases (those “that include new functionality and non-security fixes”) and Critical Patch Updates (CPUs) [those "that only include fixes for security vulnerabilities"] would be released with specific version number schemes. In particular, Limited Update releases would have version numbers that are multiples of 20, while Critical Patch Updates would have version numbers that are multiples of 5 and come after the latest Limited Update release version number. The purpose of this scheme change was to allow room for versions with numbers between these, which allows Oracle “to insert releases – for example security alerts or support releases, should that become necessary – without having to renumber later releases.”

Yesterday’s announcement (“Java CPU and PSU Releases Explained“) states, “Starting with the release of Java SE 7 Update 71 (Java SE 7u71) in October 2014, Oracle will release a Critical Patch Update (CPU) at the same time as a corresponding Patch Set Update (PSU) for Java SE 7.” This announcement explains the difference between a CPU and a PSU:

Critical Patch Update (CPU): “Fixes to security vulnerabilities and critical bug fixes.” Minimum recommended for everyone.

Patch Set Update (PSU): “All fixes in the corresponding CPU” and “additional non-critical fixes.” Recommended only for those needing bugs fixed by the PSU’s additional fixes.

Yesterday’s announcement states that PSU releases (which are really CPU+ releases) will be released along with their corresponding CPU releases. Because the additional fixes that a PSU release contains beyond what’s in the CPU release are expected to be part of the next CPU release, developers are encouraged to experiment with PSU releases to ensure that coming CPU features work well for them.

Reference: Java Minor Releases Scheme Tweaked Again from our JCG partner Dustin Marx at the Inspired by Actual Events blog.

How to use Hibernate to generate a DDL script from your Play! Framework project

Ok, so you have been using the hibernate property name=“hibernate.hbm2ddl.auto” value=“update” to continuously update your database schema, but now you need a complete DDL script?

Use this method from your Global class’s onStart to export the DDL scripts. Just give it the package name (with path) of your Entities as well as a file name:

    // imports assumed for a Play 2 application using Hibernate 4 and the Reflections library
    import java.util.Set;
    import javax.persistence.Entity;
    import org.hibernate.cfg.Configuration;
    import org.hibernate.tool.hbm2ddl.SchemaExport;
    import org.hibernate.tool.hbm2ddl.Target;
    import org.reflections.Reflections;

    public void onStart(Application app) {
        exportDatabaseSchema("models", "create_tables.sql");
    }

    public void exportDatabaseSchema(String packageName, String scriptFilename) {
        final Configuration configuration = new Configuration();
        final Reflections reflections = new Reflections(packageName);
        final Set<Class<?>> classes = reflections.getTypesAnnotatedWith(Entity.class);
        // iterate all Entity classes in the package indicated by the name
        for (final Class<?> clazz : classes) {
            configuration.addAnnotatedClass(clazz);
        }
        configuration.setProperty("hibernate.dialect", "org.hibernate.dialect.PostgreSQL9Dialect");

        SchemaExport schema = new SchemaExport(configuration);
        schema.setOutputFile(scriptFilename);
        schema.setDelimiter(";");
        // just export the create statements in the script
        schema.execute(Target.SCRIPT, SchemaExport.Type.CREATE);
    }

That is it! Thanks to @MonCalamari for answering my Question on Stackoverflow here.

Reference: How to use Hibernate to generate a DDL script from your Play! Framework project from our JCG partner Brian Porter at the Poornerd blog.

Eclipse Extension Point Evaluation Made Easy

Coding Eclipse Extension Point evaluations tends to be a bit verbose and sparsely self-explaining. As I got round to busying myself with this topic recently, I wrote a little helper with the intent to reduce boilerplate code for common programming steps, while increasing development guidance and readability at the same time. It turned out to be not that easy to find an expressive solution which matches all the use cases I could extract from current projects. So I thought it might be a good idea to share my findings and see what other people think of it.

Eclipse Extension Point Evaluation

Consider a simple extension point definition that supports an unbounded contribution of extensions. Each of these contributions should provide a Runnable implementation to perform some sort of operation. A usual evaluation task could be to retrieve all contributions, create the executable extensions and invoke each of those:

    public class ContributionEvaluation {

        private static final String EP_ID = "com.codeaffine.post.contribution";

        public void evaluate() {
            IExtensionRegistry registry = Platform.getExtensionRegistry();
            IConfigurationElement[] elements = registry.getConfigurationElementsFor( EP_ID );
            Collection<Runnable> contributions = new ArrayList<Runnable>();
            for( IConfigurationElement element : elements ) {
                Object extension;
                try {
                    extension = element.createExecutableExtension( "class" );
                } catch( CoreException e ) {
                    throw new RuntimeException( e );
                }
                contributions.add( ( Runnable )extension );
            }
            for( Runnable runnable : contributions ) {
                runnable.run();
            }
        }
    }

While evaluate could be split into smaller methods to clarify its responsibilities, the class would also be filled with more glue code. As I find such sections hard to read and awkward to write, I was pondering a fluent interface approach that should guide a developer through the various implementation steps. Combined with Java 8 lambda expressions I was able to create an auxiliary that boils down the evaluate functionality to:

    public void evaluate() {
        new RegistryAdapter()
            .createExecutableExtensions( EP_ID, Runnable.class )
            .withConfiguration( ( runnable, extension ) -> runnable.run() )
            .process();
    }

Admittedly I cheated a bit, since it is possible to improve the first example a little by using the Java 8 Collection#forEach feature instead of looping explicitly. But I think this still would not make the code really great! For general information on how to extend Eclipse using the extension point mechanism you might refer to the Plug-in Development Environment Guide of the online documentation.

RegistryAdapter

The main class of the helper implementation is the RegistryAdapter, which encapsulates the system’s IExtensionRegistry instance and provides a set of methods to define what operations should be performed with respect to a particular extension point. At the moment the adapter allows to read contribution configurations or to create executable extensions. Multiple contributions are evaluated as shown above using methods that are denoted in plural – to evaluate exactly one contribution element, methods denoted in singular are appropriate. This means to operate on a particular runnable contribution you would use createExecutableExtension instead of createExecutableExtensions. Depending on which operation is selected, different configuration options are available. This is made possible as the fluent API implements a kind of grammar to improve guidance and programming safety.
For example the readExtension operation does not allow to register an ExecutableExtensionConfigurator, since this would be an inept combination. The method withConfiguration allows to configure or initialize each executable extension after its creation. But as shown in the example above it can also be used to invoke the runnable extension directly. Due to the type safe implementation of createExecutableExtension(s) it is possible to access the extension instance within the lambda expression without a cast.

Finally the method process() executes the specified operation and returns a typed Collection of the created elements in case they are needed for further processing:

    Collection<Extension> extensions
        = new RegistryAdapter().readExtensions( EP_ID ).process();

Predicate

But how is it possible to select a single Eclipse extension point contribution element with the adapter? Assume that we add an attribute id to our contribution definition above. The fluent API of RegistryAdapter allows to specify a Predicate that can be used to select a particular contribution:

    public void evaluate() {
        new RegistryAdapter()
            .createExecutableExtension( EP_ID, Runnable.class )
            .withConfiguration( ( runnable, extension ) -> runnable.run() )
            .thatMatches( attribute( "id", "myContribution" ) )
            .process();
    }

There is a utility class Predicates that provides a set of predefined implementations to ease common use cases like attribute selection. The code above is a shortcut using static imports for:

    .thatMatches( Predicates.attribute( "id", "myContribution" ) )

where “myContribution” stands for the unique id value declared in the extension contribution:

    <extension point="com.codeaffine.post.contribution">
        <contribution id="myContribution" class="com.codeaffine.post.MyContribution">
        </contribution>
    </extension>

Of course it is possible to implement custom predicates in case the presets are not sufficient:

    public void evaluate() {
        Collection<Extension> extensions = new RegistryAdapter()
            .readExtensions( EP_ID, Description.class )
            .thatMatches( (extension) -> extension.getValue() != null )
            .process();
    }

Extension

Usually Eclipse Extension Point evaluation operates most of the time on IConfigurationElement. The adapter API is unsharp in distinguishing between extension point and configuration element and provides a simple encapsulation called Extension. But for more sophisticated tasks Extension instances make the underlying configuration element accessible. In general Extension provides accessors to the attribute values, contribution names, contribution values, nested contributions, and allows to create an executable extension. One of the major reasons to introduce this abstraction was to have an API that converts checked CoreExceptions implicitly to runtime exceptions, as I am accustomed to working with the Fail Fast approach without bulky checked exception handling.

Exception Handling

However, in case the Eclipse extension evaluation is invoked at startup time of a plug-in or gets executed in the background, Fail Fast is not an option. And it is surely not reasonable to ignore remaining contributions after a particular contribution has caused a problem. Because of this the adapter API allows to replace the Fail Fast mechanism with explicit exception handling:

    public void evaluate() {
        Collection<Runnable> contributions = new RegistryAdapter()
            .createExecutableExtensions( EP_ID, Runnable.class )
            .withExceptionHandler( (cause) -> handle( cause ) )
            .process();
        [...]
    }

    private void handle( CoreException cause ) {
        // do what you gotta do
    }

Note that the returned collection of contributions contains, of course, only those elements that did not run into any trouble.

Where to get it?

For those who want to check it out, there is a P2 repository that contains the feature com.codeaffine.eclipse.core.runtime providing the RegistryAdapter and its accompanying classes. The repository is located at:

http://fappel.github.io/xiliary/

and the source code and issue tracker is hosted at:

https://github.com/fappel/xiliary

Although documentation is missing completely at this moment, it should be quite easy to get started with the given explanations of this post. But please keep in mind that the little tool is in a very early state and probably will undergo some API changes. In particular, dealing only with CoreExceptions while looping over the contributions still is a bit too weak.

Conclusion

The sections above introduced the basic functionality of the RegistryAdapter and focused on how it eases Eclipse extension point evaluation. I replaced old implementations in my current projects with the adapter and did not run into any trouble, which means that the solution looks promising to me so far… But there is still more than meets the eye. With this little helper in place, combined with an additional custom assert type, writing integration tests for an extension point’s evaluation functionality really becomes a piece of cake. That topic is however out of scope for this post and will be covered next time. So stay tuned and do not forget to share the knowledge, in case you find the approach described above useful – thanks!

Reference: Eclipse Extension Point Evaluation Made Easy from our JCG partner Rudiger Herrmann at the Code Affine blog.

Microservices

A microservice is a small, focussed piece of software that can be developed, deployed and upgraded independently. Commonly, it exposes its functionality via a synchronous protocol such as HTTP/REST. That is my understanding of microservices, at least. There is no hard definition of what they are, but they currently seem to be the cool kid on the block, attracting increasing attention and becoming a mainstream approach to avoiding the problems with monolithic architectures. Like any architectural solution, they are not without their downsides too, such as increased deployment and monitoring complexity. This post will have a look at some of the common characteristics of microservices and contrast them with monolithic architectures.

Definition and Characteristics

Let’s start with some definitions from folks wiser than I:

The microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. – Microservices by Martin Fowler and James Lewis [1]

Functionally decompose an application into a set of collaborating services, each with a set of narrow, related functions, developed and deployed independently, with its own database. – Microservices Architecture by Chris Richardson [2]

Microservices are a style of software architecture that involves delivering systems as a set of very small, granular, independent collaborating services. – Microservices – Not A Free Lunch by Benjamin Wootton [6]

Some of this may not sound new. Since way back in 1984, the Unix Philosophy [8] has advocated writing programs that do one thing well, working together with other programs through standard interfaces. So perhaps more useful than definitions are some common characteristics of a microservice:

Single purpose

Each service should be focussed, doing one thing well. Cliff Moon [4] defined a microservice as “any isolated network service that will only perform operations on a single type of resource”, and gives the example of a user microservice that can perform operations such as new signups, password resets, etc.

Loosely coupled

A microservice should be able to operate without relying on other services. That is not to say that microservices cannot communicate with other microservices, it’s just that this should be the exception rather than the rule. A microservice should be, where possible, self-sufficient.

Independently deployable

With monoliths, a change to any single piece of the application requires the entire app to be deployed. With microservices, each one should be deployable by itself, independently of any other services or apps. This can provide great flexibility, or ‘agility’. Ideally this should be done in a fully automated way; you’ll want a solid Continuous Integration pipeline, and devops culture, behind you. As discussed below in the disadvantages section, the one caveat here is when you are changing your interfaces.

Small

The ‘micro’ in microservice isn’t too important. 10 to 1000 lines of code might be a reasonable ball park, but a much better definition might be ‘Small enough to fit in your head’ [3], that is, the project should be small enough to be easily understood by one developer. Another might be ‘Small enough to throw away’, or rewrite over maintain [3]. At one end of the scale, a single developer could create a microservice in a day. At the other end, Fowler [1] suggests that “The largest follow Amazon’s notion of the Two Pizza Team (i.e. the whole team can be fed by two pizzas), meaning no more than a dozen people”. The main point is that size is not the most important characteristic of a microservice – a single, focused purpose is.

However, perhaps the best way to understand microservices is to consider an alternative, contrasting architectural style: the monolith.

The Monolithic Alternative

A monolithic application is one that is built and deployed as a single artifact (e.g. war file). In many ways this is the opposite of the microservice architecture. Applications often start out life as a monolith, and for good reason. Monoliths are:

Easy to set up and develop – a single project in an IDE
Easy to deploy – a single war file
Can be scaled horizontally by adding more servers, typically behind a load balancer

In fact it is probably advisable to start your applications as a monolith. Keep things simple until you have a good reason for change (avoiding YAGNI architectural decisions). That being said, as monoliths grow, you may well start running into problems…

Problems with Monoliths

Codebase can be difficult to set up, and understand

A large monolithic app can overload your IDE, be slow to build (and hence run tests), and it can be difficult to understand the whole application. This can have a downward spiral effect on software quality.

Forced team dependencies

Teams are forced to coordinate (e.g. on technology choices, release cycles, shared resources etc), even if what they are working on has little, if anything, in common. For example, two teams working on separate functionality within the same monolith may be forced to use the same versions of libraries. Team A needs to use Spring 3 for legacy code reasons. Team B wants to use Spring 4. With both Spring 3 and Spring 4 in your list of dependencies, which one actually gets used? In Java-world it is surprisingly easy to run into these conflicts.

How do you split up teams when using a monolithic architecture?

Often teams are split by technology, e.g. UI teams, server-side logic teams, and database teams. Even simple changes can require a lot of different teams. It may often be easier to hack the required functionality into the area your own team is responsible for, rather than deal with cross-team coordination, even if it was better placed elsewhere – Conway’s Law [9] in action. This is especially true the larger, more dispersed and more bureaucratic a team is.

Obstacle to frequent deployments

When you deploy your entire codebase in one go, each deployment becomes a much bigger, likely organisation-wide, deal. Deployment cycles become slower and less frequent, which makes each deployment more risky.

A long-term commitment to a technology stack

Whether you like it or not! Would you like to start using Ruby in your project? If the whole app is written in Java, then you will probably be forced to rewrite it all! A daunting, and unlikely, possibility. This all-or-nothing type setup is closely tied to the next point of shared libraries…

Why use Microservices?

In contrast to monolithic applications, the microservice approach is to focus on a specific area of business functionality, not technology. Such services are often developed by teams that are cross-functional. This is perhaps one of the reasons why so many job descriptions these days say ‘full stack’. So, what are the advantages of using microservices?
Many of the advantages of microservices relate to the problems mentioned for monolithic architectures above, and include:

Being smaller and focussed means microservices are easier to understand for developers, and faster to build, deploy and start up.

Independently deployable

Each can be deployed without impacting other services (with interface changes being a notable exception).

Independently scalable

It is easy to add more instances of the services that are experiencing the heaviest load.

Independent technology stack

Each microservice can use a completely independent technology stack, making it easier to migrate your technology stack. I think it is worth pointing out here that just because you can use a different technology for each microservice doesn’t mean you should! Increasingly heterogeneous stacks bring increasing complexity. Exercise caution and be driven by business needs.

Improved resiliency

If one service goes down (e.g. a memory leak), the rest of the app should continue to run unaffected.

Disadvantages

Distributed applications are always more complicated! While monoliths typically use in-memory calls, microservices typically require inter-process calls, often to different boxes but in the same data center. As well as being more expensive, the APIs associated with remote calls are usually coarser-grained, which can be more awkward to use. Refactoring code in a monolith is easy; doing it across microservices can be much more difficult, e.g. moving code between microservices. Although microservices allow you to release independently, that is not so straightforward when you are changing the interfaces – it requires coordination across the clients and the service that is changing. That being said, some ways to mitigate this are:

Use flexible, forgiving, broad interfaces. Be as tolerant as possible when reading data from a service. Use design patterns like the Tolerant Reader.

be conservative in what you do, be liberal in what you accept from others — Jon Postel

Where things can start to get hard with microservices is at an operations level. Runtime management and monitoring of microservices in particular can be problematic. A good ops/devops team is necessary, particularly when you are deploying large numbers of microservices at scale. Whereas detecting problems in a single monolithic application can be dealt with by attaching a monitor to the single process, doing the same when you have dozens of processes interacting is much more difficult.

Microservices vs SOA

SOA, or Service Oriented Architecture, is an architectural design pattern that seems to have somewhat fallen out of favor. SOA also involved a collection of services, so what are the differences between SOA and microservices? It is a difficult question to answer, but Fowler here used the term ‘SOA done right’, which I like. Adrian Cockroft [15] described a microservice as being like SOA but with a bounded context. Wikipedia distinguishes the two by saying that SOA aims at integrating various (business) applications whereas several microservices belong to one application only [14]. A related aspect is that many SOAs use ESBs (Enterprise Service Buses), whereas microservices tend towards smart endpoints, dumb pipes [1]. Finally, although neither microservices nor SOAs are tied to any one protocol or data format, SOAs did seem to frequently involve Simple Object Access Protocol (SOAP)-based Web services, using XML and WSDL etc, whereas microservices seem to commonly favour REST and JSON.

Who is using Microservices?
Most large-scale web sites, including Netflix, Amazon and eBay, have evolved from a monolithic architecture to a microservices architecture. Amazon was one of the pioneers of using microservices. Between 100-150 services are accessed to build a single page [10]. If, for example, the recommendation service is down, default recommendations can be used. These may be less effective at tempting you to buy, but are a better alternative to errors or no recommendations at all. Netflix are also pioneers in the microservice world, not only using microservices extensively, but also releasing many useful tools back into the open source world, including Chaos Monkey for testing web application resiliency and Janitor Monkey for cleaning up unused resources. See more at netflix.github.io. TicketMaster, the ticket sales and distribution company, is also making increasing use of microservices to give them “Boardroom agility or the process of quickly reacting to the marketplace.” [12]

Best practices

Some best practices for microservices might be:

Separate codebases
Each microservice has its own repository and CI build setup.

Separate interface and implementation
Separate the API and implementation modules, using a Maven multi-module project or similar. For example, clients should depend on CustomerDao rather than CustomerDaoImpl, or JpaCustomerDao.

Use monitoring!
For example AppDynamics and New Relic.

Have health checks built into your services

Have standard templates available
If many developers are creating microservices, have a template they can use that gets them up and running quickly and implements corporate standards for logging and the aforementioned monitoring and health checks.

Support multiple versions
Leave multiple old microservice versions running. Fast introduction vs. slow retirement asymmetry. [11]

Summary

As an architectural approach, and particularly as an alternative to monolithic architectures, microservices are an attractive choice. They allow independent technology stacks to be used, with each service being independently built and deployed, meaning you are much more likely to be able to follow the ‘deploy early, deploy often’ mantra. That being said, they do bring their own complexities, including deployment and monitoring. It is advisable to start with the relative simplicity of a monolithic approach and only consider microservices when you start running into problems. Even then, migrating slowly to microservices is likely a sensible approach: for example, introducing new areas of functionality as microservices, and slowly migrating old ones as they need updates and rewrites anyway. And all the while, bear in mind that while each microservice itself may be simple, some of the complexity is simply moved up a level. The coordination of dozens or even hundreds of microservices brings many new challenges, including build, deployment and monitoring, and shouldn’t be undertaken without a solid Continuous Delivery infrastructure in place and a good devops mentality within the team. Cross-functional and multidisciplinary teams using automation are essential. Used judiciously and with the right infrastructure in place, microservices seem to be thriving. I like Martin Fowler’s guarded optimism: “We write with cautious optimism that microservices can be a worthwhile road to tread”. [12]
References and reading materials:

1. Microservices by Martin Fowler and James Lewis
2. Microservices Architecture by Chris Richardson
3. Micro services – Java, the Unix Way by James Lewis
4. Microservices, or How I Learned To Stop Making Monoliths and Love Conway’s Law by Cliff Moon
5. Micro service architecture by Fred George
6. Microservices are not a free lunch by Benjamin Wootton
7. Antifragility and Microservices by Russ Miles
8. The Unix Philosophy
9. Conway’s Law
10. Amazon Architecture
11. Migrating to microservices by Adrian Cockroft
12. Microservices with Spring Boot
13. Microservices for the Grumpy Neckbeard
14. Microservices definition on Wikipedia
15. Microservices and DevOps by Adrian Cockcroft

Reference: Microservices from our JCG partner Shaun Abram at the Shaun Abram’s blog blog.

Typesafe APIs for the browser

A new feature in Ceylon 1.1 that I’ve not blogged about before is dynamic interfaces. This was something that Enrique and I worked on together with Corbin Uselton, one of our GSoC students.

Ordinarily, when we interact with JavaScript objects, we do it from within a dynamic block, where Ceylon’s usual scrupulous typechecking is suppressed. The problem with this approach is that if it’s an API I use regularly, my IDE can’t help me remember the names and signatures of all the operations of the API. Dynamic interfaces make it possible to ascribe static types to an untyped JavaScript API. For example, we could write a dynamic interface for the HTML 5 CanvasRenderingContext2D like this:

    dynamic CanvasRenderingContext2D {
        shared formal variable String|CanvasGradient|CanvasPattern fillStyle;
        shared formal variable String font;

        shared formal void beginPath();
        shared formal void closePath();

        shared formal void moveTo(Integer x, Integer y);
        shared formal void lineTo(Integer x, Integer y);

        shared formal void fill();
        shared formal void stroke();

        shared formal void fillText(String text, Integer x, Integer y, Integer maxWidth=-1);

        shared formal void arc(Integer x, Integer y, Integer radius, Float startAngle, Float endAngle, Boolean anticlockwise);
        shared formal void arcTo(Integer x1, Integer y1, Integer x2, Float y2, Integer radius);

        shared formal void bezierCurveTo(Integer cp1x, Integer cp1y, Integer cp2x, Float cp2y, Integer x, Integer y);

        shared formal void strokeRect(Integer x, Integer y, Integer width, Integer height);
        shared formal void fillRect(Integer x, Integer y, Integer width, Integer height);
        shared formal void clearRect(Integer x, Integer y, Integer width, Integer height);

        shared formal CanvasGradient createLinearGradient(Integer x0, Integer y0, Integer x1, Integer y1);
        shared formal CanvasGradient createRadialGradient(Integer x0, Integer y0, Integer r0, Integer x1, Integer y1, Integer r1);
        shared formal CanvasPattern createPattern(dynamic image, String repetition);

        //TODO: more operations!!
    }

    dynamic CanvasGradient {
        shared formal void addColorStop(Integer offset, String color);
    }

    dynamic CanvasPattern {
        //todo
    }

Now, if we assign an instance of JavaScript’s CanvasRenderingContext2D to this interface type, we won’t need to be inside a dynamic block when we call its methods. You can try it out in your own browser by clicking the “TRY ONLINE” button!

    CanvasRenderingContext2D ctx;
    dynamic {
        //get the CanvasRenderingContext2D from the
        //canvas element using dynamically typed code
        ctx = ... ;
    }

    //typesafe code, checked at compile time
    ctx.fillStyle = "navy";
    ctx.fillRect(50, 50, 235, 60);
    ctx.beginPath();
    ctx.moveTo(100,50);
    ctx.lineTo(60,5);
    ctx.lineTo(75,75);
    ctx.fill();
    ctx.fillStyle = "orange";
    ctx.font = "40px PT Sans";
    ctx.fillText("Hello world!", 60, 95);

Notice that we don’t need to ascribe an explicit type to every operation of the interface. We can leave some methods, or even just some parameters of a method, untyped, by declaring them dynamic. Such operations may only be called from within a dynamic block, however.

A word of caution: dynamic interfaces are a convenient fiction. They can help make it easier to work with an API in your IDE, but at runtime there is nothing Ceylon can do to ensure that the object you assign to the dynamic interface type actually implements the operations you’ve ascribed to it.

Reference: Typesafe APIs for the browser from our JCG partner Gavin King at the Ceylon Team blog blog.

Spring @Configuration – RabbitMQ connectivity

I have been playing around with converting an application that I have, to use the Spring @Configuration mechanism to configure connectivity to RabbitMQ – originally I had the configuration described using an XML bean definition file.

So this was my original configuration:

    <beans ...>

        <context:property-placeholder/>

        <rabbit:connection-factory id="rabbitConnectionFactory" username="${rabbit.user}" host="localhost" password="${rabbit.pass}" port="5672"/>

        <rabbit:template id="amqpTemplate" connection-factory="rabbitConnectionFactory" exchange="rmq.rube.exchange" routing-key="rube.key" channel-transacted="true"/>

        <rabbit:queue name="rmq.rube.queue" durable="true"/>

        <rabbit:direct-exchange name="rmq.rube.exchange" durable="true">
            <rabbit:bindings>
                <rabbit:binding queue="rmq.rube.queue" key="rube.key"></rabbit:binding>
            </rabbit:bindings>
        </rabbit:direct-exchange>

    </beans>

This is a fairly simple configuration that:

sets up a connection to a RabbitMQ server,
creates a durable queue (if not available),
creates a durable exchange,
and configures a binding to send messages to the exchange to be routed to the queue based on a routing key called “rube.key”

This can be translated to the following @Configuration-based Java configuration:

    @Configuration
    public class RabbitConfig {

        @Autowired
        private ConnectionFactory rabbitConnectionFactory;

        @Bean
        DirectExchange rubeExchange() {
            return new DirectExchange("rmq.rube.exchange", true, false);
        }

        @Bean
        public Queue rubeQueue() {
            return new Queue("rmq.rube.queue", true);
        }

        @Bean
        Binding rubeExchangeBinding(DirectExchange rubeExchange, Queue rubeQueue) {
            return BindingBuilder.bind(rubeQueue).to(rubeExchange).with("rube.key");
        }

        @Bean
        public RabbitTemplate rubeExchangeTemplate() {
            RabbitTemplate r = new RabbitTemplate(rabbitConnectionFactory);
            r.setExchange("rmq.rube.exchange");
            r.setRoutingKey("rube.key");
            r.setConnectionFactory(rabbitConnectionFactory);
            return r;
        }
    }

This configuration should look much simpler than the XML version. I am cheating a little here though: you may have noticed a connectionFactory that is simply injected into this configuration. Where is that coming from? This is actually part of a Spring Boot based application, and there is a Spring Boot auto-configuration for the RabbitMQ connectionFactory based on whether the RabbitMQ related libraries are present in the classpath.

Here is the complete configuration if you are interested in exploring further – https://github.com/bijukunjummen/rg-si-rabbit/blob/master/src/main/java/rube/config/RabbitConfig.java

References:

Spring-AMQP project here
Spring-Boot starter project using RabbitMQ here

Reference: Spring @Configuration – RabbitMQ connectivity from our JCG partner Biju Kunjummen at the all and sundry blog.
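With beans like these in place, sending a message is just a matter of injecting the template and calling convertAndSend, which uses the exchange and routing key already configured on it. The following is a hedged sketch of such a sender; the RubeMessageSender class name and the String payload are made up for illustration and are not part of the original post:

    import org.springframework.amqp.rabbit.core.RabbitTemplate;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.stereotype.Service;

    @Service
    public class RubeMessageSender {

        private final RabbitTemplate rubeExchangeTemplate;

        @Autowired
        public RubeMessageSender(RabbitTemplate rubeExchangeTemplate) {
            this.rubeExchangeTemplate = rubeExchangeTemplate;
        }

        public void send(String payload) {
            // uses the exchange "rmq.rube.exchange" and routing key "rube.key"
            // that were set on the template in RabbitConfig
            rubeExchangeTemplate.convertAndSend(payload);
        }
    }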

NetBeans 8.0’s Five New Performance Hints

NetBeans 8.0 introduces several new Java hints. Although there are a large number of these new hints related to the Java Persistence API, I focus on five new hints in the Performance category. The five new “Performance Hints” introduced with NetBeans 8.0 are:

Boxing of already boxed value
Redundant String.toString()
Replace StringBuffer/StringBuilder by String
Unnecessary temporary during conversion from String
Unnecessary temporary during conversion to String

Each of these five performance-related Java hints is illustrated in this post with screen snapshots taken from NetBeans 8.0 with code that demonstrates these hints. There are two screen snapshots for each hint, one showing the text displayed when the cursor hovers over the line of code marked with yellow underlining, and one showing the suggested course of action to be applied to address that hint (shown when clicking on the yellow light bulb to the left of the flagged line). Some of the captured screen snapshots include examples of code that avoid the hint.

Boxing of Already Boxed Value

Redundant String.toString()

Replace StringBuffer/StringBuilder by String

Unnecessary Temporary During Conversion from String

Unnecessary Temporary During Conversion to String

Unless I’ve done something incorrectly, there appears to be a minor bug with this hint in that it reports “Unnecessary temporary when converting from String” when, in this case, it should really be “Unnecessary temporary when converting to String”. This is not a big deal, as the condition is flagged and the action to fix it seems appropriate.

Conclusion

The five performance-related hints introduced by NetBeans 8.0 and illustrated here can help Java developers avoid unnecessary object instantiations and other unnecessary runtime costs. Although the benefit of this optimization as shown in my simple examples is almost negligible, it could lead to much greater savings when used in code with loops that perform these same unnecessary instantiations thousands of times. Even without consideration of the performance benefit, these hints help to remind Java developers and teach developers new to Java about the most appropriate mechanisms for acquiring instances and primitive values.

Reference: NetBeans 8.0’s Five New Performance Hints from our JCG partner Dustin Marx at the Inspired by Actual Events blog.
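Since the screen snapshots are not reproduced here, the following snippet is a hedged illustration of the kind of code each of the five hints flags; the class and variable names are made up, and the exact wording shown by the IDE may differ:

    public class PerformanceHintExamples {

        public static void main(String[] args) {
            // Boxing of already boxed value: 'count' is already an Integer,
            // so wrapping it again unboxes and reboxes it needlessly
            Integer count = Integer.valueOf(42);
            Integer boxedAgain = Integer.valueOf(count);

            // Redundant String.toString(): toString() called on something that is already a String
            String name = "NetBeans".toString();

            // Replace StringBuffer/StringBuilder by String: a builder used where plain concatenation would do
            String greeting = new StringBuilder("Hello, ").append("world").toString();

            // Unnecessary temporary during conversion from String: prefer Integer.parseInt("8080")
            int port = new Integer("8080").intValue();

            // Unnecessary temporary during conversion to String: prefer String.valueOf(8080) or Integer.toString(8080)
            String portText = new Integer(8080).toString();

            System.out.println(boxedAgain + " " + name + " " + greeting + " " + port + " " + portText);
        }
    }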