Java EE 7 Batch Processing and World of Warcraft – Part 1

This was one of my sessions at the last JavaOne. This post is going to expand the subject and look into a real application using the Batch JSR-352 API. This application integrates with the MMORPG World of Warcraft. Since JSR-352 is a new specification in the Java EE world, I think that many people don't know how to use it properly. It may also be a challenge to identify the use cases to which this specification applies. Hopefully this example can help you understand the use cases better.

Abstract

World of Warcraft is a game played by more than 8 million players worldwide. The service is offered by region: United States (US), Europe (EU), China and Korea. Each region has a set of servers called Realms that you connect to in order to play the game. For this example, we are only looking into the US and EU regions.

One of the most interesting features of the game is that it allows you to buy and sell in-game goods called Items, using an Auction House. Each Realm has two Auction Houses. On average, each Realm trades around 70,000 Items. Let's crunch some numbers:

512 Realms (US and EU)
70 K Items per Realm
More than 35 M Items overall

The Data

Another cool thing about World of Warcraft is that the developers provide a REST API to access most of the in-game information, including the Auction House data. Check the complete API here. The Auction House data is obtained in two steps. First we need to query the corresponding Auction House Realm REST endpoint to get a reference to a JSON file. Next we need to access this URL and download the file with all the Auction House Item information. Here is an example:

http://eu.battle.net/api/wow/auction/data/aggra-portugues

The Application

Our objective here is to build an application that downloads the Auction House data, processes it and extracts metrics. These metrics are going to build a history of the Item price evolution through time. Who knows?
Maybe with this information we can predict price fluctuations and buy or sell Items at the best times.

The Setup

For the setup, we're going to use a few extra things besides Java EE 7:

Java EE 7
Angular JS
Angular ng-grid
UI Bootstrap
Google Chart
Wildfly

Jobs

The main work is going to be performed by Batch JSR-352 Jobs. A Job is an entity that encapsulates an entire batch process. A Job is wired together via a Job Specification Language. With JSR-352, a Job is simply a container for steps. It combines multiple steps that belong logically together in a flow. We're going to split the business logic into three jobs:

Prepare – Creates all the supporting data needed. List Realms, create folders to copy files to.
Files – Query realms to check for new files to process.
Process – Download the file, process the data, extract metrics.

The Code

Back-end – Java EE 7 with Java 8

Most of the code is going to be in the back-end. We need Batch JSR-352, but we are also going to use a lot of other technologies from Java EE, like JPA, JAX-RS, CDI and JSON-P. Since the Prepare Job only initializes application resources for the processing, I'm skipping it and diving into the most interesting parts.

Files Job

The Files Job is an implementation of AbstractBatchlet. A Batchlet is the simplest processing style available in the Batch specification. It's a task-oriented step where the task is invoked once, executes, and returns an exit status. This type is most useful for performing a variety of tasks that are not item-oriented, such as executing a command or doing a file transfer. In this case, our Batchlet is going to iterate over every Realm, make a REST request to each one, and retrieve a URL to the file containing the data that we want to process.
Here is the code:

LoadAuctionFilesBatchlet

@Named
public class LoadAuctionFilesBatchlet extends AbstractBatchlet {
    @Inject
    private WoWBusiness woWBusiness;

    @Inject
    @BatchProperty(name = "region")
    private String region;
    @Inject
    @BatchProperty(name = "target")
    private String target;

    @Override
    public String process() throws Exception {
        List<Realm> realmsByRegion = woWBusiness.findRealmsByRegion(Realm.Region.valueOf(region));
        realmsByRegion.parallelStream().forEach(this::getRealmAuctionFileInformation);

        return "COMPLETED";
    }

    void getRealmAuctionFileInformation(Realm realm) {
        try {
            Client client = ClientBuilder.newClient();
            Files files = client.target(target + realm.getSlug())
                                .request(MediaType.TEXT_PLAIN).async()
                                .get(Files.class)
                                .get(2, TimeUnit.SECONDS);

            files.getFiles().forEach(auctionFile -> createAuctionFile(realm, auctionFile));
        } catch (Exception e) {
            getLogger(this.getClass().getName()).log(Level.INFO, "Could not get files for " + realm.getRealmDetail());
        }
    }

    void createAuctionFile(Realm realm, AuctionFile auctionFile) {
        auctionFile.setRealm(realm);
        auctionFile.setFileName("auctions." + auctionFile.getLastModified() + ".json");
        auctionFile.setFileStatus(FileStatus.LOADED);

        if (!woWBusiness.checkIfAuctionFileExists(auctionFile)) {
            woWBusiness.createAuctionFile(auctionFile);
        }
    }
}

A cool thing about this is the use of Java 8. With parallelStream(), invoking multiple REST requests at once is easy as pie! You can really notice the difference. If you want to try it out, just run the sample and replace parallelStream() with stream() and check it out. On my machine, using parallelStream() makes the task execute around 5 or 6 times faster.

Update: Usually, I would not use this approach. I've done it because part of the logic involves invoking slow REST requests and parallel streams really shine here. Doing this using batch partitions is possible, but hard to implement. We also need to poll the servers for new data every time, so it's not terrible if we skip a file or two.
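The difference between stream() and parallelStream() on blocking calls is easy to reproduce outside the container. The sketch below is hypothetical (a Thread.sleep stands in for the slow REST request, and the ParallelStreamDemo class is not part of the sample); the exact speed-up depends on the size of the common fork-join pool.

```java
import java.util.List;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ParallelStreamDemo {

    // Stand-in for the slow per-Realm REST call made by the Batchlet.
    static void slowCall(int realmId) {
        try {
            TimeUnit.MILLISECONDS.sleep(100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // Runs slowCall for every "realm" and returns the elapsed time in ms.
    static long timeRun(List<Integer> realms, boolean parallel) {
        long start = System.nanoTime();
        (parallel ? realms.parallelStream() : realms.stream())
                .forEach(ParallelStreamDemo::slowCall);
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        List<Integer> realms = IntStream.range(0, 8).boxed().collect(Collectors.toList());
        System.out.println("stream():         ~" + timeRun(realms, false) + " ms");
        System.out.println("parallelStream(): ~" + timeRun(realms, true) + " ms");
    }
}
```

On a multi-core machine the sequential run takes roughly 8 × 100 ms, while the parallel run finishes in a fraction of that.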
Keep in mind that if you don't want to miss a single record, a Chunk processing style is more suitable. Thank you to Simon Martinelli for bringing this to my attention. Since the US and EU Realms require different REST endpoints to invoke, they are perfect candidates for partitioning. Partitioning means that the task is going to run in multiple threads: one thread per partition. In this case we have two partitions. To complete the job definition we need to provide a Job XML file. This needs to be placed in the META-INF/batch-jobs directory. Here is the files-job.xml for this job:

files-job.xml

<job id="loadRealmAuctionFileJob" xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="1.0">
    <step id="loadRealmAuctionFileStep">
        <batchlet ref="loadAuctionFilesBatchlet">
            <properties>
                <property name="region" value="#{partitionPlan['region']}"/>
                <property name="target" value="#{partitionPlan['target']}"/>
            </properties>
        </batchlet>
        <partition>
            <plan partitions="2">
                <properties partition="0">
                    <property name="region" value="US"/>
                    <property name="target" value="http://us.battle.net/api/wow/auction/data/"/>
                </properties>
                <properties partition="1">
                    <property name="region" value="EU"/>
                    <property name="target" value="http://eu.battle.net/api/wow/auction/data/"/>
                </properties>
            </plan>
        </partition>
    </step>
</job>

In files-job.xml we define our Batchlet in the batchlet element. For the partitions, just define the partition element and assign different properties to each plan. These properties can then be used to late-bind the values into the LoadAuctionFilesBatchlet with the expressions #{partitionPlan['region']} and #{partitionPlan['target']}. This is a very simple expression binding mechanism that only works for simple properties and Strings.

Process Job

Now we want to process the Realm Auction Data file. Using the information from the previous job, we can now download the file and do something with the data.
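Outside the Batch machinery, the two-step fetch itself only needs an HTTP client. Here is a minimal stand-alone sketch using plain java.net; the AuctionDataFetcher class and its fetch() helper are hypothetical (the real application uses the JAX-RS client as shown above), and only the URL format comes from the article.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class AuctionDataFetcher {

    // Step 1: build the per-Realm endpoint quoted earlier,
    // e.g. http://eu.battle.net/api/wow/auction/data/aggra-portugues
    static String auctionEndpoint(String region, String realmSlug) {
        return "http://" + region + ".battle.net/api/wow/auction/data/" + realmSlug;
    }

    // Step 2: download the document behind a URL (first the small JSON with the
    // file reference, then the big auction dump it points to).
    static String fetch(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setConnectTimeout(2000);
        conn.setReadTimeout(2000);
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line).append('\n');
            }
            return body.toString();
        }
    }
}
```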
The JSON file has the following structure:

item-auctions-sample.json

{
    "realm": {
        "name": "Grim Batol",
        "slug": "grim-batol"
    },
    "alliance": {
        "auctions": [
            {
                "auc": 279573567,          // Auction Id
                "item": 22792,             // Item for sale Id
                "owner": "Miljanko",       // Seller Name
                "ownerRealm": "GrimBatol", // Realm
                "bid": 3800000,            // Bid Value
                "buyout": 4000000,         // Buyout Value
                "quantity": 20,            // Number of items in the Auction
                "timeLeft": "LONG",        // Time left for the Auction
                "rand": 0,
                "seed": 1069994368
            },
            {
                "auc": 278907544,
                "item": 40195,
                "owner": "Mongobank",
                "ownerRealm": "GrimBatol",
                "bid": 38000,
                "buyout": 40000,
                "quantity": 1,
                "timeLeft": "VERY_LONG",
                "rand": 0,
                "seed": 1978036736
            }
        ]
    },
    "horde": {
        "auctions": [
            {
                "auc": 278268046,
                "item": 4306,
                "owner": "Thuglifer",
                "ownerRealm": "GrimBatol",
                "bid": 570000,
                "buyout": 600000,
                "quantity": 20,
                "timeLeft": "VERY_LONG",
                "rand": 0,
                "seed": 1757531904
            },
            {
                "auc": 278698948,
                "item": 4340,
                "owner": "Celticpala",
                "ownerRealm": "Aggra(Português)",
                "bid": 1000000,
                "buyout": 1000000,
                "quantity": 10,
                "timeLeft": "LONG",
                "rand": 0,
                "seed": 0
            }
        ]
    }
}

The file has a list of the Auctions from the Realm it was downloaded from. In each record we can check the item for sale, prices, seller and the time left until the end of the auction. Auctions are also aggregated by Auction House type: Alliance and Horde. For the process-job we want to read the JSON file, transform the data and save it to a database. This can be achieved with Chunk Processing. A Chunk is an ETL (Extract – Transform – Load) style of processing which is suitable for handling large amounts of data. A Chunk reads the data one item at a time and creates chunks that will be written out within a transaction. One item is read in from an ItemReader, handed to an ItemProcessor, and aggregated. Once the number of items read equals the commit interval, the entire chunk is written out via the ItemWriter, and then the transaction is committed.
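The read/process/write cycle just described boils down to a few lines of plain Java. The ChunkLoop class below is a hypothetical illustration, not the actual JSR-352 runtime code: the real runtime additionally wraps each chunk in a transaction and takes a checkpoint at every commit.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Function;

public class ChunkLoop {

    // Reads items one at a time, processes each, and writes them out in
    // chunks of itemCount elements; returns the number of "commits".
    public static <R, W> int run(Iterator<R> reader, Function<R, W> processor,
                                 Consumer<List<W>> writer, int itemCount) {
        List<W> chunk = new ArrayList<>();
        int commits = 0;
        while (reader.hasNext()) {
            chunk.add(processor.apply(reader.next()));
            if (chunk.size() == itemCount) { // commit interval reached
                writer.accept(chunk);        // ItemWriter.writeItems(...)
                chunk = new ArrayList<>();   // transaction committed, start over
                commits++;
            }
        }
        if (!chunk.isEmpty()) {              // write the last, partial chunk
            writer.accept(chunk);
            commits++;
        }
        return commits;
    }
}
```

With a commit interval of 100, processing 250 items would write three chunks of 100, 100 and 50 items.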
ItemReader

The real files are so big that they cannot be loaded entirely into memory, or you may end up running out of it. Instead we use the JSON-P API to parse the data in a streaming way.

AuctionDataItemReader

@Named
public class AuctionDataItemReader extends AbstractAuctionFileProcess implements ItemReader {
    private JsonParser parser;
    private AuctionHouse auctionHouse;

    @Inject
    private JobContext jobContext;
    @Inject
    private WoWBusiness woWBusiness;

    @Override
    public void open(Serializable checkpoint) throws Exception {
        setParser(Json.createParser(openInputStream(getContext().getFileToProcess(FolderType.FI_TMP))));

        AuctionFile fileToProcess = getContext().getFileToProcess();
        fileToProcess.setFileStatus(FileStatus.PROCESSING);
        woWBusiness.updateAuctionFile(fileToProcess);
    }

    @Override
    public void close() throws Exception {
        AuctionFile fileToProcess = getContext().getFileToProcess();
        fileToProcess.setFileStatus(FileStatus.PROCESSED);
        woWBusiness.updateAuctionFile(fileToProcess);
    }

    @Override
    public Object readItem() throws Exception {
        while (parser.hasNext()) {
            JsonParser.Event event = parser.next();
            Auction auction = new Auction();
            switch (event) {
                case KEY_NAME:
                    updateAuctionHouseIfNeeded(auction);

                    if (readAuctionItem(auction)) {
                        return auction;
                    }
                    break;
            }
        }
        return null;
    }

    @Override
    public Serializable checkpointInfo() throws Exception {
        return null;
    }

    protected void updateAuctionHouseIfNeeded(Auction auction) {
        if (parser.getString().equalsIgnoreCase(AuctionHouse.ALLIANCE.toString())) {
            auctionHouse = AuctionHouse.ALLIANCE;
        } else if (parser.getString().equalsIgnoreCase(AuctionHouse.HORDE.toString())) {
            auctionHouse = AuctionHouse.HORDE;
        } else if (parser.getString().equalsIgnoreCase(AuctionHouse.NEUTRAL.toString())) {
            auctionHouse = AuctionHouse.NEUTRAL;
        }

        auction.setAuctionHouse(auctionHouse);
    }

    protected boolean readAuctionItem(Auction auction) {
        if (parser.getString().equalsIgnoreCase("auc")) {
            parser.next();
            auction.setAuctionId(parser.getLong());
            parser.next();
            parser.next();
            auction.setItemId(parser.getInt());
            parser.next();
            parser.next();
            parser.next();
            parser.next();
            auction.setOwnerRealm(parser.getString());
            parser.next();
            parser.next();
            auction.setBid(parser.getInt());
            parser.next();
            parser.next();
            auction.setBuyout(parser.getInt());
            parser.next();
            parser.next();
            auction.setQuantity(parser.getInt());
            return true;
        }
        return false;
    }

    public void setParser(JsonParser parser) {
        this.parser = parser;
    }
}

To open a JSON parsing stream we need Json.createParser and pass it a reference to an input stream. To read elements we just need to call the hasNext() and next() methods. next() returns a JsonParser.Event that allows us to check the position of the parser in the stream. Elements are read and returned in the readItem() method from the Batch API ItemReader. When no more elements are available to read, return null to finish the processing. Note that we also implement the open and close methods from ItemReader. These are used to initialize and clean up resources. They only execute once.

ItemProcessor

The ItemProcessor is optional. It's used to transform the data that was read. In this case we need to add additional information to the Auction.

AuctionDataItemProcessor

@Named
public class AuctionDataItemProcessor extends AbstractAuctionFileProcess implements ItemProcessor {
    @Override
    public Object processItem(Object item) throws Exception {
        Auction auction = (Auction) item;

        auction.setRealm(getContext().getRealm());
        auction.setAuctionFile(getContext().getFileToProcess());

        return auction;
    }
}

ItemWriter

Finally we just need to write the data down to a database:

AuctionDataItemWriter

@Named
public class AuctionDataItemWriter extends AbstractItemWriter {
    @PersistenceContext
    protected EntityManager em;

    @Override
    public void writeItems(List<Object> items) throws Exception {
        items.forEach(em::persist);
    }
}

The entire process with a file of 70 k records takes around 20 seconds on my machine. I did notice something very interesting.
Before this code, I was using an injected EJB that called a method with the persist operation. This was taking 30 seconds in total, so injecting the EntityManager and performing the persist directly saved me a third of the processing time. I can only speculate that the delay is due to an increased call stack, with EJB interceptors in the middle. This was happening in Wildfly. I will investigate this further.

To define the chunk we need to add it to a process-job.xml file:

process-job.xml

<step id="processFile" next="moveFileToProcessed">
    <chunk item-count="100">
        <reader ref="auctionDataItemReader"/>
        <processor ref="auctionDataItemProcessor"/>
        <writer ref="auctionDataItemWriter"/>
    </chunk>
</step>

In the item-count property we define how many elements fit into each chunk of processing. This means that the transaction is committed every 100 elements. This is useful to keep the transaction size low and to checkpoint the data. If we need to stop and then restart the operation, we can do it without having to process every item again. We have to code that logic ourselves. This is not included in the sample, but I will do it in the future.

Running

To run a job we need to get a reference to a JobOperator. The JobOperator provides an interface to manage all aspects of job processing, including operational commands, such as start, restart, and stop, as well as job repository related commands, such as retrieval of job and step executions. To run the previous files-job.xml Job we execute:

Execute Job

JobOperator jobOperator = BatchRuntime.getJobOperator();
jobOperator.start("files-job", new Properties());

Note that we pass the name of the job XML file, without the extension, to the JobOperator.

Next Steps

We still need to aggregate the data to extract metrics and display it on a web page. This post is already long, so I will describe the following steps in a future post. Anyway, the code for that part is already in the Github repo. Check the Resources section.
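The restart logic mentioned above (so a stopped job does not re-process already committed items) boils down to returning a progress marker from checkpointInfo() and honouring it in open(). Here is a hypothetical, framework-free sketch of that idea; in a real JSR-352 reader the batch runtime stores the checkpoint at each commit and passes it back on restart.

```java
import java.io.Serializable;
import java.util.Iterator;
import java.util.List;

public class CheckpointingReader {
    private final List<String> items;
    private Iterator<String> iterator;
    private long readCount;

    public CheckpointingReader(List<String> items) {
        this.items = items;
    }

    // Mirrors ItemReader.open(Serializable): skip what was already committed.
    public void open(Serializable checkpoint) {
        iterator = items.iterator();
        readCount = checkpoint == null ? 0 : (Long) checkpoint;
        for (long i = 0; i < readCount && iterator.hasNext(); i++) {
            iterator.next(); // skip items processed before the restart
        }
    }

    // Mirrors ItemReader.readItem(): null signals end of data.
    public String readItem() {
        if (!iterator.hasNext()) {
            return null;
        }
        readCount++;
        return iterator.next();
    }

    // Mirrors ItemReader.checkpointInfo(): persisted by the runtime at each commit.
    public Serializable checkpointInfo() {
        return Long.valueOf(readCount);
    }
}
```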
Resources

You can clone a full working copy from my GitHub repository and deploy it to Wildfly. You can find instructions there to deploy it.

Reference: Java EE 7 Batch Processing and World of Warcraft – Part 1 from our JCG partner Roberto Cortez at the Roberto Cortez Java Blog.

Rapid Mobile App Development With Appery.io – Creating Vacation Request App

I just got back from Stamford, CT where I did a talk at the Web, Mobile and Backend Developers Meetup. In about 90 minutes we built a prototype version of a productivity app called Vacation Request using the Appery.io cloud development platform. The app helps employees request and submit time off from their mobile phones. The app has the following functionality:

User login and registration
Submit a vacation request. The request is saved into the Appery.io database
Send an SMS message to the manager notifying him/her of a new request
Send an email to the manager notifying him/her of a new request
Push notifications to notify of a new request, or notify the employee when the request is approved
Customer Console for the manager to view/approve requests from a user-friendly console
Package the app for Android

Let me walk you through the app in more detail. The first page we designed was the Login page. Appery.io Backend Services comes with User Management built in, so you can register users, sign in users, and log out users, stored in the Users collection. As the App Builder and the Database are integrated, it's fast to generate the login/sign-up services automatically. The service is then added to the page and mapped to it, with a request mapping (from page to service) and a response mapping (from service to page or local storage). In our example, we are saving the user id and the user session into local storage. The steps are identical for registration. In case login or registration fails for some reason, we display a basic error. Next we built the Vacation Request page where you make the actual request. This page is based on a template which has a Panel menu that slides in from the left. The Save button saves the request into the Appery.io Database (into the Vacation collection). The Email button sends an email to the manager using the SendGrid API.
The functionality was imported as a plugin. The SMS button sends an SMS message to the manager using the Twilio API. Once we were done building the app, we added push notification capability. To send a push notification, the app has to be installed on the device. Packaging for various native platforms is as simple as clicking a button. Lastly, we activated the Customer Console which allows the manager to view the data (vacation requests or any other app data) and approve the requests there. The Customer Console is a user-friendly app that allows editing the app data without asking the developer to do that. It also allows sending push notifications. Access to data and whether you can send push messages is configurable.

The goal was to show how rapidly you can build a mobile app using Appery.io. In about 90 minutes, we were able to build a prototype or first version of an app that saves vacation requests and allows sending an email or an SMS message, with push notifications. And we built a binary for Android.

Reference: Rapid Mobile App Development With Appery.io – Creating Vacation Request App from our JCG partner Max Katz at the Maxa blog.

The Cloud Winners and Losers?

The cloud is revolutionising IT. However, there are two sides to every story: the winners and the losers. Who are they going to be, and why? If you can't wait, here are the losers: HP, Oracle, Dell, SAP, RedHat, Infosys, VMWare, EMC, Cisco, etc. Survivors: IBM, Accenture, Intel, Apple, etc. Winners: Amazon, Salesforce, Google, CSC, Workday, Canonical, Metaswitch, Microsoft, ARM, ODMs. Now the question is why, and is this list written in stone?

What has cloud changed?

If you are working in a hardware business (storage, networking, etc. included), then cloud computing is a value destroyer. You have an organisation that assumes small, medium and large enterprises have and always will run their own data centre. As such you have been blown out of the water by the fact that cloud has changed this fundamental rule. All of a sudden Amazon, Google and Facebook go and buy specialised webscale hardware from your suppliers, the ODMs. Facebook all of a sudden open-sources hardware, networking, rack and data centre designs and makes it so that anybody can compete with you. Cloud is all about scale-out and open source, hence commodity storage, software-defined networks and network virtualisation functions are converting your portfolio into commodity products. If you are an enterprise software vendor, then you always assumed that companies would buy an instance of your product, customise it and manage it themselves. You did not expect that software could be offered as a service and that one platform could offer individual solutions to millions of enterprises. You also did not expect that software could be sold by the hour instead of licensed forever. If you are an outsourcing company, then you assume that companies that have invested in customising Siebel will want you to run it forever and not move to Salesforce.

Reviewing the losers

HP's Cloud Strategy

HP has been living from printers and hardware.
Meg rightfully has taken the decision to separate the cash cow, stop subsidising other less profitable divisions and let it be milked till it dies. The other group will focus on Cloud, Big Data, etc. However, HP Cloud is more expensive and slower-moving than any of the big three, so economies of scale will push it into niche areas or make it die. HP's OpenStack is a product that came 2-3 years late to a market that, as we will see later, is about to be commoditised. HP's Big Data strategy? Overpay for Vertica and Autonomy and focus your marketing around the lawsuits with former owners, not any unique selling proposition. Also, Big Data can only be sold if you have an open source solution that people can test. Big Data customers are small startups that have quickly become large dotcoms. Most enterprises would not know what to do with Hadoop even if they could download it for free [YES, you can actually download it for free!!!].

Oracle's Cloud Strategy

Oracle denied Cloud existed until their most laggard customers started asking questions. Until very recently you could only buy Oracle databases by the hour from Amazon. Oracle has been milking the enterprise software market for years, paying surprise visits to audit your usage of their database and sending you an unexpected bill. Recently they have started to cloud-wash [and Big-Data-wash] their software portfolio, but Salesforce and Workday are already too far ahead to catch. A good Christmas book Larry could buy from Amazon would be "The Innovator's Dilemma".

Dell's Cloud Strategy

Go to the main Dell page and you will not find the words Big Data or Cloud. I rest my case.

SAP's Cloud Strategy

Workday is working hard on making SAP irrelevant. Salesforce overtook Siebel. Workday is likely to do the same with SAP. People don't want to manage their ERP themselves.

RedHat's Cloud Strategy [I work for their biggest competitor]

RedHat salesperson to its customers: there are three versions.
Fedora if you need innovation but don't want support. CentOS if you want free but no security updates. RHEL, which is expensive and old but comes with support. Compare this to Canonical: there is only one Ubuntu, it is innovative, free to use, and if you want support you can buy it extra. For Cloud, the story is that RedHat is three times cheaper than VMWare and your old stuff can be made to work as long as you want, according to a prescribed recipe. Compare this with an innovator that wants to completely commoditise OpenStack [ten times cheaper] and bring the most innovative and flexible solution [any SDN, any storage, any hypervisor, etc.] that instantly solves your problems [deploy different flavours of OpenStack in minutes without needing any help].

Infosys, or any outsourcing company

If the data centre is going away, then the first thing to go away is that CRM solution we bought in the 90s from a company that no longer exists.

VMWare

For the company that brought virtualisation into the enterprise, it is hard to admit that by putting a REST API in front of it, you don't need their solution in each enterprise any more.

EMC

Commodity storage means that scale-out storage can be offered at a fraction of the price of a regular EMC SAN solution. However, the big killer is Amazon's S3, which can give you unlimited storage in minutes without worries.

Cisco

A Cisco router is an extremely expensive device that is hard to manage and built on top of proprietary hardware, a proprietary OS and proprietary software. What do you think will happen in a world where cheap ASICs + commodity CPUs, a general-purpose OS and many thousands of network apps from an app store become available? Or worse, a network will no longer need many physical boxes because most of it is virtualised.

What does a cloud loser mean?

Being a cloud loser means that your existing cash cows will be crunched by disruptive innovations. Does this mean that losers will disappear or cannot recuperate? Some might disappear.
However, if smart executives in these losing companies were given the freedom to bring to market new solutions that build on top of the new reality, they might come out stronger. IBM has shown it was able to do so many times. Let's look at the cloud survivors.

IBM

IBM has shown over and over again that it can reinvent itself. It sold its x86 servers in order to show its employees and the world that the future is no longer there. In the past it bought PWC's consultancy, which will keep on reinventing new service offerings for customers that are lost in the cloud.

Accenture

Just like PWC's consultancy arm within IBM, Accenture will have consultants that help people make the transition from data centre to cloud. Accenture will not be leading the revolution but will be a "me-too" player that can put more people in place faster than others.

Intel

x86 is not going to die soon. The cloud just means others will be buying it. Intel will keep on trying to innovate in software and go nowhere [e.g. Intel's Hadoop was going to eat the world], but at least its processors will keep it above water.

Apple

Apple knows what consumers want, but they still need to prove they understand enterprises. Having a locked-in world is fine for consumers, but enterprises don't like it. Either they come up with a creative solution or the billions will not keep on growing.

What does a cloud survivor mean?

Being a cloud survivor means that the key cash cows will not be killed by the cloud. It does not guarantee that the company will grow. It just means that in this revolution, the eye of the tornado rushed over your neighbour's house, not yours. You can still have lots of collateral damage…

Amazon

IaaS = Amazon. No further words needed. Amazon will extend Gov Cloud into Health Cloud, Bank Cloud, Energy Cloud, etc. and remove the main laggard's argument: "for legal & security reasons I can't move to the cloud".
Amazon currently has 40-50 Anything-as-a-Service offerings; in 36 months they will have 500.

Salesforce

PaaS & SaaS = Salesforce. Salesforce will become more than a CRM on steroids; it will be the world's business solutions platform. If there is no business solution for it on Salesforce, then it is not a business problem worth solving. They are likely to buy competitors like Workday.

Google

Google is the king of the consumer cloud. Google Apps has taken the SME market by storm. The enterprise cloud is not going anywhere soon, however. Google was too late with IaaS and, unlike its competitors, is not solving on-premise transitional problems. With Kubernetes, Google will re-educate the current star programmers, and over time will revolutionise the way software is written and managed, and might win in the long run. Google's cloud future will be decided in 5-10 years. They invented most of it and showed the world 5 years later in a paper.

CSC

CSC has moved away from being a bodyshop to having several strategically important products for cloud orchestration and big data. They have a long-term future focus, employing cloud visionaries like Simon Wardley, that few others match. You don't win a cloud war in the next quarter. It took Simon 4 years to take Ubuntu from 0% to 70% on public clouds.

Workday

What Salesforce did to Oracle's Siebel, Workday is doing to SAP. Companies that have bought into Salesforce will easily switch to Workday in phase 2.

Canonical

Since RedHat is probably reading this blog post, I can't be explicit.
But a company of 600 people that controls up to 70% of the operating systems on public clouds and more than 50% of OpenStack, brings out a new server OS every 6 months, a phone OS in the coming months, a desktop every 6 months and a complete cloud solution every 6 months, can convert bare metal into virtual-like cloud resources in minutes, enables anybody to deploy/integrate/scale any software on any cloud or bare-metal server [Intel, IBM Power 8, ARM 64], and is on a mission to completely commoditise cloud infrastructure via open source solutions in 2015, deserves to make it to the list.

Metaswitch

Metaswitch has been developing network software for the big network guys for years. These big network guys would put it in a box and sell it extremely expensively. In a world of commodity hardware, open source and scale-out, Clearwater and Calico have catapulted Metaswitch to the list of most innovative telecom suppliers. Telecom providers will be like cloud providers: they will go to the ODM that really knows how things work and ignore the OEM that just puts a brand on the box. The Cloud still needs WAN networks. Google Fibre will not rule the world in one day. Telecom operators will have to spend their billions with somebody.

Microsoft

If you are into Windows, you will be on Azure and it will be business as usual for Microsoft.

ARM

In an ODM-dominated world, ARM processors are likely to move from smartphones into networking and into the cloud.

ODMs

Nobody knows them, but they are the ones designing everybody's hardware. Over time Amazon, Google and Microsoft might make their own hardware, but for the foreseeable future they will keep on buying it "en masse" from ODMs.

What does a cloud winner mean?

Billions and fame for some, large take-overs or IPOs for others. But the cloud war is not over yet. It is not because the first battles were won that enemies can't invent new weapons or join forces. So the war is not over; it is just beginning.
History is written today…

Reference: The Cloud Winners and Losers? from our JCG partner Maarten Ectors at the Telruptive blog.

Easy REST endpoints with Apache Camel 2.14

Apache Camel had a new release recently, and some of the new features were blogged about by my colleague Claus Ibsen. You really should check out his blog entry and dig into more detail, but one of the features I was looking forward to trying was the new REST DSL.

So what is this new DSL? Actually, it's an extension to Camel's routing DSL, which is a powerful domain language for declaratively describing integration flows and is available in many flavors. It's pretty awesome, and is a differentiator between integration libraries. If you haven't seen Camel's DSL, you should check it out. Have I mentioned that Camel's DSL is awesome? k.. back to the REST story here..

Before release 2.14, creating REST endpoints meant using camel-cxfrs, which can be difficult to approach for a new user just trying to expose a simple REST endpoint. Actually, it's a very powerful approach to doing contract-first REST design, but I'll leave that for the next blog post. However, in a previous post I did dive into using camel-cxfrs for REST endpoints, so you can check it out. With 2.14, the DSL has been extended to make it easier to create REST endpoints. For example:

rest("/user").description("User rest service")
    .consumes("application/json").produces("application/json")

    .get("/{id}").description("Find user by id").outType(User.class)
        .to("bean:userService?method=getUser(${header.id})")

    .put().description("Updates or create a user").type(User.class)
        .to("bean:userService?method=updateUser")

    .get("/findAll").description("Find all users").outTypeList(User.class)
        .to("bean:userService?method=listUsers");

In this example, we can see we use the DSL to define REST endpoints, and it's clear, intuitive and straightforward. All you have to do is set up the REST engine with this line:

restConfiguration().component("jetty")
    .bindingMode(RestBindingMode.json)
    .dataFormatProperty("prettyPrint", "true")
    .port(8080);

Or this in your Spring context XML:

<camelContext>
    ...
    <restConfiguration bindingMode="auto" component="jetty" port="8080"/>
    ...
</camelContext>

The cool part is that you can use multiple HTTP/servlet engines with this approach, including a microservices style with embedded Jetty (camel-jetty) or through an existing servlet container (camel-servlet). Take a look at the REST DSL documentation for the complete list of HTTP/servlet components you can use with this DSL.

Lastly, some might ask: what about documenting the REST endpoint, e.g. WADL? Well, luckily, the new REST DSL is integrated out of the box with the awesome Swagger library and REST documenting engine! So you can auto-document your REST endpoints and have the docs/interface/spec generated for you! Take a look at the camel-swagger documentation and the camel-example-servlet-rest-tomcat example that comes with the distribution to see more. Give it a try, and let us know (Camel mailing list, comments, stackoverflow, somehow!!!) how it works for you.

Reference: Easy REST endpoints with Apache Camel 2.14 from our JCG partner Christian Posta at the Christian Posta Software Blog.

Validate Configuration on Startup

Do you remember that time when you spent a whole day trying to fix a problem, only to realize that you had mistyped a configuration setting? Yes. And it was not just one time. Avoiding that is not trivial, as not only you, but also the frameworks that you use, should take care of it. But let me outline my suggestion. Always validate your configuration on startup of your application. This involves three things:

First, check that your configuration values are correct. Test database connection URLs, file paths, numbers and periods of time. If a directory is missing, a database is unreachable, or you have specified a non-numeric value where a number or period of time is expected, you should know that immediately, rather than after the application has been in use for a while.

Second, make sure all required parameters are set. If a property is required, fail if it has not been set, and fail with a meaningful exception, rather than an empty NullPointerException (e.g. throw new IllegalArgumentException("database.url is required")).

Third, check that only allowed values are set in the configuration file. If a property is not recognized, fail immediately and report it. This will save you from spending a whole day trying to find out why setting the "request.timeuot" property didn't have any effect. This is applicable to optional properties that have default values, and comes with the extra step of adding new properties to a predefined list of allowed properties – and possibly forgetting to do that, leading to an exception – but that is unlikely to waste more than a minute.

A simple implementation of the last suggestion would look like this:

```java
Properties properties = loadProperties();
for (Object key : properties.keySet()) {
    if (!VALID_PROPERTIES.contains(key)) {
        throw new IllegalArgumentException("Property " + key
                + " is not recognized as a valid property. Maybe a typo?");
    }
}
```

Implementing the first one is a bit harder, as it needs some logic: in your generic properties-loading mechanism you don't know whether a property is a database connection URL, a folder, or a timeout. So you have to do these checks in the classes that know the purpose of each property. Your database connection handler knows how to work with a database URL, your file storage handler knows what a backup directory is, and so on. This can be combined with the required-property verification. Here, a library like Typesafe Config may come in handy, but it won't solve all problems. This is not only useful during development, but also for newcomers to the project trying to configure their local server, and most importantly in production, where you can immediately find out if there has been a misconfiguration in this release. Ultimately, the goal is to fail as early as possible if there is any problem with the supplied configuration, rather than spending hours chasing typos, missing values and services that are accidentally not running.

Reference: Validate Configuration on Startup from our JCG partner Bozhidar Bozhanov at the Bozho’s tech blog blog....
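Putting the three checks together, a minimal validator might look like the sketch below. The property names, the VALID_PROPERTIES whitelist and the REQUIRED_PROPERTIES set are illustrative assumptions, not a prescribed API:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Properties;
import java.util.Set;

public class ConfigValidator {

    // Illustrative whitelist of allowed property names (third check)
    private static final Set<String> VALID_PROPERTIES = new HashSet<>(
            Arrays.asList("database.url", "backup.dir", "request.timeout"));

    // Illustrative set of properties that must be present (second check)
    private static final Set<String> REQUIRED_PROPERTIES = new HashSet<>(
            Arrays.asList("database.url"));

    public static void validate(Properties properties) {
        // Reject unrecognized keys - catches typos like "request.timeuot" early
        for (Object key : properties.keySet()) {
            if (!VALID_PROPERTIES.contains(key)) {
                throw new IllegalArgumentException(
                        "Property " + key + " is not recognized as a valid property. Maybe a typo?");
            }
        }
        // Fail with a meaningful message if a required property is missing
        for (String required : REQUIRED_PROPERTIES) {
            if (properties.getProperty(required) == null) {
                throw new IllegalArgumentException(required + " is required");
            }
        }
        // Validate value formats where the expected type is known (first check, partial)
        String timeout = properties.getProperty("request.timeout");
        if (timeout != null) {
            try {
                Long.parseLong(timeout);
            } catch (NumberFormatException e) {
                throw new IllegalArgumentException(
                        "request.timeout must be a number, got: " + timeout);
            }
        }
    }
}
```

Called from your startup code, this fails fast on a typo'd key, a missing required key, or a non-numeric timeout, instead of letting the bad value surface hours later.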

Java Minor Releases Scheme Tweaked Again

In 2013, Oracle announced the Java SE – Change in Version Numbering Scheme. The announcement stated that Limited Update releases (those "that include new functionality and non-security fixes") and Critical Patch Updates (CPUs) (those "that only include fixes for security vulnerabilities") would be released with specific version number schemes. In particular, Limited Update releases would have version numbers that are multiples of 20, while Critical Patch Updates would have version numbers that are multiples of 5 and come after the latest Limited Update release version number. The purpose of this scheme change was to leave room for versions with numbers in between, which allows Oracle "to insert releases – for example security alerts or support releases, should that become necessary – without having to renumber later releases."

Yesterday's announcement ("Java CPU and PSU Releases Explained") states, "Starting with the release of Java SE 7 Update 71 (Java SE 7u71) in October 2014, Oracle will release a Critical Patch Update (CPU) at the same time as a corresponding Patch Set Update (PSU) for Java SE 7." This announcement explains the difference between a CPU and a PSU:

Critical Patch Update (CPU)
"Fixes to security vulnerabilities and critical bug fixes." The minimum recommended for everyone.

Patch Set Update (PSU)
"All fixes in the corresponding CPU" and "additional non-critical fixes." Recommended only for those needing bugs fixed by the PSU's additional fixes.

Yesterday's announcement states that PSU releases (which are really CPU+ releases) will be released along with their corresponding CPU releases. Because the additional fixes that a PSU release contains beyond what's in the CPU release are expected to be part of the next CPU release, developers are encouraged to experiment with PSU releases to ensure that coming CPU features work well for them.

Reference: Java Minor Releases Scheme Tweaked Again from our JCG partner Dustin Marx at the Inspired by Actual Events blog....
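Read this way, the 2013 scheme amounts to a simple rule of thumb. It can be sketched as a toy classifier – a deliberate simplification for illustration only, which ignores the inserted in-between releases the announcement allows for:

```java
public class JavaUpdateVersions {

    // Limited Update releases carry update numbers that are multiples of 20
    // (e.g. 7u20, 7u40, 7u60)
    public static boolean isLimitedUpdate(int updateNumber) {
        return updateNumber > 0 && updateNumber % 20 == 0;
    }

    // Critical Patch Updates carry update numbers that are multiples of 5
    // (but not of 20), falling after the latest Limited Update number
    // (e.g. 7u45, 7u65)
    public static boolean isCriticalPatchUpdate(int updateNumber) {
        return updateNumber > 0 && updateNumber % 5 == 0 && updateNumber % 20 != 0;
    }
}
```

Under this reading, 7u40 is a Limited Update and 7u45 a CPU, while numbers such as 7u51 fall into the gaps the scheme reserves for inserted releases.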

How to use Hibernate to generate a DDL script from your Play! Framework project

Ok, so you have been using the hibernate property name="hibernate.hbm2ddl.auto" value="update" to continuously update your database schema, but now you need a complete DDL script? Use this method from your Global class's onStart to export the DDL scripts. Just give it the package name (with path) of your Entities as well as a file name:

```java
public void onStart(Application app) {
    exportDatabaseSchema("models", "create_tables.sql");
}

public void exportDatabaseSchema(String packageName, String scriptFilename) {
    final Configuration configuration = new Configuration();
    final Reflections reflections = new Reflections(packageName);
    final Set<Class<?>> classes = reflections.getTypesAnnotatedWith(Entity.class);
    // iterate all Entity classes in the package indicated by the name
    for (final Class<?> clazz : classes) {
        configuration.addAnnotatedClass(clazz);
    }
    configuration.setProperty("hibernate.dialect", "org.hibernate.dialect.PostgreSQ9Dialect".replace("SQ9", "SQL9"));

    SchemaExport schema = new SchemaExport(configuration);
    schema.setOutputFile(scriptFilename);
    schema.setDelimiter(";");
    // just export the create statements in the script
    schema.execute(Target.SCRIPT, SchemaExport.Type.CREATE);
}
```

That is it! Thanks to @MonCalamari for answering my question on Stackoverflow here.

Reference: How to use Hibernate to generate a DDL script from your Play! Framework project from our JCG partner Brian Porter at the Poornerd blog....

Eclipse Extension Point Evaluation Made Easy

Coding Eclipse Extension Point evaluations tends to be a bit verbose and sparsely self-explanatory. As I got round to busying myself with this topic recently, I wrote a little helper with the intent to reduce boilerplate code for common programming steps, while increasing development guidance and readability at the same time. It turned out to be not that easy to find an expressive solution which matches all the use cases I could extract from current projects. So I thought it might be a good idea to share my findings and see what other people think of it.

Eclipse Extension Point Evaluation

Consider a simple extension point definition that supports an unbounded contribution of extensions. Each of these contributions should provide a Runnable implementation to perform some sort of operation. A usual evaluation task could be to retrieve all contributions, create the executable extensions and invoke each of those:

```java
public class ContributionEvaluation {

    private static final String EP_ID = "com.codeaffine.post.contribution";

    public void evaluate() {
        IExtensionRegistry registry = Platform.getExtensionRegistry();
        IConfigurationElement[] elements = registry.getConfigurationElementsFor( EP_ID );
        Collection<Runnable> contributions = new ArrayList<Runnable>();
        for( IConfigurationElement element : elements ) {
            Object extension;
            try {
                extension = element.createExecutableExtension( "class" );
            } catch( CoreException e ) {
                throw new RuntimeException( e );
            }
            contributions.add( ( Runnable )extension );
        }
        for( Runnable runnable : contributions ) {
            runnable.run();
        }
    }
}
```

While evaluate could be split into smaller methods to clarify its responsibilities, the class would also be filled with more glue code. As I find such sections hard to read and awkward to write, I was pondering a fluent interface approach that should guide a developer through the various implementation steps.
Combined with Java 8 lambda expressions I was able to create an auxiliary that boils down the evaluate functionality to:

```java
public void evaluate() {
    new RegistryAdapter()
        .createExecutableExtensions( EP_ID, Runnable.class )
        .withConfiguration( ( runnable, extension ) -> runnable.run() )
        .process();
}
```

Admittedly I cheated a bit, since it is possible to improve the first example a little as well, by using the Java 8 Collection#forEach feature instead of looping explicitly. But I think this still would not make the code really great! For general information on how to extend Eclipse using the extension point mechanism you might refer to the Plug-in Development Environment Guide of the online documentation.

RegistryAdapter

The main class of the helper implementation is the RegistryAdapter, which encapsulates the system's IExtensionRegistry instance and provides a set of methods to define what operations should be performed with respect to a particular extension point. At the moment the adapter allows to read contribution configurations or to create executable extensions. Multiple contributions are evaluated as shown above using methods that are denoted in plural – to evaluate exactly one contribution element, methods denoted in singular are appropriate. This means to operate on a particular runnable contribution you would use createExecutableExtension instead of createExecutableExtensions. Depending on which operation is selected, different configuration options are available. This is made possible as the fluent API implements a kind of grammar to improve guidance and programming safety. For example the readExtension operation does not allow to register an ExecutableExtensionConfigurator, since this would be an inept combination. The method withConfiguration allows to configure or initialize each executable extension after its creation. But as shown in the example above, it can also be used to invoke the runnable extension directly.
Due to the type-safe implementation of createExecutableExtension(s), it is possible to access the extension instance within the lambda expression without a cast. Finally the method process() executes the specified operation and returns a typed Collection of the created elements in case they are needed for further processing:

```java
Collection<Extension> extensions
    = new RegistryAdapter().readExtensions( EP_ID ).process();
```

Predicate

But how is it possible to select a single Eclipse extension point contribution element with the adapter? Assume that we add an attribute id to our contribution definition above. The fluent API of RegistryAdapter allows to specify a Predicate that can be used to select a particular contribution:

```java
public void evaluate() {
    new RegistryAdapter()
        .createExecutableExtension( EP_ID, Runnable.class )
        .withConfiguration( ( runnable, extension ) -> runnable.run() )
        .thatMatches( attribute( "id", "myContribution" ) )
        .process();
}
```

There is a utility class Predicates that provides a set of predefined implementations to ease common use cases like attribute selection. The code above is a shortcut, using static imports, for:

```java
.thatMatches( Predicates.attribute( "id", "myContribution" ) )
```

where "myContribution" stands for the unique id value declared in the extension contribution:

```xml
<extension point="com.codeaffine.post.contribution">
  <contribution id="myContribution" class="com.codeaffine.post.MyContribution">
  </contribution>
</extension>
```

Of course it is possible to implement custom predicates in case the presets are not sufficient:

```java
public void evaluate() {
    Collection<Extension> extensions = new RegistryAdapter()
        .readExtensions( EP_ID, Description.class )
        .thatMatches( (extension) -> extension.getValue() != null )
        .process();
}
```

Extension

Eclipse extension point evaluation usually operates on IConfigurationElement.
The adapter API is deliberately a bit blurry in distinguishing between extension point and configuration element, and provides a simple encapsulation called Extension. But for more sophisticated tasks, Extension instances make the underlying configuration element accessible. In general, Extension provides accessors to the attribute values, contribution names, contribution values and nested contributions, and allows to create an executable extension. One of the major reasons to introduce this abstraction was to have an API that converts checked CoreExceptions implicitly to runtime exceptions, as I am accustomed to working with the Fail Fast approach without bulky checked exception handling.

Exception Handling

However, in case the Eclipse extension evaluation is invoked at startup time of a plug-in, or gets executed in the background, Fail Fast is not an option. And it is surely not reasonable to ignore the remaining contributions after a particular contribution has caused a problem. Because of this, the adapter API allows to replace the Fail Fast mechanism with explicit exception handling:

```java
public void evaluate() {
    Collection<Runnable> contributions = new RegistryAdapter()
        .createExecutableExtensions( EP_ID, Runnable.class )
        .withExceptionHandler( (cause) -> handle( cause ) )
        .process();
    [...]
}

private void handle( CoreException cause ) {
    // do what you gotta do
}
```

Note that the returned collection of contributions contains, of course, only those elements that did not run into any trouble.

Where to get it?

For those who want to check it out, there is a P2 repository that contains the feature com.codeaffine.eclipse.core.runtime providing the RegistryAdapter and its accompanying classes. The repository is located at:

http://fappel.github.io/xiliary/

and the source code and issue tracker are hosted at:

https://github.com/fappel/xiliary

Although documentation is completely missing at this moment, it should be quite easy to get started with the given explanations of this post.
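As an aside, the trade-off between Fail Fast and an explicit exception handler is not specific to the extension registry. The pattern of routing failures to a handler while continuing with the remaining contributions can be sketched in plain Java, with no Eclipse dependencies – the names SafeEvaluation and evaluateAll below are hypothetical, not part of the library discussed in this post:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.function.Consumer;

public class SafeEvaluation {

    // Evaluate each contribution; route failures to the handler and keep going,
    // so one broken contribution does not discard the remaining ones.
    public static <T> Collection<T> evaluateAll(
            List<Callable<T>> contributions, Consumer<Exception> exceptionHandler) {
        Collection<T> results = new ArrayList<>();
        for (Callable<T> contribution : contributions) {
            try {
                results.add(contribution.call());
            } catch (Exception cause) {
                exceptionHandler.accept(cause);
            }
        }
        return results;
    }
}
```

The returned collection, as with the adapter above, contains only the elements that did not run into trouble; everything else has been reported to the handler.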
But please keep in mind that the little tool is in a very early state and will probably undergo some API changes. In particular, dealing only with CoreExceptions while looping over the contributions is still a bit too weak.

Conclusion

The sections above introduced the basic functionality of the RegistryAdapter and focused on how it eases Eclipse extension point evaluation. I replaced old implementations in my current projects with the adapter and did not run into any trouble, which means that the solution looks promising to me so far... But there is still more than meets the eye. With this little helper in place, combined with an additional custom assert type, writing integration tests for an extension point's evaluation functionality becomes a piece of cake. That topic is however out of scope for this post and will be covered next time. So stay tuned, and do not forget to share the knowledge in case you find the approach described above useful – thanks!

Reference: Eclipse Extension Point Evaluation Made Easy from our JCG partner Rudiger Herrmann at the Code Affine blog....


Microservices

A microservice is a small, focussed piece of software that can be developed, deployed and upgraded independently. Commonly, it exposes its functionality via a synchronous protocol such as HTTP/REST. That is my understanding of microservices, at least. There is no hard definition of what they are, but they currently seem to be the cool kid on the block, attracting increasing attention and becoming a mainstream approach to avoiding the problems of monolithic architectures. Like any architectural solution, they are not without their downsides too, such as increased deployment and monitoring complexity. This post will have a look at some of the common characteristics of microservices and contrast them with monolithic architectures.

Definition and Characteristics

Let's start with some definitions from folks wiser than I:

"The microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API." – Microservices by Martin Fowler and James Lewis [1]

"Functionally decompose an application into a set of collaborating services, each with a set of narrow, related functions, developed and deployed independently, with its own database." – Microservices Architecture by Chris Richardson [2]

"Microservices are a style of software architecture that involves delivering systems as a set of very small, granular, independent collaborating services." – Microservices – Not A Free Lunch by Benjamin Wootton [6]

Some of this may not sound new. Since way back in 1984, the Unix Philosophy [8] has advocated writing programs that do one thing well, working together with other programs through standard interfaces. So perhaps more useful than definitions are some common characteristics of a microservice:

Single purpose
Each service should be focussed, doing one thing well.
Cliff Moon [4] defined a microservice as "any isolated network service that will only perform operations on a single type of resource", and gives the example of a user microservice that can perform operations such as new signups, password resets, etc.

Loosely coupled
A microservice should be able to operate without relying on other services. That is not to say that microservices cannot communicate with other microservices; it's just that this should be the exception rather than the rule. A microservice should be, where possible, self-sufficient.

Independently deployable
With monoliths, a change to any single piece of the application requires the entire app to be deployed. With microservices, each one should be deployable by itself, independently of any other services or apps. This can provide great flexibility, or 'agility'. Ideally this should be done in a fully automated way; you'll want a solid Continuous Integration pipeline, and devops culture, behind you. As discussed below in the disadvantages section, the one caveat here is when you are changing your interfaces.

Small
The 'micro' in microservice isn't too important. 10 to 1000 lines of code might be a reasonable ball park, but a much better definition might be 'small enough to fit in your head' [3], that is, the project should be small enough to be easily understood by one developer. Another might be 'small enough to throw away', i.e. rewrite rather than maintain [3]. At one end of the scale, a single developer could create a microservice in a day. At the other end, Fowler [1] suggests that "The largest follow Amazon's notion of the Two Pizza Team (i.e. the whole team can be fed by two pizzas), meaning no more than a dozen people". The main point is that size is not the most important characteristic of a microservice – a single, focused purpose is. However, perhaps the best way to understand microservices is to consider an alternative, contrasting architectural style: the monolith.
The Monolithic Alternative

A monolithic application is one that is built and deployed as a single artifact (e.g. a war file). In many ways this is the opposite of the microservice architecture. Applications often start out life as a monolith, and for good reason. Monoliths are:

Easy to set up and develop – a single project in an IDE
Easy to deploy – a single war file
Scalable horizontally by adding more servers, typically behind a load balancer

In fact it is probably advisable to start your applications as a monolith. Keep things simple until you have a good reason for change (avoiding YAGNI architectural decisions). That being said, as monoliths grow, you may well start running into problems...

Problems with Monoliths

The codebase can be difficult to set up and understand
A large monolithic app can overload your IDE, be slow to build (and hence to run tests), and it can be difficult to understand the whole application. This can have a downward spiral effect on software quality.

Forced team dependencies
Teams are forced to coordinate (e.g. on technology choices, release cycles, shared resources etc.), even if what they are working on has little, if anything, in common. For example, two teams working on separate functionality within the same monolith may be forced to use the same versions of libraries. Team A needs to use Spring 3 for legacy code reasons. Team B wants to use Spring 4. With both Spring 3 and Spring 4 in your list of dependencies, which one actually gets used? In Java-world it is surprisingly easy to run into these conflicts.

How do you split up teams when using a monolithic architecture?
Often teams are split by technology, e.g. UI teams, server-side logic teams, and database teams. Even simple changes can require a lot of different teams. It may often be easier to hack the required functionality into the area your own team is responsible for, rather than deal with cross-team coordination, even if it was better placed elsewhere – Conway's Law [9] in action.
This is especially true the larger, more dispersed and more bureaucratic a team is.

An obstacle to frequent deployments
When you deploy your entire codebase in one go, each deployment becomes a much bigger, likely organization-wide, deal. Deployment cycles become slower and less frequent, which makes each deployment more risky.

A long-term commitment to a technology stack
Whether you like it or not! Would you like to start using Ruby in your project? If the whole app is written in Java, then you will probably be forced to rewrite it all! A daunting, and unlikely, possibility. This all-or-nothing setup is closely tied to the shared-library conflicts mentioned above.

Why use Microservices?

In contrast to monolithic applications, the microservice approach is to focus on a specific area of business functionality, not technology. Such services are often developed by teams that are cross-functional. This is perhaps one of the reasons why so many job descriptions these days say 'full stack'. So, what are the advantages of using microservices? Many of them relate to the problems with monolithic architectures mentioned above, and include:

Smaller and focussed
Being smaller and focussed means microservices are easier for developers to understand, and faster to build, deploy and start up.

Independently deployable
Each can be deployed without impacting other services (with interface changes being a notable exception).

Independently scalable
It is easy to add more instances of the services that are experiencing the heaviest load.

Independent technology stack
Each microservice can use a completely independent technology stack, making it easier to migrate your technology stack. I think it is worth pointing out here that just because you can use a different technology for each microservice doesn't mean you should! Increasingly heterogeneous stacks bring increasing complexity. Exercise caution and be driven by business needs.

Improved resiliency
If one service goes down (e.g.
a memory leak), the rest of the app should continue to run unaffected.

Disadvantages

Distributed applications are always more complicated! While monoliths typically use in-memory calls, microservices typically require inter-process calls, often to different boxes but in the same data center. As well as being more expensive, the APIs associated with remote calls are usually coarser-grained, which can be more awkward to use. Refactoring code in a monolith is easy; doing it across microservices can be much more difficult, e.g. moving code between microservices. Although microservices allow you to release independently, that is not so straightforward when you are changing interfaces – it requires coordination across the clients and the service that is changing. That being said, some ways to mitigate this are:

Use flexible, forgiving, broad interfaces. Be as tolerant as possible when reading data from a service. Use design patterns like the Tolerant Reader.

"Be conservative in what you do, be liberal in what you accept from others" — Jon Postel

Where things can start to get hard with microservices is at an operations level. Runtime management and monitoring of microservices in particular can be problematic. A good ops/devops team is necessary, particularly when you are deploying large numbers of microservices at scale. Whereas detecting problems in a single monolithic application can be dealt with by attaching a monitor to the single process, doing the same when you have dozens of processes interacting is much more difficult.

Microservices vs SOA

SOA, or Service Oriented Architecture, is an architectural design pattern that seems to have somewhat fallen out of favor. SOA also involves a collection of services, so what are the differences between SOA and microservices? It is a difficult question to answer, but Fowler has used the term 'SOA done right', which I like. Adrian Cockcroft [15] described a microservice as being like SOA but with a bounded context.
Wikipedia distinguishes the two by saying that SOA aims at integrating various (business) applications, whereas several microservices belong to one application only [14]. A related aspect is that many SOAs use ESBs (Enterprise Service Buses), whereas microservices tend towards smart endpoints and dumb pipes [1]. Finally, although neither microservices nor SOA are tied to any one protocol or data format, SOAs did seem to frequently involve Simple Object Access Protocol (SOAP)-based web services, using XML and WSDL etc., whereas microservices seem to commonly favour REST and JSON.

Who is using Microservices?

Most large-scale web sites, including Netflix, Amazon and eBay, have evolved from a monolithic architecture to a microservices architecture. Amazon was one of the pioneers of using microservices; between 100 and 150 services are accessed to build a single page [10]. If, for example, the recommendation service is down, default recommendations can be used. These may be less effective at tempting you to buy, but are a better alternative than errors or no recommendations at all. Netflix are also pioneers in the microservice world, not only using microservices extensively, but also releasing many useful tools back into the open source world, including Chaos Monkey for testing web application resiliency and Janitor Monkey for cleaning up unused resources. See more at netflix.github.io. TicketMaster, the ticket sales and distribution company, is also making increasing use of microservices to give them "Boardroom agility or the process of quickly reacting to the marketplace." [12]

Best practices

Some best practices for microservices might be:

Separate codebases
Each microservice has its own repository and CI build setup.

Separate interface and implementation
Separate the API and implementation modules, using a Maven multi-module project or similar.
For example, clients should depend on CustomerDao rather than CustomerDaoImpl, or JpaCustomerDao.

Use monitoring!
For example AppDynamics and New Relic.

Have health checks built into your services.

Have standard templates available
If many developers are creating microservices, have a template they can use that gets them up and running quickly and implements corporate standards for logging and the aforementioned monitoring and health checks.

Support multiple versions
Leave multiple old microservice versions running. There is an asymmetry between fast introduction and slow retirement [11].

Summary

As an architectural approach, and particularly as an alternative to monolithic architectures, microservices are an attractive choice. They allow independent technology stacks to be used, with each service being independently built and deployed, meaning you are much more likely to be able to follow the 'deploy early, deploy often' mantra. That being said, they do bring their own complexities, including deployment and monitoring. It is advisable to start with the relative simplicity of a monolithic approach and only consider microservices when you start running into problems. Even then, migrating slowly to microservices is likely a sensible approach: for example, introducing new areas of functionality as microservices, and slowly migrating old ones as they need updates and rewrites anyway. And all the while, bear in mind that while each microservice itself may be simple, some of the complexity is simply moved up a level. The coordination of dozens or even hundreds of microservices brings many new challenges, including build, deployment and monitoring, and shouldn't be undertaken without a solid Continuous Delivery infrastructure in place, and a good devops mentality within the team. Cross-functional and multidisciplinary teams using automation are essential. Used judiciously and with the right infrastructure in place, microservices seem to be thriving.
I like Martin Fowler's guarded optimism: "We write with cautious optimism that microservices can be a worthwhile road to tread". [1]

References and reading materials:

1. Microservices by Martin Fowler and James Lewis
2. Microservices Architecture by Chris Richardson
3. Micro services – Java, the Unix Way by James Lewis
4. Microservices, or How I Learned To Stop Making Monoliths and Love Conway's Law by Cliff Moon
5. Micro service architecture by Fred George
6. Microservices are not a free lunch by Benjamin Wootton
7. Antifragility and Microservices by Russ Miles
8. The Unix Philosophy
9. Conway's Law
10. Amazon Architecture
11. Migrating to microservices by Adrian Cockcroft
12. Microservices with Spring Boot
13. Microservices for the Grumpy Neckbeard
14. Microservices definition on Wikipedia
15. Microservices and DevOps by Adrian Cockcroft

Reference: Microservices from our JCG partner Shaun Abram at the Shaun Abram's blog blog....

Typesafe APIs for the browser

A new feature in Ceylon 1.1 that I’ve not blogged about before is dynamic interfaces. This was something that Enrique and I worked on together with Corbin Uselton, one of our GSoC students.

Ordinarily, when we interact with JavaScript objects, we do it from within a dynamic block, where Ceylon’s usual scrupulous typechecking is suppressed. The problem with this approach is that if it’s an API I use regularly, my IDE can’t help me remember the names and signatures of all the operations of the API. Dynamic interfaces make it possible to ascribe static types to an untyped JavaScript API. For example, we could write a dynamic interface for the HTML5 CanvasRenderingContext2D like this:

dynamic CanvasRenderingContext2D {
    shared formal variable String|CanvasGradient|CanvasPattern fillStyle;
    shared formal variable String font;

    shared formal void beginPath();
    shared formal void closePath();

    shared formal void moveTo(Integer x, Integer y);
    shared formal void lineTo(Integer x, Integer y);

    shared formal void fill();
    shared formal void stroke();

    shared formal void fillText(String text, Integer x, Integer y, Integer maxWidth=-1);

    shared formal void arc(Integer x, Integer y, Integer radius, Float startAngle, Float endAngle, Boolean anticlockwise);
    shared formal void arcTo(Integer x1, Integer y1, Integer x2, Integer y2, Integer radius);

    shared formal void bezierCurveTo(Integer cp1x, Integer cp1y, Integer cp2x, Integer cp2y, Integer x, Integer y);

    shared formal void strokeRect(Integer x, Integer y, Integer width, Integer height);
    shared formal void fillRect(Integer x, Integer y, Integer width, Integer height);
    shared formal void clearRect(Integer x, Integer y, Integer width, Integer height);

    shared formal CanvasGradient createLinearGradient(Integer x0, Integer y0, Integer x1, Integer y1);
    shared formal CanvasGradient createRadialGradient(Integer x0, Integer y0, Integer r0, Integer x1, Integer y1, Integer r1);
    shared formal CanvasPattern createPattern(dynamic image, String repetition);

    //TODO: more operations!!
}

dynamic CanvasGradient {
    shared formal void addColorStop(Integer offset, String color);
}

dynamic CanvasPattern {
    //todo
}

Now, if we assign an instance of JavaScript’s CanvasRenderingContext2D to this interface type, we won’t need to be inside a dynamic block when we call its methods. You can try it out in your own browser by clicking the “TRY ONLINE” button!

CanvasRenderingContext2D ctx;
dynamic {
    //get the CanvasRenderingContext2D from the
    //canvas element using dynamically typed code
    ctx = ... ;
}

//typesafe code, checked at compile time
ctx.fillStyle = "navy";
ctx.fillRect(50, 50, 235, 60);
ctx.beginPath();
ctx.moveTo(100, 50);
ctx.lineTo(60, 5);
ctx.lineTo(75, 75);
ctx.fill();
ctx.fillStyle = "orange";
ctx.font = "40px PT Sans";
ctx.fillText("Hello world!", 60, 95);

Notice that we don’t need to ascribe an explicit type to every operation of the interface. We can leave some methods, or even just some parameters of a method, untyped by declaring them dynamic. Such operations may only be called from within a dynamic block, however.

A word of caution: dynamic interfaces are a convenient fiction. They can help make it easier to work with an API in your IDE, but at runtime there is nothing Ceylon can do to ensure that the object you assign to the dynamic interface type actually implements the operations you’ve ascribed to it.

Reference: Typesafe APIs for the browser from our JCG partner Gavin King at the Ceylon Team blog.
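To make the last point concrete: the createPattern operation above declares its image parameter dynamic, so even though the interface as a whole is typed, that particular call may only be made from within a dynamic block. Here is a minimal sketch of how that plays out; the element ids ("canvas", "tile") and the document lookups are assumptions for the sake of the example, not part of any Ceylon API:

CanvasRenderingContext2D ctx;
dynamic {
    //dynamically typed setup code
    value canvas = document.getElementById("canvas");
    ctx = canvas.getContext("2d");
    //createPattern has a dynamic parameter, so the call
    //must happen inside a dynamic block
    value img = document.getElementById("tile");
    ctx.fillStyle = ctx.createPattern(img, "repeat");
}
//the fully typed operations remain checked at compile time
ctx.fillRect(0, 0, 300, 150);

So a single untyped parameter only forces the call site of that one operation back into a dynamic block; the rest of the interface stays typesafe.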
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.