Java Word (.docx) documents with docx4j

A couple of months ago I needed to create a dynamic Word document with a number of tables and paragraphs. In the past I've used POI for this, but I found it hard to use and it didn't work that well for me when creating more complex documents. So for this project, after some searching around, I decided to use docx4j. According to its site: "docx4j is a Java library for creating and manipulating Microsoft Open XML (Word docx, Powerpoint pptx, and Excel xlsx) files. It is similar to Microsoft's OpenXML SDK, but for Java." In this article I'll show you a couple of examples you can use to generate content for Word documents. More specifically, we'll look at the following:

- Load a template Word document to add content to and save as a new document
- Add paragraphs to this template document
- Add tables to this template document

The general approach is to first create a Word document that contains the layout and main styles of your final document. In this document you add placeholders (simple strings) that we'll search for and replace with real content. A very basic template contains little more than these placeholder strings; in this article we'll show how to fill it to get a complete document.

Load in a template Word document to add content to and save as a new document

First things first. Let's create a simple Word document that we can use as a template. For this, just open Word, create a new document and save it as template.docx. This is the Word template we'll add content to. The first thing we need to do is load this document with docx4j. You can do this with the following piece of Java code:

private WordprocessingMLPackage getTemplate(String name) throws Docx4JException, FileNotFoundException {
    WordprocessingMLPackage template = WordprocessingMLPackage.load(new FileInputStream(new File(name)));
    return template;
}

This returns a Java object representing the complete (at this moment empty) document.
We can now use the docx4j API to add, delete and modify content in this Word document. Docx4j has a number of helper classes you can use to traverse the document. I did write a couple of helpers myself, though, that make it really easy to find the specific placeholders and replace them with the real content. Let's look at one of them. This operation is a wrapper around a couple of JAXB operations that lets you search a specific element and all its children for a certain class. You can, for instance, use this to get all the tables in the document, all the rows within a table, and so on.

private static List<Object> getAllElementFromObject(Object obj, Class<?> toSearch) {
    List<Object> result = new ArrayList<Object>();
    if (obj instanceof JAXBElement) obj = ((JAXBElement<?>) obj).getValue();

    if (obj.getClass().equals(toSearch))
        result.add(obj);
    else if (obj instanceof ContentAccessor) {
        List<?> children = ((ContentAccessor) obj).getContent();
        for (Object child : children) {
            result.addAll(getAllElementFromObject(child, toSearch));
        }
    }
    return result;
}

Nothing too complex, but really helpful. Let's see how we can use this operation. For this example we'll just replace a simple text placeholder with a different value. This is, for instance, something you'd use to dynamically set the title of a document. First, though, add a custom placeholder to the Word template you created. I'll use SJ_EX1 for this. We'll replace this value with our name. The basic text elements in docx4j are represented by the org.docx4j.wml.Text class.
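The recursion in getAllElementFromObject is just a depth-first search. Stripped of the JAXB specifics, the same idea can be sketched over a plain tree of objects. The Node class below is an illustrative stand-in for docx4j's ContentAccessor and is not part of the library:

```java
import java.util.ArrayList;
import java.util.List;

public class TreeSearch {
    // A minimal stand-in for a docx4j content node: a value plus children.
    static class Node {
        final Object value;
        final List<Node> children = new ArrayList<>();
        Node(Object value) { this.value = value; }
    }

    // Depth-first collection of every value of the requested class.
    // Unlike the docx4j helper, this version also keeps searching below
    // a match, and it skips the JAXBElement unwrapping step.
    static List<Object> findAll(Node node, Class<?> toSearch) {
        List<Object> result = new ArrayList<>();
        if (toSearch.isInstance(node.value)) {
            result.add(node.value);
        }
        for (Node child : node.children) {
            result.addAll(findAll(child, toSearch));
        }
        return result;
    }

    public static void main(String[] args) {
        Node root = new Node("root");
        Node mid = new Node(42);
        root.children.add(mid);
        mid.children.add(new Node("leaf"));
        System.out.println(findAll(root, String.class));
    }
}
```

The traversal pattern is the same one the article's helper applies to the JAXB object tree behind a Word document.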
To replace this simple placeholder all we have to do is call this method:

private void replacePlaceholder(WordprocessingMLPackage template, String name, String placeholder) {
    List<Object> texts = getAllElementFromObject(template.getMainDocumentPart(), Text.class);

    for (Object text : texts) {
        Text textElement = (Text) text;
        if (textElement.getValue().equals(placeholder)) {
            textElement.setValue(name);
        }
    }
}

This looks for all the Text elements in the document, and those that match the placeholder are replaced with the value we specify. Now all we need to do is write the document back to a file.

private void writeDocxToStream(WordprocessingMLPackage template, String target) throws IOException, Docx4JException {
    File f = new File(target);
    template.save(f);
}

Not that hard, as you can see. With this setup we can also add more complex content to our Word documents. The easiest way to determine how to add specific content is by looking at the XML source of the Word document. That'll tell you which wrappers are needed and how Word marshals the XML. For the next example we'll look at how to add a complete paragraph.

Add paragraphs to this template document

You might wonder why we need to be able to add paragraphs. We can already add text, and isn't a paragraph just a large piece of text? Well, yes and no. A paragraph indeed looks like a big piece of text, but what you need to take into account are the line breaks. If you add a Text element, like we did earlier, and add line breaks to the text, they won't show up. When you want line breaks, you'll need to create a new paragraph. Luckily, though, this is also very easy to do with docx4j. We'll take the following steps:

- Find the paragraph to replace in the template
- Split the input text into separate lines
- For each line, create a new paragraph based on the paragraph from the template
- Remove the original paragraph

This shouldn't be too hard with the helper methods we already have.
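To see which wrappers are needed, you can unzip the .docx (it is just a zip archive) and open word/document.xml. A single paragraph holding one run of text looks roughly like this (attributes omitted); each element maps onto a docx4j class that the helpers search for:

```xml
<w:p>                  <!-- paragraph: org.docx4j.wml.P -->
  <w:r>                <!-- run: org.docx4j.wml.R -->
    <w:t>SJ_EX1</w:t>  <!-- text: org.docx4j.wml.Text -->
  </w:r>
</w:p>
```

This mapping is why searching for Text.class finds the placeholder strings, while searching for P.class finds the paragraphs that contain them.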
private void replaceParagraph(String placeholder, String textToAdd, WordprocessingMLPackage template, ContentAccessor addTo) {
    // 1. get the paragraph
    List<Object> paragraphs = getAllElementFromObject(template.getMainDocumentPart(), P.class);

    P toReplace = null;
    for (Object p : paragraphs) {
        List<Object> texts = getAllElementFromObject(p, Text.class);
        for (Object t : texts) {
            Text content = (Text) t;
            if (content.getValue().equals(placeholder)) {
                toReplace = (P) p;
                break;
            }
        }
    }

    // we now have the paragraph that contains our placeholder: toReplace
    // 2. split into separate lines
    String as[] = StringUtils.splitPreserveAllTokens(textToAdd, '\n');

    for (int i = 0; i < as.length; i++) {
        String ptext = as[i];

        // 3. copy the found paragraph to keep styling correct
        P copy = (P) XmlUtils.deepCopy(toReplace);

        // replace the text elements from the copy
        List texts = getAllElementFromObject(copy, Text.class);
        if (texts.size() > 0) {
            Text textToReplace = (Text) texts.get(0);
            textToReplace.setValue(ptext);
        }

        // add the paragraph to the document
        addTo.getContent().add(copy);
    }

    // 4. remove the original one
    ((ContentAccessor) toReplace.getParent()).getContent().remove(toReplace);
}

In this method we replace the content of a paragraph with the supplied text and then add the new paragraphs to the element specified with addTo.

String placeholder = "SJ_EX1";
String toAdd = "jos\ndirksen";
replaceParagraph(placeholder, toAdd, template, template.getMainDocumentPart());

If you run this with more content in your Word template, you'll notice that the paragraphs appear at the bottom of your document. The reason is that the paragraphs are added back to the main document part. If you want your paragraphs added at a specific place in your document (which is usually what you want), you can wrap them in a 1x1 borderless table. This table is then seen as the parent of the paragraph, and the new paragraphs can be added there.
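A note on step 2: splitPreserveAllTokens comes from Commons Lang. If you'd rather avoid that dependency, plain String.split with a -1 limit preserves empty tokens the same way, so blank lines in the input still become empty paragraphs. A minimal sketch, with an illustrative class name:

```java
import java.util.Arrays;
import java.util.List;

public class LineSplitter {
    // Split text into lines, keeping empty lines, so that blank
    // paragraphs in the source text survive as empty paragraphs.
    static List<String> splitLines(String text) {
        // The -1 limit keeps empty trailing strings, matching the
        // behaviour of StringUtils.splitPreserveAllTokens(text, '\n').
        return Arrays.asList(text.split("\n", -1));
    }

    public static void main(String[] args) {
        System.out.println(splitLines("jos\n\ndirksen"));
    }
}
```

With the default String.split (no limit), trailing empty lines would be silently dropped, which is why the explicit -1 matters here.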
Add tables to this template document

The final example I'd like to show is how to add tables to a Word template. A better description, actually, would be how to fill predefined tables in your Word template. Just as we did for simple text and paragraphs, we'll replace placeholders. For this example, add a simple table to your Word document (which you can style as you like). To this table add one dummy row that serves as a template for the content. In the code we'll look for that row, copy it, and replace the content with new rows, like this:

- Find the table that contains one of our keywords
- Copy the row that serves as the row template
- For each row of data, add a row to the table based on the row template
- Remove the original template row

The same approach we've already shown for the paragraphs. First, though, let's look at how we'll provide the replacement data. For this example I just supply a set of hash maps that contain the name of the placeholder to replace and the value to replace it with. I also provide the replacement tokens that can be found in the table row.

Map<String, String> repl1 = new HashMap<String, String>();
repl1.put("SJ_FUNCTION", "function1");
repl1.put("SJ_DESC", "desc1");
repl1.put("SJ_PERIOD", "period1");

Map<String, String> repl2 = new HashMap<String, String>();
repl2.put("SJ_FUNCTION", "function2");
repl2.put("SJ_DESC", "desc2");
repl2.put("SJ_PERIOD", "period2");

Map<String, String> repl3 = new HashMap<String, String>();
repl3.put("SJ_FUNCTION", "function3");
repl3.put("SJ_DESC", "desc3");
repl3.put("SJ_PERIOD", "period3");

replaceTable(new String[]{"SJ_FUNCTION", "SJ_DESC", "SJ_PERIOD"}, Arrays.asList(repl1, repl2, repl3), template);

Now what does this replaceTable method look like?

private void replaceTable(String[] placeholders, List<Map<String, String>> textToAdd, WordprocessingMLPackage template) throws Docx4JException, JAXBException {
    List<Object> tables = getAllElementFromObject(template.getMainDocumentPart(), Tbl.class);

    // 1. find the table
    Tbl tempTable = getTemplateTable(tables, placeholders[0]);
    List<Object> rows = getAllElementFromObject(tempTable, Tr.class);

    // first row is header, second row is content
    if (rows.size() == 2) {
        // this is our template row
        Tr templateRow = (Tr) rows.get(1);

        for (Map<String, String> replacements : textToAdd) {
            // 2 and 3 are done in this method
            addRowToTable(tempTable, templateRow, replacements);
        }

        // 4. remove the template row
        tempTable.getContent().remove(templateRow);
    }
}

This method finds the table, gets the template row (the second row; the first is the header) and, for each supplied map, adds a new row to the table. Before returning, it removes the template row. This method uses two helpers: addRowToTable and getTemplateTable. We'll look at the latter first:

private Tbl getTemplateTable(List<Object> tables, String templateKey) throws Docx4JException, JAXBException {
    for (Iterator<Object> iterator = tables.iterator(); iterator.hasNext();) {
        Object tbl = iterator.next();
        List<?> textElements = getAllElementFromObject(tbl, Text.class);
        for (Object text : textElements) {
            Text textElement = (Text) text;
            if (textElement.getValue() != null && textElement.getValue().equals(templateKey))
                return (Tbl) tbl;
        }
    }
    return null;
}

This function just checks whether a table contains one of our placeholders; if so, that table is returned. The addRowToTable operation is also very simple.

private static void addRowToTable(Tbl reviewtable, Tr templateRow, Map<String, String> replacements) {
    Tr workingRow = (Tr) XmlUtils.deepCopy(templateRow);
    List textElements = getAllElementFromObject(workingRow, Text.class);
    for (Object object : textElements) {
        Text text = (Text) object;
        String replacementValue = (String) replacements.get(text.getValue());
        if (replacementValue != null)
            text.setValue(replacementValue);
    }

    reviewtable.getContent().add(workingRow);
}

This method copies our template row and replaces the placeholders in the copy with the provided values. The copy is then added to the table. And that's it.
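The placeholder matching inside addRowToTable is independent of docx4j: treating a row as a simple list of cell texts, the same logic can be sketched and tested in isolation. The class and method names here are illustrative, not from the article:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class RowFiller {
    // Copy the template row and replace each cell whose text matches a
    // placeholder key with the mapped value; other cells are left alone.
    static List<String> fillRow(List<String> templateRow, Map<String, String> replacements) {
        // The list copy stands in for XmlUtils.deepCopy of the Tr element.
        List<String> row = new ArrayList<>(templateRow);
        for (int i = 0; i < row.size(); i++) {
            String value = replacements.get(row.get(i));
            if (value != null) {
                row.set(i, value);
            }
        }
        return row;
    }
}
```

Cells that aren't placeholder keys pass through untouched, which is what lets you mix fixed labels and placeholders in the same template row.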
With this piece of code we can fill arbitrary tables in our Word documents while preserving table layout and styling. That's it for this article. With paragraphs and tables you can create many different types of documents, and this nicely matches the kinds of documents that are most often generated. The same approach can also be used to add other types of content to Word documents. Reference: Create complex Word (.docx) documents programmatically with docx4j from our JCG partner Jos Dirksen at the Smart Java blog.

Spring MVC Integration Tests

An approach to integration-testing the controllers in Spring MVC is to use the integration test support provided by Spring. With JUnit 4 this support consists of a custom JUnit runner called SpringJUnit4ClassRunner, and a custom annotation to load up the relevant Spring configuration. A sample integration test would be along these lines:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = {"classpath:/META-INF/spring/webmvc-config.xml", "contextcontrollertest.xml"})
public class ContextControllerTest {

    @Autowired
    private RequestMappingHandlerAdapter handlerAdapter;

    @Autowired
    private RequestMappingHandlerMapping handlerMapping;
    ......

    @Test
    public void testContextController() throws Exception {
        MockHttpServletRequest httpRequest = new MockHttpServletRequest("POST", "/contexts");
        httpRequest.addParameter("name", "context1");
        httpRequest.setAttribute(DispatcherServlet.OUTPUT_FLASH_MAP_ATTRIBUTE, new FlashMap());
        MockHttpServletResponse response = new MockHttpServletResponse();

        Authentication authentication = new UsernamePasswordAuthenticationToken(new CustomUserDetails(..), null);
        SecurityContextHolder.getContext().setAuthentication(authentication);

        Object handler = this.handlerMapping.getHandler(httpRequest).getHandler();
        ModelAndView modelAndView = handlerAdapter.handle(httpRequest, response, handler);
        assertThat(modelAndView.getViewName(), is("redirect:/contexts"));
    }
}

I have used a MockHttpServletRequest to create a dummy POST request to the "/contexts" URI, and added some authentication details so that the Spring Security context is available in the controller. The ModelAndView returned by the controller is validated to make sure the returned view name is as expected. A better way to perform a controller-related integration test is to use a relatively new Spring project called spring-test-mvc, which provides a fluent way to test controller flows.
The same test looks like the following with spring-test-mvc:

@Test
public void testContextController() throws Exception {
    Authentication authentication = new UsernamePasswordAuthenticationToken(new CustomUserDetails(..), null);
    SecurityContextHolder.getContext().setAuthentication(authentication);

    xmlConfigSetup("classpath:/META-INF/spring/webmvc-config.xml",
            "classpath:/org/bk/lmt/web/contextcontrollertest.xml").build()
        .perform(post("/contexts").param("name", "context1"))
        .andExpect(status().isOk())
        .andExpect(view().name("redirect:/contexts"));
}

The test has now become much more concise, there is no need to deal directly with MockHttpServletRequest and MockHttpServletResponse instances, and it reads very well. I have a little reservation about the number of static imports and function calls involved, but, like everything else, it is just a matter of getting used to this approach to testing. Resources under the WEB-INF location can also be used with spring-test-mvc, this way:

xmlConfigSetup("/WEB-INF/spring/webmvc-config.xml",
        "classpath:/org/bk/lmt/web/contextcontrollertest.xml")
    .configureWebAppRootDir("src/main/webapp", false).build()
    .perform(post("/contexts").param("name", "context1"))
    .andExpect(status().isOk())
    .andExpect(view().name("redirect:/contexts"));

xmlConfigSetup("/WEB-INF/spring/webmvc-config.xml",
        "classpath:/org/bk/lmt/web/contextcontrollertest.xml")
    .configureWebAppRootDir("src/main/webapp", false).build()
    .perform(get("/contexts"))
    .andExpect(status().isOk())
    .andExpect(view().name("contexts/list"));

Reference: Spring MVC Integration Tests from our JCG partner Biju Kunjummen at the all and sundry blog.

Running Cassandra in a Multi-node Cluster

This post gathers the steps I followed in setting up a multi-node Apache Cassandra cluster. I referred to the Cassandra wiki and the DataStax documentation in setting up my cluster. The procedure below shares my experience in detail:

- Setting up the first node
- Adding other nodes
- Monitoring the cluster – nodetool, jConsole, Cassandra GUI

I used Cassandra 1.1.0 and Cassandra GUI cassandra-gui-0.8.0-beta1 (as older releases had problems showing data) on Ubuntu.

Setting up the first node

Open cassandra.yaml, which is in 'apache-cassandra-1.1.0/conf', and change:

listen_address: localhost --> listen_address: <node IP address>
rpc_address: localhost --> rpc_address: <node IP address>
- seeds: '127.0.0.1' --> - seeds: '<node IP address>'

The listen address defines where the other nodes in the cluster should connect, so in a multi-node cluster it should be changed to the address of the node's own Ethernet interface. The rpc address defines where the node listens for clients, so it can be the node's IP address, or a wildcard if we want to listen for Thrift clients on all available interfaces. The seeds act as the communication points: when a new node joins the cluster, it contacts the seeds and gets the basic information about the ring and the other nodes. So in a multi-node cluster it needs to be changed to a routable address as above, which makes this node a seed.

Note: In a multi-node cluster it is better to have multiple seeds. Using a single seed does not create a single point of failure, but it will delay the spreading of status messages around the ring. A list of nodes to act as seeds can be defined as follows:

- seeds: '<ip1>,<ip2>,<ip3>'

For the moment, let's go forward with the previous configuration, with a single seed. Now we can simply start Cassandra on this node, which will run perfectly fine without the rest of the nodes. Let's imagine our cluster needs increased performance and more data is being fed into the system.
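Assuming a first node at 192.168.1.10 (an illustrative address), the edited entries in conf/cassandra.yaml would look roughly like this. Note that in Cassandra 1.1 the seeds line sits under the seed_provider section:

```yaml
listen_address: 192.168.1.10        # where other nodes in the ring connect
rpc_address: 192.168.1.10           # where Thrift clients connect

seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "192.168.1.10"   # comma-separate to list multiple seeds
```

On the other nodes only listen_address and rpc_address change; the seeds value stays pointed at the first node.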
So it's time to add another node to the cluster.

Adding other nodes

Simply copy the Apache Cassandra folder of the first node to each of the new nodes. Now set listen_address: <node IP address> and rpc_address: <node IP address> as relevant for each node (no need to touch the seeds section). When we start each node, it will join the ring, using the seeds as hubs of the gossip network. The logs will show information about the other nodes in the cluster as it sees them.

Monitoring the cluster

Nodetool – This is shipped with Apache Cassandra. We can run it from inside the Cassandra folder with bin/nodetool. With the ring command of nodetool we can check some information about the ring as follows:

bin/nodetool -host <node IP address> ring

It has many more useful functions, which are documented on the site.

jConsole – We can use this to monitor memory usage, thread behaviour, etc. It is very helpful for analysing the cluster in detail and fine-tuning performance. This guide also carries good information on using jConsole if you are not familiar with it already.

Cassandra GUI – This satisfies the need to visualize the data inside the cluster. With it we can see the content distributed across the cluster in one place.

Reference: Running Cassandra in a Multi-node Cluster from our JCG partner Pushpalanka at Pushpalanka's Blog.

Why Do We Need Managers Anyway?

I've been scratching my head lately regarding management and what it's good for. So I'm probably going to spend some time on the subject in this here blog. Long-time readers of this blog (you poor guys, thanks for sticking with me!) know I'm a fan of Manager Tools. For the last seven years, these guys have been pumping out great advice, and even more important, actionable advice, for managers who want to become more effective. The need for this podcast and their consulting services is apparent. Wherever you go you'll find bad managers. Actually, bad or good is not the correct term here – it's effectiveness. Effective managers improve the effectiveness of their teams, and so the organization benefits. And Manager Tools helps in becoming more effective in the modern enterprise organization structure.

Let's switch back to agile land for a minute. We know effective agile teams are self-organized; there is no manager role in a scrum team. We also know that this kind of team usually clashes with the rest of the organization. Yet, assuming that the team is really effective without "proper management", can we apply this to whole organizations? If most managers are not effective, and are really an obstacle, why do we need them?

What managers do

Here are the responsibilities of managers – responsibilities created by company structure, which can change if the structure changes. I don't include technical responsibilities, like planning or budgeting; while mostly done by managers, they can be done by team members.

- Hiring and firing
- Promoting team members
- Coaching team members
- Improving effectiveness
- Protecting the team
- Translating for the team
- Solving conflicts
- Making decisions

(I probably missed one or ten, but this mostly covers it for now.) Do these activities require a dedicated manager? Hiring and firing seems like a managerial job, yet in effective organizations the team is part of the process of interviewing and accepting the new guy/girl.
They are also usually passive players (sometimes more) in ejecting him or her. While these decisions don't require a manager, organizational structure does: hiring someone means money transfers, firing them means legal stuff, and the system requires an authority of approval. Promotion is an organizational idiom (in flat organizations, promotions make no sense), but it comes with money perks, so again we need an approval authority. Do teams need managers to improve? Jurgen Appelo thinks not, and I agree. Coaching can be effective, given the expertise; if the manager doesn't have the knowledge, she can't coach, but she can make sure coaching occurs. This requires authority and follow-up. As we look to improve the performance of the team, the manager can create the conditions where effectiveness improves. The less of an expert she is, the more the team members rely on themselves to improve. Protecting the team may sound patronizing, but as all agile practitioners know, it is required. Protection from what? Other parts of the organization. Actually, protection is part of inter-organizational communication. It is simpler (although maybe not as effective) to communicate through a small number of managers rather than with everyone directly. The manager at this point translates an organizational message into "what does it mean to us". Anyone can do that, but it requires some recognition in the organization, along with organizational know-how. Finally, decision making and solving conflicts within the team stem directly from authority. Anyone can and does make decisions every day. We don't need special people for that, yet we gravitate towards certain people who have these skills. We may even call them "leaders". So do we need managers? We talk about managers, and how organizations are easy to manage, because it simplifies management. The word "management" is so engrossing, it's easy to dump so much into it. We can reduce the term "managers" to contact points with the rest of the organization.
The teams can do the rest, and their effectiveness will probably improve. Without managers we'll have anarchy, which may not be that bad: take a look at Fred George's experience. He calls it "Programmer Anarchy". It works. Is anarchy for everyone? Can we get rid of managers altogether? Do we want to? I'll wait a few more posts to get to that decision. Reference: Why Do We Need Managers Anyway? from our JCG partner Gil Zilberfeld at the Geek Out of Water blog.

Spring & JSF integration: Exception Handling

Most JSF developers will be familiar with the "An Error Occurred" page that gets displayed when an unexpected exception is thrown somewhere in their code. This page is really useful when developing, but is not something you usually want for a production application. With stock JSF you generally have a couple of options when it comes to replacing this page: you can define some <error-page> elements in your web.xml, or you can write a custom ExceptionHandler. Neither option is ideal for a Spring developer; <error-page> elements tend to be too simplistic, and it is hard to use Spring concepts, such as dependency injection, with custom ExceptionHandlers. Luckily both JSF and Spring are very extensible frameworks, so a project that I have been working on to integrate the technologies can offer some compelling alternatives. The first option available allows ExceptionHandlers to be registered as Spring beans. Rather than use the existing javax.faces.context.ExceptionHandler class, a new org.springframework.springfaces.exceptionhandler.ExceptionHandler interface is available. The interface is pretty straightforward: it defines a single handle method that should return true if the exception has been handled. The interface uses a generic to limit the types of exception considered.

public interface ExceptionHandler<E extends Throwable> {
    boolean handle(E exception, ExceptionQueuedEvent event) throws Exception;
}

All relevant beans that implement the ExceptionHandler interface will be consulted when an exception occurs in JSF. The first handler to return true "wins", and subsequent handlers will not be called. You can use the org.springframework.core.Ordered interface or the @Order annotation if you need to sort your handlers. Of course, now that exception handlers are regular Spring beans, you can use all the standard Spring functionality such as dependency injection and AOP.
Now that we have basic exception handler hooks, we can go on to offer a couple of useful implementations. Sometimes the best way to handle certain exceptions is to simply show a message and remain on the current screen. For example, suppose a service throws TooManyResultsException when search queries are too broad. A simple message telling the user to "try again with more precise terms" might be the only exception handling required. The org.springframework.springfaces.exceptionhandler.ObjectMessageExceptionHandler class builds on previous work that maps objects to messages. Include an entry in your Spring MessageSource with the fully qualified name of the exception as the key, and a FacesMessage will be shown if the exception is thrown.

com.mycorp.search.TooManyResultsException=Too many results found, please try again with more precise search terms

You can easily map any number of exceptions to messages; you can even refer to properties of the exception using '{property}' placeholders in your message string. Messages can be displayed on the screen using standard JSF techniques (usually a <h:messages/> component). Support for quickly mapping exceptions to messages is nice, but it won't be enough for a lot of applications, and writing ExceptionHandler beans can quickly become tiresome. The final option available is org.springframework.springfaces.mvc.exceptionhandler.DispatcherExceptionHandler. The DispatcherExceptionHandler provides a bridge between JSF and Spring MVC that allows you to use @ExceptionHandler annotations in your @Controllers as you would with any other Spring MVC application.
Methods annotated with @ExceptionHandler are really versatile and can have very flexible signatures; you can deal with exceptions directly or return a view that should be rendered:

@ExceptionHandler
public String handle(ExampleException e) {
    return "redirect:errorpage";
}

Using @ExceptionHandler annotations with Spring MVC is a very natural fit, and numerous articles have been written on the subject. Hopefully existing JSF developers will find the Spring MVC programming style an attractive alternative to standard JSF. Please take a look at the other articles in this series; if you want to examine the exception handling code, the 'org.springframework.springfaces.exceptionhandler' and 'org.springframework.springfaces.mvc.exceptionhandler' packages are a good place to start. Reference: Integrating Spring & JavaServer Faces: Exception Handling from our JCG partner Phillip Webb at the Phil Webb's Blog blog.

Top 5 Picks of Google IO 2012

The Google IO 2012 developer conference concluded last week amid a lot of fanfare. I for one think that Google has a lot more influence on enterprise technology than it seems. Some of what enterprises see as the latest and greatest technology (such as MapReduce) was pioneered at Google a while ago. This is one of the main reasons why I followed this event closely. Over the three days Google made several technology and product announcements across its product lines. Here are the top 5 picks at Techspot.

#1 Google Compute Engine

With Compute Engine, Google made its intentions clear as a cloud service provider. Google entering the Infrastructure as a Service (IaaS) market brings more innovation and cheaper compute resources, enhancing its portfolio beyond PaaS. How exciting! It is interesting to see the reverse trend of PaaS offerings moving to IaaS offerings (Microsoft also recently started offering Linux VMs as a service). Does this mean uptake of PaaS is very low? Compute Engine offers Linux VMs running Ubuntu 12.04 and CentOS 6.2. Most of the concepts that Amazon EC2 uses, such as zones and ephemeral versus persistent disks, are present in Compute Engine. That said, this is a reasonable start for Google, but a long way from being a threat to Amazon EC2. It is interesting to see enterprises like Best Buy listed among the beta customers.

#2 Jelly Bean, Android 4.1

Jelly Bean, Android 4.1, is the newest version of the popular mobile operating system, announced at IO. Apart from several performance optimizations and various user-level features, Jelly Bean packs several interesting technology innovations. Prime among them is "Google Now". Gartner has been talking about Context Delivery Architecture for a few years; Google Now is probably the first mainstream execution of the idea. Google Now combines user location, calendar details, past searches, traffic, weather and time in interesting ways and provides useful information before you even ask.
One of the use cases shown is: when you have an appointment at a certain location, it checks traffic and tells you how long it will take to get there. It notifies you when you should leave, so that you can reach the destination on time. There are many other interesting features, including Project Butter, the Systrace tool, peer-to-peer service discovery, cloud messaging and smart app updates.

#3 Packaged Apps & Chrome Everywhere

Google Chrome is now generally available on Android 4.1. With good HTML5 support, mobile web applications on Chrome on Android are very powerful. Chrome is also available on iOS. Packaged apps are one of the interesting features announced for Chrome. Packaged apps allow access to the Chrome APIs, but are written in HTML5, JavaScript and CSS. Packaged apps are loaded locally and support offline mode. In a way these are like Adobe AIR applications, but written using standard web technologies. This is probably another category-defining feature: developing cross-platform hybrid applications that run locally. Some interesting stats on Chrome were revealed: Chrome now has 310M active users across the world.

#4 Project Glass

A lot has been written and said about Project Glass from Google. Wearable computing is about to start, and Project Glass could define a new way of how we use and interact with computers. Technically not much has been revealed about Project Glass, other than the fact that it has a camera, microphone, gyroscope and wi-fi connectivity. The keynote demo by Sergey Brin, which involved a blimp, skydivers, Project Glass and Google+ hangouts, was epic.

#5 Nexus 7 Tablet

Technically Nexus 7 is just an Android Jelly Bean device. There is no reason to pick this other than the fact that it is the first official tablet from Google. At a price of $199, with a quad-core Tegra 3 architecture and 12 GPUs, this tablet is a real powerhouse.
With wi-fi, gyroscope, GPS, accelerometer, Gorilla Glass and a front-facing camera, this is surely a mainstream device, and it will probably put Android tablets into millions of hands, another great revenue opportunity for mobile developers. Interestingly, Nexus 7 ships with Chrome as the default browser instead of Android's native WebKit-based browser. I guess Google is trying to avoid an IE 6 scenario here by separating the browser from the OS. There were many more interesting announcements that didn't make the top 5 list, but I found them interesting enough to mention here:

- Offline maps and offline voice recognition for Android
- Enhanced gesture support in Android
- Apps Script for Google Apps automation
- Google Docs offline

What are your top 5 picks? Reference: Top 5 Picks of Google IO 2012 from our JCG partner Munish K Gupta at the Tech Spot blog.

DevOps is Culture – so is a lot of other stuff

I hung out in an excellent discussion at DevopsDays driven by Spike Morelli around culture. The premise was that DevOps started as an idea around culture – around behavior patterns that lead to better software. Somewhere along the way our industry shifted this into a tools discussion, and now the amount of noise out there about "DevOps tools" is magnitudes higher than any discussion about the real reason DevOps exists: to shift culture. I looked up the definition of "culture" – here are a few definitions:

1. the quality in a person or society that arises from a concern for what is regarded as excellent in arts, letters, manners, scholarly pursuits, etc.
2. that which is excellent in the arts, manners, etc.
3. the behaviors and beliefs characteristic of a particular social, ethnic, or age group: the youth culture; the drug culture.

Note that culture is the manifestation of intellectual achievement. It's the evidence and result of achievement. I think the 3rd definition is most appropriate for DevOps – what are the behaviors that are characteristic of a well-integrated Development & Operations organization? The challenge, the discussion, was how we can re-balance the scales and get the word out that this is actually about culture, and that tools happen as a result of culture, not the other way around. This post begins my contribution to that effort. The question was asked – do we all agree that culture is the most important thing when it comes to creating a successful business? The short answer is "yes". If you wanted to hear all the if/and/but/what-if discussions, you should have come to DevopsDays. For the sake of this blog post, culture is the most important factor. If you want case studies and analysis proving that culture matters, read Jim Collins' Good to Great. My present company has a really excellent culture of Developer/Ops cooperation and collaboration.
I wasn't there when it wasn't that way (if ever), so I can't tell a story about how to change your organization. What I can tell you is what a healthy and thriving Dev/Ops-practicing organization looks like and what I think some of the key factors in that success are. I see this as two components: there are fundamental core values that enable and support the culture, and then there are tactical things that are done to make the culture work for us. I'd like to talk about both. The culture is the result of these actions and ideas put into practice.

Background

I work for a company with a well-defined set of core values. Those values set forth the parameters under which the culture exists. These values are public and they matter – they matter a lot. They might sound hokey to you, but every single one of them is held high at the company & strongly defended. Defending a list of values like this is hard sometimes. When someone doesn't show respect to others, how do you uphold that core value? When someone's idea of "work life balance" is different from another person's, how do you support both of them? When creating your own reality means you don't want to work for Rally anymore – what do you do? I'm proud to say that in Rally's case they are generally true to the core values. Putting "Create your own reality" on a list of core values doesn't create culture – what creates culture is having repeated examples where individuals have followed their passion & the company has supported them. This support doesn't just mean they have permission; it means the company uses whatever resources it can to help. Sometimes this means using your resources to help someone find another job. Sometimes this means helping them get an education they can use at another company. Usually, though, it means getting them into a role where they can do their best work.
Whatever the case, Rally's culture is to always be true to that core value and do whatever it can to support an employee in creating their own reality. This is repeated for all of the core values. By being explicit & public about these values, they set the stage for what an employee can expect from Rally as a workplace. But there's more to it – you have to make sure these core values are upheld and you have to make sure they thrive – and this is where some of the tactical parts come in.

What are the tactical things?

Collaboration – at Rally collaboration is a requirement. Development is done almost exclusively in pairs, planning is done in groups, retrospectives are held regularly, and the actions from those retrospectives are announced publicly and followed up on. Architecture decisions are reviewed by a group comprised of Development, Operations and Product.

Self-Formed Teams – teams are largely formed of individuals who have an interest in that team's work. When we need a task force, an email goes out to the organization looking for people interested in participating & those teams self-form. This also gives anyone in the company the ability to participate in areas of the business they might never otherwise get exposure to.

Servant Leadership – leaders at Rally often do very similar work to everyone else; they just have the additional responsibility of enabling their teams. Decisions about how to do things don't often come from managers; they come from the teams.

Data-Driven Decisions – not strictly associated with a core value, but I think this is one of the most important aspects of the Rally culture. There is an expectation that we establish evidence that a decision is correct. Sometimes this evidence is established before any dev work is done, but sometimes the data comes from dark launching a new service or testing out some new piece of software.
Either way, it's understood that the job isn't really done until you have data to support why a particular decision is right & have talked to the broader group about it. There are plenty of other things here and there, but you get the general idea. We talk a lot & tell each other what we're doing, we enlist passionate individuals in areas where they have interest, we embrace & seek out change, and we empower individuals to drive change by working with others.

So what? What does that have to do with DevOps? Everything.

2.5 years ago the company had some very serious performance & stability problems. Technical debt had caught up with them, and the only real way to fix the problem was to completely change the way the company did development & prioritized its work. The good news is that they did it, but it was made possible by the fact that individuals were empowered to drive that change. Almost overnight, two teams were formed to focus on architectural issues. A council was formed to prioritize architectural work. The things we all complain about never being able to prioritize became a priority, and remain a priority to a degree I've never experienced before at other companies. Prioritizing this work is defended and advocated by the development teams – something only possible because of the collaborative environment in which we operate. I have been personally involved in two services that literally started out as a skeleton of an app when they went into production. The goal was to lay the groundwork to allow fast production deployments, get monitoring in place & enable visibility while the system was being developed. This was all done because the developers understand the value of these things, but they don't know exactly how to build them – they need Ops' help. Having tight Ops and Dev collaboration on these projects has made them examples of what works in our organization. These projects become examples for other teams in the company and they push the envelope on new tech.
These two projects have:

  • Implemented a completely new monitoring framework that allows developers to add any metric they want to the system
  • Implemented continuous deployment
  • Established an example of how and why to dark launch a service

I'm sure the list will continue to grow… it's fantastic stuff.

The Rub – culture isn't much of anything without people who embrace it. Along with the responsibility for pushing change from the bottom up at Rally comes the responsibility for defending culture – or changing it. This means that when you hire people, they have to align with your core values – they have to be willing to defend that culture, or the company as a whole needs to shift culture. All those core values and tactical things will not maintain a culture that the team members do not support. Rally's culture is what it is because everyone takes it seriously, and that includes taking it seriously when there's a problem that needs fixing. This has happened. There are core values that used to be on that list above but aren't anymore. At one point or another things changed and those core values were eroding other core values. This takes time to surface, it takes time to collect data to show it's true, but when the teams start to observe this trend they have to take action. This isn't the job of management alone – this is the job of every member of the company. When the voice begins to develop asking for change, you need a culture that allows that change to take place and for everyone to agree on the new shape things take. That said, it also isn't possible if management doesn't support those same core values. Management has the same responsibility to take those core values seriously.

DevOps is our little corner of a much bigger idea

There's a problem that we're trying to fix – we're trying to improve the happiness of people, the quality of software, and the general health of our industry.
Our industry is totally healthy when you look at the bottom line, but we’re looking for something more. We want a happy and healthy development organization (including Ops, because Ops is part of the Development organization), but we also want our other teams to be part of that. As Ops folks and Developers, we can clean up our side of the street – we can do better. We seek to set an example for the rest of the organization. For culture to really improve in companies it has to go beyond Dev and Ops into Executives, Product, Support, Marketing, Sales and everyone else. You ALL own quality by building a healthy substrate (culture) on top of which all else evolves. But in the end it’s about culture. It’s really only about culture for now – because when you get culture right the other problems are easy to solve. Congratulations to those of you who read this far – shoot me a note and let me know you read this far because you probably share the same passion about this that I do. Also – putting up blog posts from 32,000 feet is awesome – thanks Southwest. Reference: DevOps is Culture – so is a lot of other stuff… from our JCG partner Aaron Nichols at the Operation Bootstrap blog....

EasyMock tutorial – Getting Started

In this post, I'm going to show you what EasyMock is and how you can use it to test your Java application. For this purpose, I'm going to create a simple Portfolio application and test it using the JUnit & EasyMock libraries. Before we begin, let's first understand the need behind using EasyMock. Let's say you are building an Android mobile application for maintaining users' stock portfolios. Your application would use a stock market service to retrieve stock prices from a real server (such as NASDAQ). When it comes to testing your code, you wouldn't want to hit the real stock market server to fetch prices. Instead, you would like some dummy price values. So, you need to mock the stock market service so that it returns dummy values without hitting the real server. That is exactly what EasyMock does – it helps you mock interfaces. You can pre-define the behavior of your mock objects and then use those mocks in your code under test, because you are only concerned with testing your own logic, not the external services or objects. So it makes sense to mock the external services. To make this clear, have a look at the code excerpt below (we'll see the complete code in a while):

    StockMarket marketMock = EasyMock.createMock(StockMarket.class);
    EasyMock.expect(marketMock.getPrice("EBAY")).andReturn(42.00);
    EasyMock.replay(marketMock);

In the first line, we ask EasyMock to create a mock object for our StockMarket interface. Then, in the second line, we define how this mock object should behave – i.e., when the getPrice() method is called with the parameter "EBAY", the mock should return 42.00. Then we call the replay() method to make the mock object ready to use. That pretty much sets the context for EasyMock and its usage. Let's dive into our Portfolio application. You can download the complete source code from Github.

Portfolio application

Our Portfolio application is really simple.
It has a Stock class to represent a stock name and quantity, and a Portfolio class to hold a list of stocks. The Portfolio class has a method to calculate the total value of the portfolio. It uses a StockMarket (an interface) object to retrieve the stock prices. While testing our code, we will mock this StockMarket using EasyMock.

Stock.java

A very simple Plain Old Java Object (POJO) to represent a single stock.

    package com.veerasundar.easymock;

    public class Stock {

        private String name;
        private int quantity;

        public Stock(String name, int quantity) {
            this.name = name;
            this.quantity = quantity;
        }

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public int getQuantity() { return quantity; }
        public void setQuantity(int quantity) { this.quantity = quantity; }
    }

StockMarket.java

An interface to represent a stock market service. It has a method that returns the stock price of the given stock name.

    package com.veerasundar.easymock;

    public interface StockMarket {
        public Double getPrice(String stockName);
    }

Portfolio.java

This object holds a list of Stock objects and a method to calculate the total value of the portfolio. It uses a StockMarket object to retrieve the stock prices. Since it is not good practice to hard-code dependencies, we haven't initialized the stockMarket object; we'll inject it later from our test code.

    package com.veerasundar.easymock;

    import java.util.ArrayList;
    import java.util.List;

    public class Portfolio {

        private String name;
        private StockMarket stockMarket;
        private List<Stock> stocks = new ArrayList<Stock>();

        /**
         * This method gets the market value for each stock, sums it up and returns
         * the total value of the portfolio.
         */
        public Double getTotalValue() {
            Double value = 0.0;
            for (Stock stock : this.stocks) {
                value += (stockMarket.getPrice(stock.getName()) * stock.getQuantity());
            }
            return value;
        }

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public List<Stock> getStocks() { return stocks; }
        public void setStocks(List<Stock> stocks) { this.stocks = stocks; }
        public void addStock(Stock stock) { stocks.add(stock); }
        public StockMarket getStockMarket() { return stockMarket; }
        public void setStockMarket(StockMarket stockMarket) { this.stockMarket = stockMarket; }
    }

So, now we have coded the entire application. We are going to test the Portfolio.getTotalValue() method, because that's where our business logic is.

Testing the Portfolio application using JUnit and EasyMock

If you haven't used JUnit before, then now is a good time to get started with JUnit.

PortfolioTest.java

    package com.veerasundar.easymock.tests;

    import junit.framework.TestCase;
    import org.easymock.EasyMock;
    import org.junit.Before;
    import org.junit.Test;
    import com.veerasundar.easymock.Portfolio;
    import com.veerasundar.easymock.Stock;
    import com.veerasundar.easymock.StockMarket;

    public class PortfolioTest extends TestCase {

        private Portfolio portfolio;
        private StockMarket marketMock;

        @Before
        public void setUp() {
            portfolio = new Portfolio();
            portfolio.setName("Veera's portfolio.");
            marketMock = EasyMock.createMock(StockMarket.class);
            portfolio.setStockMarket(marketMock);
        }

        @Test
        public void testGetTotalValue() {
            /* Set up our mock object with the expected values. */
            EasyMock.expect(marketMock.getPrice("EBAY")).andReturn(42.00);
            EasyMock.replay(marketMock);

            /* Now start testing our portfolio. */
            Stock ebayStock = new Stock("EBAY", 2);
            portfolio.addStock(ebayStock);
            assertEquals(84.00, portfolio.getTotalValue());
        }
    }

As you can see, during setUp() we create a new Portfolio object. Then we ask EasyMock to create a mock object for the StockMarket interface.
Then we inject this mock object into our portfolio object using the portfolio.setStockMarket() method. In the @Test method, we define how our mock object should behave when called, using the code below:

    EasyMock.expect(marketMock.getPrice("EBAY")).andReturn(42.00);
    EasyMock.replay(marketMock);

From here on, our mock object's getPrice() method will return 42.00 when called with "EBAY". We then create an ebayStock with a quantity of 2 and add it to our portfolio. Since we set up the stock price of EBAY as 42.00, we know that the total value of our portfolio is 84.00 (i.e. 2 x 42.00). In the last line, we assert exactly that using the JUnit assertEquals() method. The above test should run successfully if we haven't made any mistakes in the getTotalValue() code. Otherwise, the test would fail.

Conclusion

So, that's how we use the EasyMock library to mock external services/objects and use them in our testing code. EasyMock can do much more than what I have shown in this post. I'll probably try to cover some advanced usage scenarios in my next posts. Reference: EasyMock tutorial – Getting Started from our JCG partner Veera Sundar at the Veera Sundar blog....
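To see concretely what EasyMock's dynamically generated mocks replace, here is a self-contained, hand-written stub of the StockMarket interface from the article (the interface is reproduced inline so the snippet compiles on its own; the class names StubStockMarket and StubDemo are my own, and a real hand-rolled mock would also need call counting and verification, which EasyMock gives you for free):

```java
// Interface reproduced from the article so this sketch is self-contained.
interface StockMarket {
    Double getPrice(String stockName);
}

// Hand-written stub: always returns 42.00 for "EBAY", mirroring the EasyMock
// expectation expect(marketMock.getPrice("EBAY")).andReturn(42.00).
class StubStockMarket implements StockMarket {
    @Override
    public Double getPrice(String stockName) {
        if ("EBAY".equals(stockName)) {
            return 42.00;
        }
        // An unexpected call fails loudly, like an unconfigured strict mock.
        throw new IllegalArgumentException("Unexpected stock: " + stockName);
    }
}

public class StubDemo {
    public static void main(String[] args) {
        StockMarket market = new StubStockMarket();
        // 2 shares at the stubbed price of 42.00 => 84.00
        System.out.println(market.getPrice("EBAY") * 2);
    }
}
```

Writing one of these per test scenario gets tedious fast, which is the whole argument for letting EasyMock generate the stub from the interface and an expectation.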

Grails Custom AuthenticationProvider

In order to tighten up security in our new Grails app, I went about implementing the Spring Security Plugin. Getting it up and running with a standard username/password scenario was simple, as all of that is wired up automagically by the plugin. That solved half of my problem, but we also needed to support authentication with SAML, and there were no clear examples of how to do that. I'd like to share what I built in case anyone has a similar requirement. I won't focus on the SAML specifics, but rather on how to build any custom authentication provider in Grails.

You can map a URL to a filter by extending AbstractAuthenticationProcessingFilter and registering it with Spring. Then you can provide that URL for custom authentication. In my case it looked something like this:

    class SamlAuthenticationFilter extends AbstractAuthenticationProcessingFilter {

        public SamlAuthenticationFilter() {
            super("/somecustomauth")
        }

        @Override
        Authentication attemptAuthentication(HttpServletRequest request, HttpServletResponse response) {
            if (!request.getMethod().equals("POST")) {
                throw new AuthenticationServiceException("Authentication method not supported: " + request.getMethod())
            }

            String accessToken = request.getParameter("sometoken")
            return this.getAuthenticationManager().authenticate(new SamlAuthenticationToken(accessToken))
        }
    }

The filter is then set up as a Spring bean, along with an authentication provider which I'll discuss shortly:

    import SamlAuthenticationFilter
    import SamlAuthenticationProvider

    beans = {
        samlAuthenticationFilter(SamlAuthenticationFilter) {
            authenticationManager = ref('authenticationManager')
            sessionAuthenticationStrategy = ref('sessionAuthenticationStrategy')
            authenticationSuccessHandler = ref('authenticationSuccessHandler')
            authenticationFailureHandler = ref('authenticationFailureHandler')
            rememberMeServices = ref('rememberMeServices')
            authenticationDetailsSource = ref('authenticationDetailsSource')
        }

        samlAuthenticationProvider(SamlAuthenticationProvider) {
            sAMLAuthenticationService = ref('SAMLAuthenticationService')
            sAMLSettingsService = ref('SAMLSettingsService')
            userDetailsService = ref('userDetailsService')
            passwordEncoder = ref('passwordEncoder')
            userCache = ref('userCache')
            saltSource = ref('saltSource')
            preAuthenticationChecks = ref('preAuthenticationChecks')
            postAuthenticationChecks = ref('postAuthenticationChecks')
        }
    }

And the bean is then registered as a filter in the BootStrap:

    import org.codehaus.groovy.grails.plugins.springsecurity.SecurityFilterPosition
    import org.codehaus.groovy.grails.plugins.springsecurity.SpringSecurityUtils

    class BootStrap {

        def init = { servletContext ->
            SpringSecurityUtils.clientRegisterFilter('samlAuthenticationFilter',
                    SecurityFilterPosition.SECURITY_CONTEXT_FILTER.order + 10)
        }

        def destroy = { }
    }

We also need to create the Token class that is used by the filter and the authentication provider:

    import org.springframework.security.authentication.UsernamePasswordAuthenticationToken
    import org.springframework.security.core.userdetails.UserDetails

    class SamlAuthenticationToken extends UsernamePasswordAuthenticationToken {

        String token

        public SamlAuthenticationToken(String token) {
            super(null, null)
            this.token = token
        }

        public SamlAuthenticationToken(UserDetails principal, String samlResponse) {
            super(principal, samlResponse, principal.getAuthorities())
        }
    }

And finally the AuthenticationProvider itself:

    import org.springframework.security.authentication.dao.DaoAuthenticationProvider
    import org.springframework.security.core.Authentication
    import sonicg.authentication.SAMLAuthenticationService

    class SamlAuthenticationProvider extends DaoAuthenticationProvider {

        @Override
        Authentication authenticate(Authentication authentication) {
            def token = (SamlAuthenticationToken) authentication

            def user = // define user if credentials check out

            if (user) {
                def userDetails = userDetailsService.loadUserByUsername(user.username)
                def token1 = new SamlAuthenticationToken(userDetails, token.samlResponse)
                return token1
            } else {
                return null
            }
        }

        @Override
        public boolean supports(Class authentication) {
            return (SamlAuthenticationToken.class.isAssignableFrom(authentication))
        }
    }

The last piece of the puzzle is to tell Spring to try this authentication provider before the other three standard ones, in Config.groovy:

    grails.plugins.springsecurity.providerNames = [
        'samlAuthenticationProvider',
        'daoAuthenticationProvider',
        'anonymousAuthenticationProvider',
        'rememberMeAuthenticationProvider']

In this case it's important that the custom provider goes first, as its token is a subclass of UsernamePasswordAuthenticationToken. If the DAO provider were first, it would try to authenticate the custom token before our provider gets a chance.

That's it! Hopefully this proves useful to someone. It's also just a first draft, and perhaps once the security requirements evolve I can refine the implementation and share what I've learned. Reference: Custom AuthenticationProvider with Grails from our JCG partner Kali Kallin at the Kallin Nagelberg's journey into the west blog....
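The ordering matters because Spring Security's ProviderManager walks the provider list and hands the token to the first provider whose supports() method accepts the token's class. The following is a simplified, self-contained Java sketch of that dispatch rule (the class names UsernamePasswordToken, SamlToken and ProviderOrderDemo are illustrative stand-ins of mine, not Spring's actual classes):

```java
import java.util.Arrays;
import java.util.List;

// Stand-ins mirroring the article: the SAML token subclasses the standard one.
class UsernamePasswordToken {}
class SamlToken extends UsernamePasswordToken {}

interface Provider {
    boolean supports(Class<?> tokenClass);
    String name();
}

public class ProviderOrderDemo {

    static final Provider SAML = new Provider() {
        public boolean supports(Class<?> c) { return SamlToken.class.isAssignableFrom(c); }
        public String name() { return "saml"; }
    };
    static final Provider DAO = new Provider() {
        public boolean supports(Class<?> c) { return UsernamePasswordToken.class.isAssignableFrom(c); }
        public String name() { return "dao"; }
    };

    // Mimics ProviderManager: the first provider that supports the token wins.
    static String dispatch(List<Provider> providers, Object token) {
        for (Provider p : providers) {
            if (p.supports(token.getClass())) {
                return p.name();
            }
        }
        return "none";
    }

    public static void main(String[] args) {
        // SAML provider listed first: the SAML token reaches the right provider.
        System.out.println(dispatch(Arrays.asList(SAML, DAO), new SamlToken()));
        // DAO provider listed first: it claims the SAML token too, because
        // SamlToken is assignable to UsernamePasswordToken.
        System.out.println(dispatch(Arrays.asList(DAO, SAML), new SamlToken()));
    }
}
```

With the SAML provider first, both token types still end up in the right place: a plain UsernamePasswordToken fails the SAML supports() check and falls through to the DAO provider.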

Concurrency – Sequential and Raw Thread

I worked on a project a while back where the report flow was along these lines:

  • A user would request a report
  • The report request would be translated into smaller parts/sections
  • The report for each part, based on the type of the part/section, would be generated by a report generator
  • The constituent report parts would be reassembled into a final report and given back to the user

My objective is to show how I progressed from a bad implementation to a fairly good implementation. Some of the basic building blocks are best demonstrated by a unit test. This is a test helper which generates a sample report request with constituent report request parts:

    public class FixtureGenerator {

        public static ReportRequest generateReportRequest() {
            List<ReportRequestPart> requestParts = new ArrayList<ReportRequestPart>();
            Map<String, String> attributes = new HashMap<String, String>();
            attributes.put("user", "user");
            Context context = new Context(attributes);
            ReportRequestPart part1 = new ReportRequestPart(Section.HEADER, context);
            ReportRequestPart part2 = new ReportRequestPart(Section.SECTION1, context);
            ReportRequestPart part3 = new ReportRequestPart(Section.SECTION2, context);
            ReportRequestPart part4 = new ReportRequestPart(Section.SECTION3, context);
            ReportRequestPart part5 = new ReportRequestPart(Section.FOOTER, context);
            requestParts.add(part1);
            requestParts.add(part2);
            requestParts.add(part3);
            requestParts.add(part4);
            requestParts.add(part5);
            ReportRequest reportRequest = new ReportRequest(requestParts);
            return reportRequest;
        }
    }

And the test for the report generation:

    public class SequentialReportGeneratorTest {

        @Test
        public void testSequentialReportGeneratorTime() {
            long startTime = System.currentTimeMillis();
            Report report = this.reportGenerator.generateReport(FixtureGenerator.generateReportRequest());
            long timeForReport = System.currentTimeMillis() - startTime;
            assertThat(report.getSectionReports().size(), is(5));
            logger.error(String.format("Sequential Report Generator : %s ms", timeForReport));
        }
    }

The component which generates a part of the report is a dummy implementation with a 2-second delay to simulate an IO-intensive call:

    public class DummyReportPartGenerator implements ReportPartGenerator {

        @Override
        public ReportPart generateReportPart(ReportRequestPart reportRequestPart) {
            try {
                // Deliberately introduce a delay
                Thread.sleep(2000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            return new ReportPart(reportRequestPart.getSection(), "Report for " + reportRequestPart.getSection());
        }
    }

Sequential Implementation

Given this base set of classes, my first naive sequential implementation is the following:

    public class SequentialReportGenerator implements ReportGenerator {

        private ReportPartGenerator reportPartGenerator;

        @Override
        public Report generateReport(ReportRequest reportRequest) {
            List<ReportRequestPart> reportRequestParts = reportRequest.getRequestParts();
            List<ReportPart> reportSections = new ArrayList<ReportPart>();
            for (ReportRequestPart reportRequestPart : reportRequestParts) {
                reportSections.add(reportPartGenerator.generateReportPart(reportRequestPart));
            }
            return new Report(reportSections);
        }
        ......
    }

Obviously, for a report request with 5 parts, each part taking 2 seconds to fulfill, this report takes about 10 seconds to be returned to the user. It begs to be made concurrent.

Raw Thread Based Implementation

The first concurrent implementation – not good, but better than sequential – is the following, where a thread is spawned for every report request part, the main thread waits on the report parts to be generated (using the Thread.join() method), and the pieces are aggregated as they come in.
    public class RawThreadBasedReportGenerator implements ReportGenerator {

        private static final Logger logger = LoggerFactory.getLogger(RawThreadBasedReportGenerator.class);

        private ReportPartGenerator reportPartGenerator;

        @Override
        public Report generateReport(ReportRequest reportRequest) {
            List<ReportRequestPart> reportRequestParts = reportRequest.getRequestParts();
            List<Thread> threads = new ArrayList<Thread>();
            List<ReportPartRequestRunnable> runnablesList = new ArrayList<ReportPartRequestRunnable>();
            for (ReportRequestPart reportRequestPart : reportRequestParts) {
                ReportPartRequestRunnable reportPartRequestRunnable = new ReportPartRequestRunnable(reportRequestPart, reportPartGenerator);
                runnablesList.add(reportPartRequestRunnable);
                Thread thread = new Thread(reportPartRequestRunnable);
                threads.add(thread);
                thread.start();
            }

            for (Thread thread : threads) {
                try {
                    thread.join();
                } catch (InterruptedException e) {
                    logger.error(e.getMessage(), e);
                }
            }

            List<ReportPart> reportParts = new ArrayList<ReportPart>();

            for (ReportPartRequestRunnable reportPartRequestRunnable : runnablesList) {
                reportParts.add(reportPartRequestRunnable.getReportPart());
            }

            return new Report(reportParts);
        }
        .....
    }

The danger with this approach is that a new thread is created for every report part, so in a real-world scenario, if 100 simultaneous requests come in, with each request spawning 5 threads, this can potentially end up creating 500 costly threads in the JVM! So thread creation has to be constrained in some way. I will go through two more approaches where threads are controlled in the next blog entry. Reference: Concurrency – Sequential and Raw Thread from our JCG partner Biju Kunjummen at the all and sundry blog....
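The follow-up approaches are not shown here, but one standard way to constrain thread creation is a fixed-size ExecutorService: tasks queue up and at most a bounded number of worker threads ever exist. Below is a minimal, self-contained sketch of that direction; the names (ExecutorBasedDemo, generateReport), the pool size of 5, and the shortened 200 ms sleep standing in for the 2-second dummy delay are all my own choices, not the author's actual next implementation:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ExecutorBasedDemo {

    // Generates one "report part" per section using a bounded pool, so at most
    // poolSize threads exist no matter how many sections (or requests) arrive.
    static List<String> generateReport(List<String> sections, int poolSize) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        try {
            // One Callable per report part, analogous to ReportPartRequestRunnable,
            // but returning its result instead of stashing it in a field.
            List<Callable<String>> tasks = new ArrayList<>();
            for (String section : sections) {
                tasks.add(() -> {
                    Thread.sleep(200); // stand-in for the 2-second dummy generator delay
                    return "Report for " + section;
                });
            }

            // invokeAll blocks until every part is done, replacing the join() loop,
            // and returns Futures in the same order the tasks were submitted.
            List<String> reportParts = new ArrayList<>();
            for (Future<String> f : pool.invokeAll(tasks)) {
                reportParts.add(f.get());
            }
            return reportParts;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        List<String> sections = Arrays.asList("HEADER", "SECTION1", "SECTION2", "SECTION3", "FOOTER");
        System.out.println(generateReport(sections, 5).size()); // 5 parts, in order
    }
}
```

With this shape, 100 simultaneous requests sharing one pool of 5 threads still create only 5 threads total; the trade-off is that requests queue behind each other instead of all running at once.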
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.
