


Revealing the length of Garbage Collection pauses

There are several ways to improve your product. One such way is to carefully track what your users are experiencing and improve based on that. We apply this technique ourselves and have again spent some time looking at different data. Besides many other aspects we were after, we also posed the question: "What is the worst-case latency effect GC triggers for an application?" To answer the question we analyzed the data from 312 different JVMs with the Plumbr Agent attached during the past two months. The results were interesting and we decided to share the outcome with you.

On the X-axis there is the maximum length of the pause within this JVM, grouped into buckets. On the Y-axis there is the number of applications with the maximum pause falling into a particular bucket. Using the data above, we can for example claim the following about the 312 JVMs being monitored:

- 57 JVMs (18%) managed to keep GC pauses at bay, with a maximum pause under 256 ms
- 73 JVMs (23%) faced a maximum GC pause between 1024 ms and 4095 ms
- 105 JVMs (33%) stopped the application threads for 4 or more seconds due to GC
- 43 JVMs (14%) faced a maximum GC pause longer than 16 seconds
- 18 JVMs (6%) contained a GC pause spanning more than a minute
- The current record holder managed to stop all application threads for more than 16 minutes due to a garbage collection pause

We do admit that our data might be biased: the JVMs Plumbr ends up monitoring are more likely to suffer from performance issues that trigger longer GC pauses. So you have to take these results with a grain of salt, but overall the discoveries are still interesting. After all, tens of seconds added to latency cannot be considered tolerable for the majority of the applications out there. We have several hypotheses why the situation looks as bad as it currently does:

- In the first case, engineers are not even aware that their application is performing so badly. Having no access to GC logs and being isolated from customer support might completely hide the problem from the people who would be able to improve the situation.
- The second case consists of people struggling to reproduce the problem. As always, the first step towards a solution is building a reproducible test case in an environment where further experiments can be conducted. When the long-lasting GC pauses only occur in production environments, coming up with a solution is a daunting task.
- The third group of issues falls on the shoulders of engineers who are aware of the issue and can even reproduce the behaviour at will, but have no clue how to actually improve the situation. Tuning GC is a tricky task and requires a lot of knowledge about JVM internals, so most engineers in this situation find themselves between a rock and a hard place.

The good news is that we are working hard on making all those reasons obsolete – Plumbr surfaces poorly-behaving GC issues, alerts you when these issues are detected and, better yet, gives you tailor-made solutions on how to improve the behaviour. So instead of weeks of trial-and-error you are now able to surface and solve those cases in minutes.

Reference: Revealing the length of Garbage Collection pauses from our JCG partner Nikita Artyushov at the Plumbr Blog blog....
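As a side note on the first hypothesis (no visibility into GC behaviour): even without access to GC logs, the JDK's standard management API exposes basic collector statistics from inside the JVM. A minimal sketch:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Prints, for each garbage collector in this JVM, how many collections ran
// and the cumulative time (in ms) spent in them -- a rough proxy for GC overhead.
public class GcStats {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```

Note that this reports cumulative counts and times, not individual pause lengths; to measure the maximum pause you would still need GC logs (for example, enabled via -XX:+PrintGCDetails and -XX:+PrintGCApplicationStoppedTime on JDK 7/8) or a monitoring agent.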

The DSL Jungle

DSLs are a common thing in the programming world nowadays. Many frameworks and tools decide to build a DSL for their… specific things. Build tools are the primary candidates, but testing frameworks, web frameworks and whatnot also decide to define a DSL. With these DSLs you define build steps, web routing rules, test acceptance criteria, etc. What is the most common thing about all these DSLs? Two things. First, they are predominantly about configuration. Some specific way of configuring something specific to the tool or framework. The second thing is that you copy-paste code. Every time I’m confronted with some DSL that is meant to help with my programming task, I end up copy-pasting examples or existing code, and then modifying it. Even though I’ve been working with a DSL for 8 months (from time to time), I just don’t remember its syntax. And you may say “yeah, that’s because you use bad DSLs”. Well, then I haven’t seen a good one yet. I’m currently using sbt, spray routing and cucumber for Scala; previously I’ve used the Groovy and Grails DSLs, and a few others along the way. But is it bad that you copy-paste existing pieces of code? Not always. You can, of course, base your configuration on existing, working pieces. But there are three issues – duplicate code, autocomplete and exploration. You know copy-pasting is wrong and leads to duplication. Not only that, but you may forget to change or remove something in the pasted code. And if you want to add some property, it would be good to be able to auto-complete it, rather than mistyping it or forgetting whether it was “filePath”, “filepath”, “file-path” or just “path”. Having 2-3 DSLs in parts of a big project, you can’t remember all the property names, so the alternative is to go and read the documentation (if you don’t have a working piece with that particular property to copy-paste from). Exploration is an even bigger issue.
Especially when learning, or remembering how to do certain things with a given DSL, it is crucial to be able to explore the possibilities. What properties does this have that might be useful? What does this property do exactly, and does it have subproperties? What can I nest under this item? This is very important, regardless of your knowledge of the tool/framework. But with most DSLs you don’t have that. They either have some bizarre syntax, or they are JSON-based, or they look like the language you are using, but not quite, and hence even an IDE finds it difficult to understand them (spray being such an example). You either look at the documentation, or you copy-paste, or both. And you are kind of lost in this DSL jungle of ever so “cooler” DSLs that do a wide variety of things. And now I’ll drop the X-bomb. I love XML. Trusting the “XML configuration files are evil” meme has led to many incomprehensible configurations that are “short and easy to read and write”. Easy, if you remembered what those double-percentage signs meant compared to the single percentage signs, and where exactly to put the parentheses. In almost every scenario where someone decided that a DSL is a good idea, XML would have worked brilliantly. Using an XSD schema (which, I agree, is a bit tedious to write) you can turn any XML-aware tool into an IDE for configuration. Take the Maven pom file, for example. Did you forget what element you could nest under “build”? Hit CTRL+space and you’ll find out. Because the format is unified, you can read the XML configuration of any framework or tool that uses it, not just this particular one that is the n-th DSL in a single project. While XML is verbose, it is straightforward and standard. So if you are writing a tool and can’t make its configuration available via annotations or via very simple code (builders, setters, fluent interfaces), don’t go for a DSL. Don’t write DSLs where you can easily use XML.
It will look good on your README.md, but your users will copy-paste all the time and may actually hate it. So please don’t contribute to the DSL jungle. And do you know why that is? Remember the initial note that these are DSLs you use when programming. Well, DSLs are not for programmers. DSLs are for non-programmers to express business logic in (almost) prose. Or at least their usage should be limited to that, where they can really excel. If you are making a tool for business analysts, feel free to design the most awesome DSL. If you are building a tool for programmers, don’t.Reference: The DSL Jungle from our JCG partner Bozhidar Bozhanov at the Bozho’s tech blog blog....
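To illustrate the “very simple code” alternative mentioned above – builders, setters, fluent interfaces – here is a minimal fluent-builder sketch (all names are hypothetical, invented for illustration). Plain code like this gives you autocomplete, exploration and compile-time checking in any IDE, with no DSL syntax to memorize:

```java
// Hypothetical tool configuration expressed as a plain Java fluent builder
// instead of a DSL: every property is discoverable via CTRL+space.
public class ServerConfig {
    private final String host;
    private final int port;

    private ServerConfig(Builder b) {
        this.host = b.host;
        this.port = b.port;
    }

    public static Builder builder() { return new Builder(); }

    public static class Builder {
        private String host = "localhost"; // sensible defaults
        private int port = 8080;

        public Builder host(String host) { this.host = host; return this; }
        public Builder port(int port) { this.port = port; return this; }
        public ServerConfig build() { return new ServerConfig(this); }
    }

    public String getHost() { return host; }
    public int getPort() { return port; }
}
```

Usage is a single chained expression: `ServerConfig cfg = ServerConfig.builder().host("example.com").port(9090).build();` – mistype a property name and the compiler, not a runtime parser, tells you immediately.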

Legacy Code to Testable Code #4: More Accessors!

This post is part of the “Legacy Code to Testable Code” series. In the series we’ll talk about making refactoring steps before writing tests for legacy code, and how they make our life easier. It continues the last post on accessors. We talked about “setter” accessors as a means to inject values. The other side of the coin is when we want to know something has happened inside our object – for example, if internal state has changed, or a non-public method was called. In fact, this is like the “if a tree fell in the woods” question: why do we care if internal state changed? In legacy code, there are many cases where a class grows large and does many things. If we had separated responsibilities, the result would be possible to assert on another class. Alas, with god objects, things are a bit of a mess. When that happens, our acceptance criteria may move inside: we can check either internal state or internal method calls for its impact. To check internal state, we can add a “getter” method. Adding a “getter” function is easy, and if it doesn’t have logic (and it shouldn’t), it can expose the information without any harm done. If the refactoring tool begs you to add a “setter”, you can set it to be private, so no one else uses it.

Role reversal

In a funny way, “getter” methods can reverse roles: we can use a “getter” method to inject a value by mocking it. So in our getAccount example:

```java
protected Bank getBank() {
    return new Bank();
}

public void getAccount() {
    Bank tempBank = getBank();
    ...
```

By mocking the getBank method we can return a mockBank (according to our tools of choice):

```java
when(testedObject.getBank()).thenReturn(mockBank);
```

On the other hand, we can verify a call on a “setter” instead of exposing a value. So if our Account object has an internal state called balance, instead of exposing it and checking it after the tested operation, we can add a “setter” method and see if it was called:

```java
verify(account).setBalance(3);
```

In contrast to injection, when we probe we don’t want to expose an object on the stack. It’s in the middle of an operation, and therefore not interesting (and hard to examine). If there’s an actual case for that, we can use the “setter” method verification option. In this example, the addMoney function calculates the interimBalance before setting the value back to currentBalance:

```java
public void addMoney(int amount) {
    int interimBalance = currentBalance;
    interimBalance += amount;
    currentBalance = interimBalance;
}
```

If we want to check the `currentBalance` before the calculation, we can modify the method to:

```java
public void addMoney(int amount) {
    int interimBalance = setInterim(currentBalance);
    interimBalance += amount;
    currentBalance = interimBalance;
}

protected int setInterim(int balance) {
    return balance;
}
```

Then in our test we can use verification as a precondition:

```java
verify(account).setInterim(100);
```

Adding accessors is a solution for a problem that was created before we thought about tests: the design is not modular enough and has too many responsibilities. It holds information inside it, and tests (and future clients) cannot access it. If we wrote it “right” the first time, the god class would probably have been written as a set of classes. With our tests in place, we want to get to a modular design. Tests give us the safety to change the code. So do automated refactoring tools. We can start the separation even before our tests using the Extract Method refactoring pattern. We’re going to discuss it next.

Reference: Legacy Code to Testable Code #4: More Accessors! from our JCG partner Gil Zilberfeld at the Geek Out of Water blog....
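If no mocking framework is available, the same role reversal can be done by overriding the protected “getter” seam in a test subclass. A minimal, self-contained sketch (the Bank/Account names follow the article; the connected() method and hasBank() are made up here for illustration):

```java
// Pretend collaborator we don't want the test to really construct.
class Bank {
    boolean connected() { return true; }
}

class Account {
    // the "getter" seam from the article
    protected Bank getBank() { return new Bank(); }

    // hypothetical behaviour under test that depends on the Bank
    public boolean hasBank() {
        return getBank() != null && getBank().connected();
    }
}

public class AccountSeamDemo {
    public static void main(String[] args) {
        // Inject a "fake" bank by overriding the getter seam -- no framework needed.
        Account tested = new Account() {
            @Override protected Bank getBank() {
                return new Bank() {
                    @Override boolean connected() { return false; }
                };
            }
        };
        System.out.println(tested.hasBank()); // prints false
    }
}
```

A mocking tool such as Mockito just automates this override for you; the seam in the production code is what makes either approach possible.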

My Five Rules for Remote Working

A couple of weeks ago, there was a stir (again) about remote working and its success and/or failure: it was reported that Reddit, the website where many people lose countless hours, was forcing all its employees to move to SF. After a similar thing happened at Yahoo last year, it made me think about why remote work is such a huge success for us at Activiti and Alfresco. You see, I’ve been a remote worker for more than five years now. First at Red Hat and then at Alfresco. I worked a couple of years as a Java consultant before that, so I’ve seen my share of office environments (checking my LinkedIn, it comes down to about 10 different office environments). I had to go to these offices each day. Comparing those experiences, I can – without exaggeration – say that I’m way more productive nowadays, working from home. Many people (both in and outside IT) ask me how I do it. They say “they couldn’t do it”. Maybe that’s true. Maybe some people need a lot of people around them. But for the kind of job I’m in – developing software – I believe having a lot of people around me doesn’t help me write higher quality software faster. Anyway, like I said, I did some thinking around it and I came to the following “rules” which I have been following all these years, and which I believe are crucial (at least for me!) to making remote working a success.

Rule 1: The Door

Having a separate space to work is crucial when you want to do serious remote working. Mentally, it is important that you can close “The Door” of your office space when you have finished working. It brings some kind of closure to the working day. Many people, when they work from home, put their laptop on, let’s say, the kitchen table. That doesn’t work. It is not a space that encourages work. There are distractions everywhere (kids that come home, food very close by, …). But most importantly, there is no distinction between when you are working and when you are not.
My wife and kids know and understand that when The Door is closed, I’m at work. I can’t be disturbed until that Door opens. But when I close The Door in the evening and come downstairs, they also know that I’m fully available for them.

Rule 2: The Gear

The second rule is related to the first one: what to put in that room. The answer is simple: only the best. A huge desk, a big-ass 27″ monitor (or bigger), a comfortable chair (your ass spends a lot of time on it), the fastest internet you can buy, some quality speakers, a couple of cool posters and family pictures on the wall… This is the room where you spend most of your time during the week, so you need to make it a place where you love to go. Often I hear from people whose company allows remote work that their company should pay for all of this. I think that’s wrong. It’s a two-way street: your company gives you the choice, privilege and trust to work from home, so you, from your side, must make sure that your home office isn’t a step down from the office gear you have. Internet connection, chair and computer monitor are probably the most important bits here. If you try to be cheap on any of those, you’ll repay it in decreased productivity.

Rule 3: The Partner

Your partner is of utmost importance to making remote work a success. Don’t be fooled by the third place here: when your partner is not into it, all the other points are useless. It’s pretty simple and comes down to one core agreement you need to make when working from home: when you are working from home, you are not “at home”. When you work, there is no time for cleaning the house, doing the dishes, mowing the grass, etc. You are at work, and that needs to be seen as a full-time, serious thing. Your partner needs to understand that if you did any of these things, it would be bad for your career. Many people think this is easy, but I’ve seen many fail.
A lot of people still see working from home as something that is not the same as “regular work”. They think you’ve got all the time in the world now. Wrong. Talk it through with your partner. If he/she doesn’t see it (or is jealous), don’t do it. Rule 4: Communicate, communicate, communicate More than a team in an office, you need to communicate. If you don’t communicate, you simply don’t exist. At Activiti, we are skyping a lot during the day. We all know exactly what the other team members are currently doing. We have an informal agreement that we don’t announce a call typically. You just press the ‘call’ button and the other side has to pick it up and respond. It’s the only way remote work can work. Communicate often. Also important: when you are away from your laptop, say it in a common chat window. There is nothing as damaging for remote workers as not picking up Skype/Phone for no reason. Rule 5: Trust People The last rule is crucial. Working remote is based on trust. Unlike in the office, there is no physical proof that you are actually working (although being physically in an office is not correlated with being productive!). You need to trust people that they do their job. But at the same time, don’t be afraid to check up on people’s work (for us, those are the commits) and ask the questions why something is taking longer than expected. Trust grows both ways. The second part of this trust-story is that there needs to be trust from the company to the team. If that trust is missing, your team won’t be working remote for long. At Activiti, we are very lucky to have Paul Holmes Higgin as our manager. He is often in the office of Alfresco and makes sure that whatever we are doing is known to the company and vice versa. He attends many of the (online) meetings that happen company wide all the time so that we are free to code. There is nothing as bad for a remote team as working in isolation.Conclusion So those are my five (personal!) 
rules I follow when working from home. With all this bad press from the likes of Reddit and Yahoo, I thought it was time for some positive feedback. Remote work is perfect for me: it allows me to be very productive, while still being able to see my family a lot. Even though I put in a lot of hours every week, I still see my kids grow up every single day and I am there for them when they need me. And that is something priceless.

Reference: My Five Rules for Remote Working from our JCG partner Joram Barrez at the Small steps with big feet blog....

Spring Rest API with Swagger – Integration and configuration

Nowadays, exposed APIs are finally getting the attention they deserve and companies are starting to recognize their strategic value. However, working with 3rd party APIs can be really tedious work, especially when these APIs are not maintained, are ill designed or are missing any documentation. That’s why I decided to look around for ways to provide fellow programmers and other team members with proper documentation when it comes to integration. One way to go is to use WADL, a standard specifically designed to describe HTTP-based web applications (like REST web services). However, there are a few drawbacks to WADL that made me look elsewhere for ways to properly document and expose API documentation.

Swagger

Another way might be to go with Swagger. Swagger is both a specification and a framework implementation that supports the full life cycle of RESTful web service development. The specification itself is language-agnostic, which might come in handy in a heterogeneous environment. Swagger also comes with the Swagger UI module, which allows both programmers and other team members to meaningfully interact with the APIs and gives them a way to work with them while providing access to the documentation.

Spring with Jersey example

Not long ago, I came across an article describing the Swagger specification and I was pretty intrigued to give it a try. At that time I was working on a sweet little microservice, so I had an ideal testing ground to try it out. Based on that I prepared a short example of how to use Swagger in your application when you are using the Spring framework and Jersey. The example code models a simplified REST API for a subset of possible APIs in a shop application scenario. Note: import declarations were omitted from all Java code samples.

Jersey servlet

Before we get down to introducing Swagger to our code, let’s take a moment and explore our example a little. First of all, let’s look at web.xml.
There is a plain old web.xml with a few simple declarations and mappings in the code sample below. Nothing special here, just a bunch of configuration.

```xml
<web-app id="SpringWithSwagger" version="2.4"
         xmlns="http://java.sun.com/xml/ns/j2ee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd">

    <display-name>Spring Jersey Swagger Example</display-name>

    <context-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>classpath:beans.xml</param-value>
    </context-param>

    <listener>
        <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
    </listener>

    <servlet>
        <servlet-name>jersey-serlvet</servlet-name>
        <servlet-class>org.glassfish.jersey.servlet.ServletContainer</servlet-class>
        <init-param>
            <param-name>javax.ws.rs.Application</param-name>
            <param-value>com.jakubstas.swagger.SpringWithSwagger</param-value>
        </init-param>
        <load-on-startup>1</load-on-startup>
    </servlet>

    <servlet-mapping>
        <servlet-name>jersey-serlvet</servlet-name>
        <url-pattern>/rest/*</url-pattern>
    </servlet-mapping>
</web-app>
```

Endpoint

The second thing we are going to need is the endpoint that defines our REST service – for example, an employee endpoint for listing current employees. Once again, there is nothing extraordinary here, only a few exposed methods providing core API functionality.
```java
package com.jakubstas.swagger.rest;

@Path("/employees")
public class EmployeeEndpoint {

    private List<Employee> employees = new ArrayList<Employee>();

    {
        final Employee employee = new Employee();
        employee.setEmployeeNumber(1);
        employee.setFirstName("Jakub");
        employee.setSurname("Stas");

        employees.add(employee);
    }

    @OPTIONS
    public Response getProductsOptions() {
        final String header = HttpHeaders.ALLOW;
        final String value = Joiner.on(", ").join(RequestMethod.GET, RequestMethod.OPTIONS).toString();

        return Response.noContent().header(header, value).build();
    }

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Response getEmployees() {
        return Response.ok(employees).build();
    }
}
```

Swagger dependencies

The first thing we need to do is to include all required Swagger dependencies in our pom.xml, as shown below (lucky for us, it’s only a single dependency).

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    ...
    <properties>
        ...
        <swagger-version>1.3.8</swagger-version>
        ...
    </properties>
    ...
    <dependencies>
        ...
        <!-- Swagger -->
        <dependency>
            <groupId>com.wordnik</groupId>
            <artifactId>swagger-jersey2-jaxrs_2.10</artifactId>
            <version>${swagger-version}</version>
        </dependency>
        ...
    </dependencies>
</project>
```

Swagger configuration

Now, let’s take a look at how Swagger integrates into our example. As with any introduction of a new dependency to your project, you should be concerned with how invasive and costly the process will be. The only affected places will be your REST endpoints, the Spring configuration and some transfer objects (given you choose to include them), as you will see in the following code samples. This means that no configuration is needed in web.xml for Swagger to work with your Spring application, so it’s rather non-invasive in this way and remains constrained within the API realm.
You need three basic properties for Swagger to work:

- API version – provides the version of the application API
- base path – the root URL serving the API
- resource package – defines the package where to look for Swagger annotations

Since API maintenance is primarily the responsibility of analysts and programmers, I like to keep this configuration in a separate property file called swagger.properties. This way it is not mixed with the application configuration and is less likely to be modified by accident. The following snippet depicts such a configuration file.

```
swagger.apiVersion=1.0
swagger.basePath=http://[hostname/ip address]:[port]/SpringWithSwagger/rest
swagger.resourcePackage=com.jakubstas.swagger.rest
```

For the second part of the configuration I created a configuration bean making use of the previously mentioned properties. Using Spring’s @PostConstruct annotation, which provides a bean life-cycle hook, we are able to instantiate and set certain attributes that Swagger requires but is not able to obtain by itself (in the current version, at least).

```java
package com.jakubstas.swagger.rest.config;

/**
 * Configuration bean to set up Swagger.
 */
@Component
public class SwaggerConfiguration {

    @Value("${swagger.resourcePackage}")
    private String resourcePackage;

    @Value("${swagger.basePath}")
    private String basePath;

    @Value("${swagger.apiVersion}")
    private String apiVersion;

    @PostConstruct
    public void init() {
        final ReflectiveJaxrsScanner scanner = new ReflectiveJaxrsScanner();
        scanner.setResourcePackage(resourcePackage);

        ScannerFactory.setScanner(scanner);
        ClassReaders.setReader(new DefaultJaxrsApiReader());

        final SwaggerConfig config = ConfigFactory.config();
        config.setApiVersion(apiVersion);
        config.setBasePath(basePath);
    }

    public String getResourcePackage() {
        return resourcePackage;
    }

    public void setResourcePackage(String resourcePackage) {
        this.resourcePackage = resourcePackage;
    }

    public String getBasePath() {
        return basePath;
    }

    public void setBasePath(String basePath) {
        this.basePath = basePath;
    }

    public String getApiVersion() {
        return apiVersion;
    }

    public void setApiVersion(String apiVersion) {
        this.apiVersion = apiVersion;
    }
}
```

The last step is to declare the following three Swagger beans: ApiListingResourceJSON, ApiDeclarationProvider and ResourceListingProvider.

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                           http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd">

    <context:component-scan base-package="com.jakubstas.swagger" />
    <context:property-placeholder location="classpath:swagger.properties" />

    <bean class="com.wordnik.swagger.jaxrs.listing.ApiListingResourceJSON" />
    <bean class="com.wordnik.swagger.jaxrs.listing.ApiDeclarationProvider" />
    <bean class="com.wordnik.swagger.jaxrs.listing.ResourceListingProvider" />
</beans>
```

Swagger is now configured and you can check whether your setup is working properly.
Just enter the URL from your basePath variable followed by /api-docs into your browser and check the result. You should see output similar to the following snippet, which I received after accessing http://[hostname]:[port]/SpringWithSwagger/rest/api-docs/ in my example.

```
{"apiVersion":"1.0","swaggerVersion":"1.2"}
```

What is next?

If you followed all the steps, you should now have a working setup to start with API documentation. I will showcase how to describe APIs using Swagger annotations in my next article, called Spring Rest API with Swagger – Creating documentation. The code used in this micro series is published on GitHub and provides examples for all discussed features and tools. Please enjoy!

Reference: Spring Rest API with Swagger – Integration and configuration from our JCG partner Jakub Stas at the Jakub Stas blog....

How To Control Access To REST APIs

Exposing your data or application through a REST API is a wonderful way to reach a wide audience. The downside of a wide audience, however, is that it’s not just the good guys who come looking.

Securing REST APIs

Security consists of three factors:

- Confidentiality
- Integrity
- Availability

In terms of Microsoft’s STRIDE approach, the security compromises we want to avoid with each of these are Information Disclosure, Tampering, and Denial of Service. The remainder of this post will only focus on Confidentiality and Integrity. In the context of an HTTP-based API, Information Disclosure is applicable to GET methods and any other methods that return information. Tampering is applicable to PUT, POST, and DELETE.

Threat Modeling REST APIs

A good way to think about security is by looking at all the data flows. That’s why threat modeling usually starts with a Data Flow Diagram (DFD). In the context of a REST API, a close approximation to the DFD is the state diagram. For proper access control, we need to secure all the transitions. The traditional way to do that is to specify restrictions at the level of URI and HTTP method. For instance, this is the approach that Spring Security takes. The problem with this approach, however, is that both the method and the URI are implementation choices. URIs shouldn’t be known to anybody but the API designer/developer; the client will discover them through link relations. Even the HTTP methods can be hidden until runtime with mature media types like Mason or Siren. This is great for decoupling the client and server, but now we have to specify our security constraints in terms of implementation details! This means only the developers can specify the access control policy. That, of course, flies in the face of best security practices, where the access control policy is externalized from the code (so it can be reused across applications) and specified by a security officer rather than a developer.
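For reference, the URI-and-method style of restriction mentioned above looks roughly like this in Spring Security's Java configuration (a sketch with hypothetical paths and roles, not taken from any real application):

```java
// Sketch of URI/method-level access rules in Spring Security.
// Note how the rules are expressed in terms of URIs and HTTP methods --
// exactly the implementation details a hypermedia API tries to hide.
@Configuration
@EnableWebSecurity
public class ApiSecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests()
            .antMatchers(HttpMethod.GET, "/accounts/**").hasRole("USER")
            .antMatchers(HttpMethod.POST, "/accounts/**").hasRole("ADMIN")
            .anyRequest().denyAll();
    }
}
```

Any change to the URI layout or the methods behind a link relation forces a change to this configuration, which is the coupling problem the rest of the post addresses.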
So how do we satisfy both requirements?

Authorizing REST APIs

I think the answer lies in the state diagram underlying the REST API. Remember, we want to authorize all transitions. Yes, a transition in an HTTP-based API is implemented using an HTTP method on a URI. But in REST, we shield the URI using a link relation. The link relation is very closely related to the type of action you want to perform. The same link relation can be used from different states, so the link relation can’t be the whole answer. We also need the state, which is based on the representation returned by the REST server. This representation usually contains a set of properties and a set of links. We’ve got the links covered with the link relations, but we also need the properties. In XACML terms, the link relation indicates the action to be performed, while the properties correspond to resource attributes. Add to that the subject attributes obtained through the authentication process, and you have all the ingredients for making an XACML request! There are two places where such access control checks come into play. The first is obviously when receiving a request. You should also check permissions on any links you want to put in the response. The links that the requester is not allowed to follow should be omitted from the response, so that the client can faithfully present the next choices to the user.

Using XACML For Authorizing REST APIs

I think the above shows that REST and XACML are a natural fit. All the more reason to check out XACML if you haven’t already, especially XACML’s REST Profile and the forthcoming JSON Profile.

Reference: How To Control Access To REST APIs from our JCG partner Remon Sinnema at the Secure Software Development blog....
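The transition-based checks described above can be sketched in plain Java. This toy class (all names invented for illustration; a real implementation would delegate the decision to a XACML Policy Decision Point) authorizes by link relation and filters the links that go into a response:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Toy sketch: authorize state transitions by link relation instead of
// URI + HTTP method. Names are illustrative, not from any real framework.
public class TransitionAuthorizer {

    // role -> set of allowed link relations (actions, in XACML terms)
    private final Map<String, Set<String>> permissions = new HashMap<>();

    public void allow(String role, String linkRelation) {
        permissions.computeIfAbsent(role, r -> new HashSet<>()).add(linkRelation);
    }

    // check used when a request for a given transition comes in
    public boolean isAllowed(String role, String linkRelation) {
        return permissions.getOrDefault(role, Collections.emptySet()).contains(linkRelation);
    }

    // filter links before putting them in a response, as the post suggests,
    // so the client only presents choices the user may actually follow
    public List<String> visibleLinks(String role, List<String> candidateLinks) {
        List<String> result = new ArrayList<>();
        for (String rel : candidateLinks) {
            if (isAllowed(role, rel)) {
                result.add(rel);
            }
        }
        return result;
    }
}
```

The same check guards both places mentioned in the post: incoming requests and outgoing links.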

Understanding strategy pattern by designing game of chess

Today we will try to understand the Strategy Pattern with the help of an example. The example we will consider is the game of chess. The intention here is to explain the strategy pattern, not to build a comprehensive chess game solution.

Strategy Pattern: The Strategy pattern is known as a behavioural pattern – it’s used to manage algorithms, relationships and responsibilities between objects. The main benefit of the strategy pattern is the ability to choose an algorithm/behaviour at runtime.

Let’s try to understand this by using it to design the chess game. In chess there are different characters like King, Queen and Bishop, and all of them have different moves. There could be many possible solutions to this design; let’s explore them one by one:

1. The first way would be to define the movement in each and every class; every character would have its own move() implementation. This way there is no code reusability and we cannot change the implementation at runtime.
2. Make a separate MovementController class and put an if/else for each type of movement of an object:

```java
public class BadDesginCharacterMovementController {

    public void move(Character character) {
        if (character instanceof King) {
            System.out.print("Move One Step forward");
        } else if (character instanceof Queen) {
            System.out.print("Move One Step forward");
        } else if (character instanceof Bishop) {
            System.out.print("Move diagonally");
        }
    }
}
```

This is a poor design with strong coupling, and the if/else chain makes it ugly. So we would like a design with loose coupling, where we can decide the movement algorithm at runtime and get code reusability. Let’s see the complete implementation using the Strategy Pattern. Below is the class diagram of our implementation. The complete source code can be downloaded from here. We will have our base class, Character, which all the characters can extend to set their own MovementBehaviour implementation.
public abstract class Character {

    private MovementBehaviour movementBehaviour;

    String move() {
        return movementBehaviour.move();
    }

    public void setMovementBehaviour(MovementBehaviour movementBehaviour) {
        this.movementBehaviour = movementBehaviour;
    }
}

This class has a movement behaviour:

public interface MovementBehaviour {
    String move();
}

So each character – King, Queen, Bishop – will extend Character, and each can have its own implementation of MovementBehaviour:

public class King extends Character {
    public King() {
        setMovementBehaviour(new SingleForward());
    }
}

Here, for simplicity, I have called the setMovementBehaviour method inside the constructor of King. Similarly, another piece, the Queen, can be defined as:

public class Queen extends Character {
    public Queen() {
        setMovementBehaviour(new SingleForward());
    }
}

And the Bishop as:

public class Bishop extends Character {
    public Bishop() {
        setMovementBehaviour(new DiagonalMovement());
    }
}

The implementations of the different movements can be as follows. Single forward:

public class SingleForward implements MovementBehaviour {
    @Override
    public String move() {
        return "move one step forward";
    }
}

Diagonal movement:

public class DiagonalMovement implements MovementBehaviour {
    @Override
    public String move() {
        return "Moving Diagonally";
    }
}

With this example we can understand the Strategy Pattern.Reference: Understanding strategy pattern by designing game of chess from our JCG partner Anirudh Bhatnagar at the anirudh bhatnagar blog....
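To see the run-time flexibility in action, here is a minimal, self-contained driver. The class is named Piece rather than Character to avoid clashing with java.lang.Character in a single file; the behaviour names follow the article.

```java
// Minimal, self-contained sketch of the pattern above,
// showing the behaviour being swapped at run time.
interface MovementBehaviour {
    String move();
}

class SingleForward implements MovementBehaviour {
    public String move() { return "move one step forward"; }
}

class DiagonalMovement implements MovementBehaviour {
    public String move() { return "Moving Diagonally"; }
}

class Piece {
    private MovementBehaviour movementBehaviour;

    public void setMovementBehaviour(MovementBehaviour behaviour) {
        this.movementBehaviour = behaviour;
    }

    public String move() {
        return movementBehaviour.move();
    }
}

public class StrategyDemo {
    public static void main(String[] args) {
        Piece queen = new Piece();
        queen.setMovementBehaviour(new SingleForward());
        System.out.println(queen.move()); // move one step forward

        // The strategy can be replaced at run time, no if/else needed:
        queen.setMovementBehaviour(new DiagonalMovement());
        System.out.println(queen.move()); // Moving Diagonally
    }
}
```

Compare this with the instanceof chain in the bad design: adding a new movement here means adding one class, not editing a central controller.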

CallSerially The EDT & InvokeAndBlock (Part 1)

We last explained some of the concepts behind the EDT in 2008, so it's high time we wrote about it again. There is a section about it in the developer guide, as well as in the courses on Udemy, but since this is the most important thing to understand in Codename One, it bears repeating. One of the nice things about the EDT is that many of its concepts are similar to the concepts in pretty much every other GUI environment (Swing/FX, Android, iOS etc.), so if you can understand this explanation it might help you when working on other platforms too. Codename One can have as many threads as you want; however, there is one thread created internally in Codename One named "EDT", for Event Dispatch Thread. This name doesn't do the thread justice, since it handles everything, including painting. You can imagine the EDT as a loop such as this:

while(codenameOneRunning) {
    performEventCallbacks();
    performCallSeriallyCalls();
    drawGraphicsAndAnimations();
    sleepUntilNextEDTCycle();
}

The general rule of thumb in Codename One is: every time Codename One invokes a method, it's probably on the EDT (unless explicitly stated otherwise); every time you invoke something in Codename One, it should be on the EDT (unless explicitly stated otherwise). There are a few notable special cases:

- NetworkManager/ConnectionRequest – use the network thread internally and not the EDT. However, they can/should be invoked from the EDT.
- BrowserNavigationCallback – due to its unique function it MUST be invoked on the native browser thread.
- Display's invokeAndBlock/startThread – create completely new threads.

Other than those, pretty much everything is on the EDT. If you are unsure, you can use the Display.isEDT method to check whether you are on the EDT or not.

EDT Violations

You can violate the EDT in two major ways:

- Call a method in Codename One from a thread that isn't the EDT thread (e.g. the network thread or a thread created by you).
- Do a CPU-intensive task (such as reading a large file) on the EDT – this will effectively block all event processing, painting etc., making the application feel slow.

Luckily, we have a tool in the simulator: the EDT violation detection tool. This effectively prints a stack trace for suspected violations of the EDT. It's not foolproof and might land you with false positives, but it should help you with some of these issues, which are hard to detect. So how do you prevent an EDT violation? To prevent abuse of the EDT thread (slow operations on the EDT), just spawn a new thread using either new Thread(), Display.startThread or invokeAndBlock (more on that later). Then, when you need to broadcast your updates back to the EDT, you can use callSerially or callSeriallyAndWait.

CallSerially

callSerially invokes the run() method of the Runnable argument it receives on the Event Dispatch Thread. This is very useful if you are on a separate thread, but it is also occasionally useful when we are on the EDT and want to postpone actions to the next cycle of the EDT (more on that next time). callSeriallyAndWait is identical to callSerially, but it waits for the callSerially to complete before returning. For obvious reasons it can't be invoked on the EDT. In the second part of this mini tutorial I will discuss invokeAndBlock and why we might want to use callSerially when we already are on the EDT.Reference: CallSerially The EDT & InvokeAndBlock (Part 1) from our JCG partner Shai Almog at the Codename One blog....
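The "work off the EDT, publish back with callSerially" pattern can be sketched in plain Java. ToyEdt below is our own toy dispatch loop standing in for Codename One's real EDT, not Codename One API; it just drains a queue of Runnables on a single thread, mirroring the performCallSeriallyCalls() step of the loop shown above.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Toy stand-in for the EDT: one thread serially draining a queue of Runnables.
public class ToyEdt {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
    private final Thread edt;

    public ToyEdt() {
        edt = new Thread(() -> {
            try {
                while (true) {
                    queue.take().run(); // run callbacks serially, one at a time
                }
            } catch (InterruptedException e) {
                // loop exits when the EDT thread is interrupted
            }
        }, "EDT");
        edt.start();
    }

    // Equivalent in spirit to Display.callSerially(Runnable)
    public void callSerially(Runnable r) {
        queue.add(r);
    }

    // Equivalent in spirit to Display.isEDT()
    public boolean isEdt() {
        return Thread.currentThread() == edt;
    }

    public static void main(String[] args) throws Exception {
        ToyEdt display = new ToyEdt();
        // Slow work belongs on a worker thread...
        new Thread(() -> {
            String result = "loaded"; // pretend this was a large file read
            // ...and the UI update is posted back to the EDT:
            display.callSerially(() ->
                System.out.println("on EDT? " + display.isEdt() + ", got: " + result));
        }).start();
        Thread.sleep(200); // give the callback time to run before main exits
    }
}
```

Because every Runnable runs on the one EDT thread, callbacks never race each other, which is exactly why slow work inside one of them would freeze painting and event handling.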

Ceylon: Planning the future of Ceylon 1.x

With the release of Ceylon 1.1, we've reached a point where we need to do some serious thinking about our priorities for the development of Ceylon 1.1.5, 1.2, and beyond. I definitely don't yet have a crystal-clear vision of what is going to be in 1.2, so we're also looking for community feedback on this. I do know of one item which is the top priority right now, and will be the main feature of Ceylon 1.1.5: serialization.

This was a feature that slipped from Ceylon 1.0, and which again narrowly missed out on inclusion in Ceylon 1.1. The concept behind serialization in Ceylon is to have an API responsible for assembling and disassembling objects that is agnostic as to the actual format of the serialized stream. Of course, this API also has to be platform neutral, in order to allow serialization between programs running on the JVM and programs running on a JavaScript VM. Tom Bentley already has a working prototype implementation. Once this feature is done, we can start working on serialization libraries supporting JSON and whatever else. I also count the following as high-priority areas of work:

- Java EE integration, and support for technologies like JPA and CDI.
- Adding properties to the language, that is, a new syntax for attribute references, allowing easy MVC UI bindings.
- Improving the Cayla web framework, and ceylon.html.

Beyond that, we're not sure where else we should concentrate development effort. Here are some things that stick out to me:

- Addition of named constructors, allowing multiple ways to instantiate and initialize a class.
- AST transformers – a system of compiler plugins, based around ceylon.ast, enabling advanced compile-time metaprogramming, which would form the foundation for LINQ-style queries, interceptors and proxies, autogeneration of equals(), hash, and string, and more.
- Addition of a syntax for expressing patterns in BNF.
- The Ceylon plugin for IntelliJ IDEA.
- Android support.
- Assemblies – a facility for packaging multiple modules into a deployable "application".
- New platform modules defining dynamic interfaces for typesafe interaction with JavaScript APIs such as the DOM, jQuery, etc.
- Interoperation with dynamic languages on the JVM, via Ceylon's dynamic blocks and dynamic interfaces.
- Enabling the use of Ceylon for scripting.

We can't do all of this in Ceylon 1.2. Therefore, we're looking for feedback from the community. Let us know, here in the comments or on the mailing list, what you feel is missing from Ceylon, either from the above list or whatever else you think is important.Reference: Ceylon: Planning the future of Ceylon 1.x from our JCG partner Gavin King at the Ceylon Team blog....

right-pad values with XSLT

In this post, an XSLT function that can be used to right-pad the value of an element with a chosen character to a certain length. No rocket science, but this might come in handy again, so by putting it down here I don't have to reinvent it later. The function itself looks like:

<xsl:stylesheet version="2.0"
    xmlns:functx="http://my/functions"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <xsl:function name="functx:pad-string-to-length" as="xsd:string">
    <xsl:param name="stringToPad" as="xsd:string?"/>
    <xsl:param name="padChar" as="xsd:string"/>
    <xsl:param name="length" as="xsd:integer"/>
    <xsl:sequence select="
      substring(
        string-join(($stringToPad, for $i in (1 to $length) return $padChar), ''),
        1, $length)"/>
  </xsl:function>

</xsl:stylesheet>

And with this function available you can use it like:

<xsl:template match="/">
  <xsl:value-of select="functx:pad-string-to-length(//short_value, '0', 12)"/>
</xsl:template>

An input XML like:

<test>
  <short_value>123</short_value>
</test>

will in this case give as a result: 123000000000. By the way, this function only works with XSLT 2.0!Reference: right-pad values with XSLT from our JCG partner Pascal Alma at the The Pragmatic Integrator blog....
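For comparison, the same right-pad-then-truncate logic can be sketched in plain Java. This is our own helper mirroring the XSLT function's behaviour (pad with the given character, then cut to the requested length), not part of the stylesheet:

```java
public class PadString {

    // Right-pad stringToPad with padChar, then truncate to length characters,
    // mirroring the substring(string-join(...)) trick in the XSLT above.
    // A null input pads to an all-padChar string, like the empty sequence in XSLT.
    static String padStringToLength(String stringToPad, char padChar, int length) {
        StringBuilder sb = new StringBuilder(stringToPad == null ? "" : stringToPad);
        while (sb.length() < length) {
            sb.append(padChar);
        }
        return sb.substring(0, length);
    }

    public static void main(String[] args) {
        System.out.println(padStringToLength("123", '0', 12)); // 123000000000
    }
}
```

Note that, like the XSLT version, this also truncates values that are already longer than the target length.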
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.