Quick, and a bit dirty, JSON Schema generation with MOXy 2.5.1

So I am working on a new REST API for an upcoming Oracle cloud service these days, and one of the things I needed was the ability to automatically generate a JSON Schema for the beans in my model. I am using MOXy to generate the JSON from POJOs, and as of version 2.5.1 of EclipseLink it has the ability to generate a JSON Schema from the bean model. There will be a more formal solution integrated into Jersey 2.x at a future date, but this solution will do for the moment if you want to play around with this.

The first class we need to put in place is a model processor (very much an internal Jersey class) that allows us to amend the resource model with extra methods and resources. To each resource in the model we add the JsonSchemaHandler, which does the hard work of generating a new schema. Since this is a simple POC there is no caching going on here; please be aware of this if you are going to use this in production code.

```java
import com.google.common.collect.Lists;
import example.Bean;
import java.io.IOException;
import java.io.StringWriter;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.List;
import javax.inject.Inject;
import javax.ws.rs.HttpMethod;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.core.Configuration;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.xml.bind.JAXBException;
import javax.xml.bind.SchemaOutputResolver;
import javax.xml.transform.Result;
import javax.xml.transform.stream.StreamResult;
import org.eclipse.persistence.jaxb.JAXBContext;
import org.glassfish.jersey.process.Inflector;
import org.glassfish.jersey.server.ExtendedUriInfo;
import org.glassfish.jersey.server.model.ModelProcessor;
import org.glassfish.jersey.server.model.ResourceMethod;
import org.glassfish.jersey.server.model.ResourceModel;
import org.glassfish.jersey.server.model.RuntimeResource;
import org.glassfish.jersey.server.model.internal.ModelProcessorUtil;
import org.glassfish.jersey.server.wadl.internal.WadlResource;

public class JsonSchemaModelProcessor implements ModelProcessor {

    private static final MediaType JSON_SCHEMA_TYPE = MediaType.valueOf("application/schema+json");
    private final List<ModelProcessorUtil.Method> methodList;

    public JsonSchemaModelProcessor() {
        methodList = Lists.newArrayList();
        methodList.add(new ModelProcessorUtil.Method("$schema", HttpMethod.GET,
                MediaType.WILDCARD_TYPE, JSON_SCHEMA_TYPE, JsonSchemaHandler.class));
    }

    @Override
    public ResourceModel processResourceModel(ResourceModel resourceModel, Configuration configuration) {
        return ModelProcessorUtil.enhanceResourceModel(resourceModel, true, methodList, true).build();
    }

    @Override
    public ResourceModel processSubResource(ResourceModel resourceModel, Configuration configuration) {
        return ModelProcessorUtil.enhanceResourceModel(resourceModel, true, methodList, true).build();
    }

    public static class JsonSchemaHandler implements Inflector<ContainerRequestContext, Response> {

        private final String lastModified =
                new SimpleDateFormat(WadlResource.HTTPDATEFORMAT).format(new Date());

        @Inject
        private ExtendedUriInfo extendedUriInfo;

        @Override
        public Response apply(ContainerRequestContext containerRequestContext) {

            // Find the resource that we are decorating, then work out the
            // return type of the first GET
            List<RuntimeResource> ms = extendedUriInfo.getMatchedRuntimeResources();
            List<ResourceMethod> rms = ms.get(1).getResourceMethods();
            Class responseType = null;
            found: for (ResourceMethod rm : rms) {
                if ("GET".equals(rm.getHttpMethod())) {
                    responseType = (Class) rm.getInvocable().getResponseType();
                    break found;
                }
            }

            if (responseType == null) {
                throw new WebApplicationException("Cannot resolve type for schema generation");
            }

            try {
                JAXBContext context = (JAXBContext) JAXBContext.newInstance(responseType);

                StringWriter sw = new StringWriter();
                final StreamResult sr = new StreamResult(sw);

                context.generateJsonSchema(new SchemaOutputResolver() {
                    @Override
                    public Result createOutput(String namespaceUri, String suggestedFileName) throws IOException {
                        return sr;
                    }
                }, responseType);

                return Response.ok().type(JSON_SCHEMA_TYPE)
                        .header("Last-modified", lastModified)
                        .entity(sw.toString()).build();
            } catch (JAXBException jaxb) {
                throw new WebApplicationException(jaxb);
            }
        }
    }
}
```

Note the very simple heuristic in the JsonSchemaHandler code: it assumes that for each resource there is a 1:1 mapping to a single JSON Schema element. This of course might not be true for your particular application.

Now that we have the schema generated in a known location we need to tell the client about it. The first thing we will do is make sure that there is a suitable Link header when the user invokes OPTIONS on a particular resource:

```java
import java.io.IOException;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerResponseContext;
import javax.ws.rs.container.ContainerResponseFilter;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.Link;
import javax.ws.rs.core.UriInfo;

public class JsonSchemaResponseFilter implements ContainerResponseFilter {

    @Context
    private UriInfo uriInfo;

    @Override
    public void filter(ContainerRequestContext containerRequestContext,
            ContainerResponseContext containerResponseContext) throws IOException {

        String method = containerRequestContext.getMethod();
        if ("OPTIONS".equals(method)) {
            Link schemaUriLink = Link.fromUriBuilder(uriInfo.getRequestUriBuilder()
                    .path("$schema")).rel("describedBy").build();
            containerResponseContext.getHeaders().add("Link", schemaUriLink);
        }
    }
}
```

Since this is JAX-RS 2.x we are working with, we are of course going to bundle all the bits together into a feature:

```java
import javax.ws.rs.core.Feature;
import javax.ws.rs.core.FeatureContext;

public class JsonSchemaFeature implements Feature {

    @Override
    public boolean configure(FeatureContext featureContext) {
        if (!featureContext.getConfiguration().isRegistered(JsonSchemaModelProcessor.class)) {
            featureContext.register(JsonSchemaModelProcessor.class);
            featureContext.register(JsonSchemaResponseFilter.class);
            return true;
        }
        return false;
    }
}
```

I am not going to show my entire set of POJO classes, but just quickly, this is the resource class with the @GET method required by the schema generation code:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/bean")
public class BeanResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Bean getBean() {
        return new Bean();
    }
}
```

And here is what you see if you perform a GET on the resource:

```
GET .../resources/bean
Content-Type: application/json

{
  "message" : "hello",
  "other" : {
    "message" : "OtherBean"
  },
  "strings" : [ "one", "two", "three", "four" ]
}
```

And OPTIONS:

```
OPTIONS .../resources/bean
Content-Type: text/plain
Link: <http://.../resources/bean/$schema>; rel="describedBy"

GET, OPTIONS, HEAD
```

Finally, if you resolve the schema resource:

```
GET .../resources/bean/$schema
Content-Type: application/schema+json

{
  "$schema" : "http://json-schema.org/draft-04/schema#",
  "title" : "example.Bean",
  "type" : "object",
  "properties" : {
    "message" : { "type" : "string" },
    "other" : { "$ref" : "#/definitions/OtherBean" },
    "strings" : {
      "type" : "array",
      "items" : { "type" : "string" }
    }
  },
  "additionalProperties" : false,
  "definitions" : {
    "OtherBean" : {
      "type" : "object",
      "properties" : {
        "message" : { "type" : "string" }
      },
      "additionalProperties" : false
    }
  }
}
```

There is quite a bit of work to do here, in particular generating the hypermedia extensions based on the declarative linking annotations that I forward-ported into Jersey 2.x a little while back.
But it does point towards a solution, and we get to exercise a variety of techniques to get something working now.

Reference: Quick, and a bit dirty, JSON Schema generation with MOXy 2.5.1 from our JCG partner Gerard Davison at the Gerard Davison's blog....

Java 8 LongAdders: The Right Way To Manage Concurrent Counters

I just lOvE new toys, and Java 8 has a bunch of them. This time around I want to talk about one of my favourites – concurrent adders. This is a new set of classes for managing counters written and read by multiple threads. The new API promises significant performance gains, while still keeping things simple and straightforward. As people have been managing concurrent counters since the dawn of multi-core architectures, let's take a look at some of the options Java has offered up until now, and how they perform compared to this new API.

Dirty counters – this approach means you're writing / reading from a regular object or static field across multiple threads. Unfortunately, this doesn't work, for two reasons. The first is that in Java, an A += B operation isn't atomic. If you open up the output bytecode, you'll see at least four instructions – one for loading the field value from the heap into the thread stack, a second for loading the delta, a third to add them and a fourth to set the result into the field. If more than one thread is doing this at the same time for the same memory location, you run a high chance of missing out on a write operation, as one thread can override the value of another (the classic "read-modify-write" problem). There's also another nasty angle to this which has to do with the volatility of the value. More on that below. This is such a rookie mistake, and one that's super hard to debug. If you do run across anybody doing this in your app, I'd like to ask a small favor. Run a search across your database for "Tal Weiss". If you see me there – delete my records. I'll feel safer.

Synchronized – the most basic of concurrency idioms, this blocks all other threads while reading or writing the value. While it works, it's a sure-fire way of turning your code into a DMV line.

RWLock – this slightly more sophisticated version of the basic Java lock enables you to discern between threads that change the value and need to block others vs. ones that only read and don't require a critical section. While this can be more efficient (assuming the number of writers is low), it's a pretty meh approach, as you're blocking the execution of all other threads when acquiring the write lock.

Volatile – this fairly misunderstood keyword essentially instructs the JIT compiler to de-optimize the run-time machine code, so that any modification to the field is immediately seen by other threads. This invalidates some of the JIT compiler's favorite optimizations of playing with the order in which assignments are applied to memory. Come again, you say? You heard me. The JIT compiler may change the order in which assignments to fields are made. This arcane little strategy (constrained by the happens-before rules) allows it to minimize the number of times the program needs to access the global heap, while still making sure your code is unaffected by it. Pretty sneaky…

So when should I use volatile counters? If you have just one thread updating a value and multiple threads consuming it, this is a really good strategy – no contention at all. So why not use it always, you ask? Because this doesn't work well when more than one thread is updating the field. Since A += B is not atomic, you're running a risk of overriding somebody else's write. Up until Java 8, what you needed to do for this was use an AtomicInteger.

AtomicInteger – this set of classes uses CAS (compare-and-swap) processor instructions to update the value of the counter. Sounds great, doesn't it? Well, yes and no. This works well as it utilizes a direct machine code instruction to set the value with minimum effect on the execution of other threads. The downside is that if it fails to set the value due to a race with another thread, it has to try again. Under high contention this can turn into a spin lock, where the thread has to continuously try and set the value in an infinite loop, until it succeeds. This isn't quite what we were looking for. Enter Java 8 with LongAdders.
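The contrast can be sketched in a few lines of plain JDK code: the retry loop below is roughly what AtomicInteger performs on every contended add (the real implementation relies on JVM intrinsics), while the main method shows a LongAdder absorbing the same contention without a retry storm. Thread and iteration counts are arbitrary illustrations, not the benchmark configuration discussed below.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.LongAdder;

public class CounterContrast {

    // Roughly what AtomicInteger does on addAndGet: read, compute, and retry
    // until compareAndSet wins the race. Under heavy contention this loop can
    // spin many times against a single hot memory location.
    static int casAdd(AtomicInteger counter, int delta) {
        while (true) {
            int current = counter.get();
            int next = current + delta;
            if (counter.compareAndSet(current, next)) {
                return next;
            }
            // lost the race to another thread: try again
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // LongAdder side: five threads increment 100,000 times each. Failed
        // CASes are absorbed into per-thread cells instead of being retried.
        LongAdder adder = new LongAdder();
        Thread[] threads = new Thread[5];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 100_000; j++) {
                    adder.increment();
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        // sum() folds the per-thread cells into one total: no updates lost.
        System.out.println(adder.sum()); // prints 500000
    }
}
```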
Java 8 Adders – this is such a cool new API I just can't stop gushing about it! From a usage perspective it's very similar to an AtomicInteger. Simply create a LongAdder and use intValue() and add() to read / update the value. The magic happens behind the scenes: when a straight CAS fails due to contention, this class stores the delta in an internal cell object allocated for that thread. It then adds the values of any pending cells to the sum when intValue() is called. This reduces the need to go back and CAS or block other threads. Pretty smart stuff!

So alright, enough talking – let's see this puppy in action. We've set up the following benchmark: reset a counter to zero and start to read and increment it using multiple threads. Stop when the counter reaches 10^8. We ran the benchmark on an i7 processor with 4 cores, with a total of ten threads – five for writing and five for reading – so we were bound to have some serious contention here. Notice that both dirty and volatile risk value overwrites. The code is available here.

The Bottom Line

- Concurrent adders clean house, with a 60-100% performance boost over atomic integers.
- Adding threads didn't make much of a difference, except when locking.
- Notice the huge performance penalty you get for using synchronized or RW-locks – an order of magnitude slower!

If you've already had the chance to use these classes in your code – I'd love to hear about it.

Additional reading: Brian Goetz on Java concurrency.

Reference: Java 8 LongAdders: The Right Way To Manage Concurrent Counters from our JCG partner Tal Weiss at the Takipi blog....

Easter Hack: Even More Critical Bugs in SSL/TLS Implementations

It’s been some time since my last blog post – time for writing is rare. But today, I’m very happy that Oracle released the brand new April Critical Patch Update, fixing 37 vulnerabilities in our beloved Java (seriously, no kidding – Java is simply a great language!). With that being said, all vulnerabilities reported by my colleagues (credits go to Juraj Somorovsky, Sebastian Schinzel, Erik Tews, Eugen Weiss, Tibor Jager and Jörg Schwenk) and me are fixed, and I highly recommend patching as soon as possible if you are running a server powered by JSSE! Additional results on crypto hardware suffering from vulnerable firmware are omitted at this moment, because the patch(es) isn't/aren't available yet – details will follow when the fix(es) is/are ready.

To keep this blog post as short as possible I will skip a lot of details, analysis and prerequisites you need to know to understand the attacks mentioned in this post. If you are interested, use the link at the end of this post to get a much more detailed report.

Resurrecting Fixed Attacks

Do you remember Bleichenbacher's clever million question attack on SSL from 1998? It was believed to be fixed with the following countermeasure specified in the TLS 1.0 RFC:

"The best way to avoid vulnerability to this attack is to treat incorrectly formatted messages in a manner indistinguishable from correctly formatted RSA blocks. Thus, when it receives an incorrectly formatted RSA block, a server should generate a random 48-byte value and proceed using it as the premaster secret. Thus, the server will act identically whether the received RSA block is correctly encoded or not." – Source: RFC 2246

In simple words, the server is advised to create a random PreMasterSecret in case of problems during processing of the received, encrypted PreMasterSecret (structure violations, decryption errors, etc.). The server must continue the handshake with the randomly generated PreMasterSecret and perform all subsequent computations with this value.
This leads to a fatal alert when checking the Finished message (because of different key material at client- and server-side), but it does not allow the attacker to distinguish valid from invalid (PKCS#1 v1.5 compliant and non-compliant) ciphertexts. In theory, an attacker gains no additional information on the ciphertext if this countermeasure is applied (correctly). Guess what? The fix itself can introduce problems:

- Different processing times caused by different code branches in the valid and invalid cases
- What happens if we can trigger Exceptions in the code responsible for branching?
- If we could trigger different Exceptions, how would that influence the timing behaviour?

Let's have a look at the second case first, because it is the easiest one to explain if you are familiar with Bleichenbacher's attack.

Exploiting PKCS#1 Processing in JSSE

A coding error in com.sun.crypto.provider.TlsPrfGenerator (a missing array length check and incorrect decoding) could be used to force an ArrayIndexOutOfBoundsException during PKCS#1 processing. The Exception finally led to a general error in the JSSE stack which is communicated to the client in the form of an INTERNAL_ERROR SSL/TLS alert message. What can we learn from this? The alert message is only sent if we are already inside the PKCS#1 decoding code blocks! With this side channel, Bleichenbacher's attack can be mounted again: an INTERNAL_ERROR alert message suggests a PKCS#1 structure that was recognized as such, but contained an error – any other alert message was caused by the different processing branch (the countermeasure against this attack). The side channel is only triggered if the PKCS#1 block contains a specific structure. This structure is shown below. If a 00 byte is contained in any of the red marked positions, the side channel will help us to recognize these ciphertexts. We tested our resurrected Bleichenbacher attack and were able to get the decrypted PreMasterSecret back.
This took about 5h and 460,000 queries to the target server for a 2048 bit key. Sounds like a lot? No problem… Using the newest, high-performance adaptation of the attack (many thanks to Graham Steel for the very helpful discussions!), we needed only about 73,710 queries on average for a 4096 bit RSA key! JSSE was successfully exploited a first time. But let's have a look at a much more complicated scenario – no obvious presence of a side channel at all :-( Maybe we can use the first case…

Secret-Dependent Processing Branches Lead to Timing Side Channels

Something conspicuous about the random PreMasterSecret generation (you remember, the Bleichenbacher countermeasure) was already obvious during the code analysis of JSSE for the previous attack: the random PreMasterSecret was only generated if problems occurred during PKCS#1 decoding. Otherwise, no random bytes were generated (sun.security.ssl.Handshaker.calculateMasterSecret(…)). The question is, how time consuming is the generation of a random PreMasterSecret? Well, it depends, and there is no definitive answer to this question. Measuring the time for valid and invalid ciphertexts revealed blurred results. But at the very least, having different branches with different processing times introduces the chance of a timing side channel. This is why OpenSSL was independently patched during our research to guarantee equal processing times for both valid and invalid ciphertexts.

Risks of Modern Software Design

To make a long story short, it turned out that it was not the random number generation that caused the timing side channel, but the concept of creating and handling Exceptions. Throwing and catching Exceptions is a very expensive task with regard to processing time. Unfortunately, the Java code responsible for PKCS#1 decoding (sun.security.rsa.RSAPadding.unpadV15(…)) was written with the best intentions from a developer's point of view: it throws Exceptions if errors occur during PKCS#1 decoding.
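To make the idea concrete, here is a rough sketch of the RFC 2246 countermeasure written without Exceptions, so that valid and invalid blocks take the same code path. This is an illustration only, not production crypto code: a real implementation must also enforce the minimum padding length and use genuinely constant-time checks.

```java
import java.security.SecureRandom;

public class PreMasterSecretDecoder {

    private static final SecureRandom RANDOM = new SecureRandom();

    // On any structure violation we fall through to a freshly generated random
    // 48-byte PreMasterSecret: the handshake then fails later at the Finished
    // message, without throwing an Exception that leaks which check failed.
    static byte[] extractPreMasterSecret(byte[] decrypted) {
        byte[] secret = new byte[48];
        RANDOM.nextBytes(secret); // default: random secret (the countermeasure)

        // Expected layout: 0x00 0x02 <non-zero padding> 0x00 <48-byte secret>
        boolean valid = decrypted.length >= 51
                && decrypted[0] == 0x00
                && decrypted[1] == 0x02
                && decrypted[decrypted.length - 49] == 0x00;
        // Padding bytes between the block type and the separator must be
        // non-zero; accumulate the result in a flag instead of branching.
        for (int i = 2; i < decrypted.length - 49; i++) {
            valid &= decrypted[i] != 0x00;
        }
        if (valid) {
            System.arraycopy(decrypted, decrypted.length - 48, secret, 0, 48);
        }
        return secret;
    }
}
```

The point of the flag-based checks is that there is no error branch to time: malformed input costs essentially the same as well-formed input, unlike the throw/catch pattern in unpadV15 described above.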
Time measurements revealed significant differences in the response time of a server when confronted with valid/invalid PKCS#1 structures. These differences could even be measured in a live environment (a university network) with a lot of traffic and noise on the line. Again, how is this useful? It's always the same – once you know that the ciphertext reached the PKCS#1 decoding branch, you know it was recognized as PKCS#1, and thus you have a useful and valid side channel for Bleichenbacher's attack. The attack on an OpenJDK 1.6 powered server took about 19.5h and 18,600 oracle queries in our live setup! JSSE was hit a second time….

OAEP Comes To The Rescue

Some of you might say "Switch to OAEP and all of your problems are gone….". I agree, partly. OAEP will indeed fix a lot of security problems (but definitely not all!), but only if implemented correctly. Manger showed that implementing OAEP the wrong way could have disastrous results. While looking at the OAEP decoding code in sun.security.rsa.RSAPadding, it turned out that the code contained behaviour similar to the one described by Manger as problematic. This could have led to another side channel if SSL/TLS already offered OAEP support….

All the vulnerabilities mentioned in this post are fixed, but others are in line to follow… We submitted a research paper which will explain the vulnerabilities mentioned here in more depth, and the unpublished ones as well, so stay tuned – there's more to come. Many thanks to my fellow researchers Juraj Somorovsky, Sebastian Schinzel, Erik Tews, Eugen Weiss, Tibor Jager and Jörg Schwenk – all of our findings wouldn't have been possible without everyone's special contribution. It needs a skilled team to turn theoretical attacks into practice!
A more detailed analysis of all vulnerabilities listed here, as well as a lot more on SSL/TLS security, can be found in my PhD thesis: 20 Years of SSL/TLS Research: An Analysis of the Internet's Security Foundation.

Reference: Easter Hack: Even More Critical Bugs in SSL/TLS Implementations from our JCG partner Christopher Meyer at the Java security and related topics blog....

Grails Goodness: Extending IntegrateWith Command

We can extend the integrate-with command in Grails to generate files for a custom IDE or build system. We must add a _Events.groovy file to our Grails project and then write an implementation for the eventIntegrateWithStart event. Inside the event we must define a new closure with our code to generate files. The name of the closure must have the following pattern: binding.integrateCustomIdentifier. The value for CustomIdentifier can be used as an argument for the integrate-with command.

Suppose we want to extend integrate-with to generate a simple Sublime Text project file. First we create a template Sublime Text project file where we define folders for a Grails application. We create the folder src/ide-support/sublimetext and add the file grailsProject.sublime-project with the following contents:

```json
{
    "folders": [
        { "name": "Domain classes", "path": "grails-app/domain" },
        { "name": "Controllers", "path": "grails-app/controllers" },
        { "name": "Taglibs", "path": "grails-app/taglib" },
        { "name": "Views", "path": "grails-app/views" },
        { "name": "Services", "path": "grails-app/services" },
        { "name": "Configuration", "path": "grails-app/conf" },
        { "name": "grails-app/i18n", "path": "grails-app/i18n" },
        { "name": "grails-app/utils", "path": "grails-app/utils" },
        { "name": "grails-app/migrations", "path": "grails-app/migrations" },
        { "name": "web-app", "path": "web-app" },
        { "name": "Scripts", "path": "scripts" },
        { "name": "Sources:groovy", "path": "src/groovy" },
        { "name": "Sources:java", "path": "src/java" },
        { "name": "Tests:integration", "path": "test/integration" },
        { "name": "Tests:unit", "path": "test/unit" },
        { "name": "All files", "follow_symlinks": true, "path": "." }
    ]
}
```

Next we create the file scripts/_Events.groovy:

```groovy
includeTargets << grailsScript("_GrailsInit")

eventIntegrateWithStart = {

    // Usage: integrate-with --sublimeText
    binding.integrateSublimeText = {

        // Copy template file.
        ant.copy(todir: basedir) {
            fileset(dir: "src/ide-support/sublimetext/")
        }

        // Move template file to real project file with name of Grails application.
        ant.move(file: "$basedir/grailsProject.sublime-project",
                 tofile: "$basedir/${grailsAppName}.sublime-project",
                 overwrite: true)

        grailsConsole.updateStatus "Created SublimeText project file"
    }
}
```

We are done and can now run the integrate-with command with the new argument sublimeText:

```
$ grails integrate-with --sublimeText
| Created SublimeText project file.
$
```

If we open the project in Sublime Text we see our folder structure for a Grails application.

Code written with Grails 2.3.7.

Reference: Grails Goodness: Extending IntegrateWith Command from our JCG partner Hubert Ikkink at the JDriven blog....

Agile – What’s a Manager to Do?

As a manager, when I first started learning about Agile development, I was confused by the fuzzy way that Agile teams and projects are managed (or manage themselves), and frustrated and disappointed by the negative attitude towards managers and management in general. Attempts to reconcile project management and Agile haven't answered these concerns. The PMI-ACP does a good job of making sure that you understand Agile principles and methods (mostly Scrum and XP, with some Kanban and Lean), but is surprisingly vague about what an Agile project manager is or does. Even a book like the Software Project Manager's Bridge to Agility, intended to bridge PMI's project management practices and Agile, fails to come up with a meaningful job for managers or project managers in an Agile world. In Scrum (which is what most people mean when they say Agile today), there is no place for project managers at all: responsibilities for management are spread across the Product Owner, the Scrum Master and the development team.

"We have found that the role of the project manager is counterproductive in complex, creative work. The project manager's thinking, as represented by the project plan, constrains the creativity and intelligence of everyone else on the project to that of the plan, rather than engaging everyone's intelligence to best solve the problems. In Scrum, we have removed the project manager. The Product Owner, or customer, provides just-in-time planning by telling the development team what is needed, as often as every month. The development team manages itself, turning as much of what the product owner wants into usable product as possible. The result is high productivity, creativity, and engaged customers. We have replaced the project manager with the Scrum Master, who manages the process and helps the project and organization transition to agile practices."
– Ken Schwaber, Agility and PMI, 2011

Project managers have the choice of becoming a Scrum Master (if they can accept a servant leader role and learn to be an effective Agile coach – and if the team will accept them), becoming a Product Owner (if they have deep enough domain knowledge and other skills), or finding another job somewhere else.

Project Manager as Product Owner

The Product Owner is a command-and-control position responsible for the "what" part of a development project. It's a big job. The Product Owner owns the definition of what needs to be built, decides what gets done and in what order, approves changes to scope and makes scope / schedule / cost trade-offs, and decides when work is done. The Product Owner manages and represents the business stakeholders, and makes sure that business needs are met. The Product Owner replaces the project manager as the person most responsible for the success of the project ("the one throat to choke"). But they don't control the team's work, or the technical details of who does the work or how. That's decided by the team. Some project managers may have the domain knowledge and business experience, the analytical skills and the connections in the customer organization to meet the requirements of this role. But it's also likely to be played by an absentee business manager or sponsor, backed up by a customer proxy, a business analyst or someone else on the team without real responsibility or authority in the organization, creating potentially serious project risks and management problems. Some organizations have tried to solve this by sharing the role across two people: a project manager and a business analyst, working together to handle all of the Product Owner's responsibilities.
Project Manager as Scrum Master

It seems like the most natural path for a project manager is to become the team's Scrum Master, although there is a lot of disagreement over whether a project manager can be effective – and accepted – as a Scrum Master, whether they will accept the changes in responsibilities and authority, and whether they will be willing to change how they work with the team and the rest of the organization. The Scrum Master is a "process owner" and coach, not a project manager. They help the team – and the Product Owner – understand how to work in an Agile process framework, what their roles and responsibilities are, set up and guide the meetings and reviews, and coach team members through change and conflict. The Scrum Master works as a servant leader, a (nice) process cop, a secretary and a gofer. Somebody who supports the team and the Product Owner, "carries food and water" for them, tries to protect them from the world outside of the project and helps them solve problems. But the Scrum Master has no direct authority over the project or the team and does not make decisions for them, because Agile teams are supposed to be self-directing, self-organizing and self-managing. Of course that's not how things start off. Any group of people must work their way through Tuckman's four stages of team development: Forming, Storming, Norming and Performing. It's only when they reach the last stage that a group can effectively manage themselves. In the meantime, somebody (the Scrum Master / coach) has to help the team make decisions that they aren't ready to make on their own. It can take a long time for a team to reach this point, for people to learn to trust each other – and the organization – enough. And it may not last long before something outside of the team's control sets them back: a key person leaving or joining the team, a change in leadership, a shock to the project like a major change in direction or cuts to the budget.
Then they need to be led back to a high performing state again. Coaching the team and helping them out can be a full-time job in the beginning. After the team has got together and learned the process? Not so much. Which is why the Scrum Master is sometimes played part-time by a developer or sometimes even rotated between people on the development team. But even when the team is performing at a high level, there's more to managing an Agile project than setting up meetings, buying pizza and trying to stay out of the way. I've come to understand that Agile doesn't make a manager's job go away. If anything, it expands it.

Managing Upfront

First, there's all of the work that has to be done upfront at the start of a project – before Iteration Zero. Identifying stakeholders. Securing the charter. Negotiating the project budget and contract terms. Understanding and navigating the organization's bureaucracy. Figuring out governance and compliance requirements and constraints, and what the PMO needs. Working with HR, line managers and functional managers to put the team together, finding and hiring good people, getting space for them to work in and the tools that they need to work with. Lining up partners and suppliers and contractors. Contracting and licensing and other legal stuff. The Product Owner might do some of this work – but they can't do it all.

Managing Up and Out

Then there's the work that needs to be managed outside of the team. Agile development is insular, insulated and inward-looking. The team is protected from the world outside so they can focus on building features together. But the world outside is too important to ignore. Every development project involves more than designing and building software – often much more than the work of development itself.
Every project, even a small project, has dependencies and hand-offs that need to be coordinated with other teams in other places, with other projects, with specialists outside of the team, with customers and partners and suppliers. There is forward planning that needs to be done, setting and tracking drop-dead dates, defining and maintaining interfaces and integration points and landing zones. Agile teams move and respond to change quickly. These changes can have impacts outside of the team, on the customer, other teams and other projects, other parts of the organization, suppliers and partners. You can try using a Scrum of Scrums to coordinate with other Agile teams up to a point, but somebody still has to keep track of dependencies and changes and delays and orchestrate the hand-offs. Depending on the contracting model and your compliance or governance environment, formal change control may not go away either, at least not for material changes. Even if the Product Owner and the team are happy, somebody still has to take care of the paperwork to stay onside of regulatory traceability requirements and to stay within contract terms. There are a lot of people who need to know what’s going on in a project outside of the development team – especially in big projects in big organizations. Communicating outwards, to people outside of the team and outside of the company. Communicating upwards to management and sponsors, keeping them informed and keeping them onside. Task boards and burn downs and big visible charts on the wall might work fine for the team, but upper management and the PMO and other stakeholders need a lot more, they need to understand development status in the overall context of the project or program or business change initiative. And there’s cost management and procurement. Forecasting and tracking and managing costs, especially costs outside of development labor costs. Contracts and licensing need to be taken care of. Stuff needs to be bought. 
Bills need to be paid. Managing Risks Scrum done right (with XP engineering practices carefully sewn in) can be effective in containing many common software development risks: scope, schedule, requirements specification, technical risks. But there are other risks that still need to be managed, risks that come from outside of the team: program risks, political risks, partner risks and other logistical risks, integration risks, data quality risks, operational risks, security risks, financial risks, legal risks, strategic risks. "Scrum purposefully has many gaps, holes, and bare spots where you are required to use best practices – such as risk management." – Ken Schwaber. While the team and the Product Owner and Scrum Master are focused on prioritizing and delivering features and resolving technical issues, somebody has to look further out for risks, bring them up to the team, and manage the risks that aren't under the team's control. Managing the End Game And just like at the start of a project, when the project nears the end game, somebody needs to take care of final approvals and contractual acceptance, coordinate integration with other systems and with customers and partners, data setup and cleansing and conversion, documentation and training. Setting up the operations infrastructure, the facilities and hardware and connectivity, the people and processes and tools needed to run the system. Setting up a support capability. Packaging and deployment, roll out planning and roll back planning, the hand-off to the customer or to ops, community building and marketing and whatever else is required for a successful launch. Never mind helping make whatever changes to business workflows and business processes may be required with the new system. Project Management doesn't go away in Agile There are lots of management problems that need to be taken care of in any project. 
Agile spreads some management responsibilities around and down to the team, but doesn't make management problems go away. Projects can't scale, teams can't succeed, unless somebody – a project manager or the PMO or someone else with the authority and skills required – takes care of them. Reference: Agile – What's a Manager to Do? from our JCG partner Jim Bird at the Building Real Software blog....

JavaFX Tip 3: Use Callback Interface

As a UI framework developer it is part of my job to provide ways to customize the appearance and behavior of my controls. In many cases this is done by allowing the framework user to register a factory on a control. In the past I would have created a factory interface for this and provided one or more default implementations within the framework. These things are done differently in JavaFX, and I have started to embrace this approach for my own work. JavaFX uses a generic interface called javafx.util.Callback wherever a piece of code is needed that produces a result (R) for a given parameter (P). The interface looks like this: public interface Callback<P,R> { public R call(P param); } Advantages At first I didn't like using this interface because my code was losing verbosity: I no longer had self-explaining interface names. But in the end I realized that the advantages outweigh the lack of verbosity. The advantages being: We end up writing less code. No specialized interface, no default implementations. The developer using the API does not have to remember different factories; instead he can focus on the object that he wants to create and the parameters that are available to him. The Callback interface is a functional interface. We can use lambda expressions, which makes the code more elegant, and once again we have to write less code. Case Study The FlexGanttFX framework contains a control called Dateline for displaying (surprise) dates. Each date is shown in its own cell. The dateline can display different temporal units (ChronoUnit from java.time, and SimpleUnit from FlexGanttFX). A factory approach is used to build the cells based on the temporal unit shown. Before I used the callback approach I had the following situation: an interface called DatelineCellFactory with exactly one method createDatelineCell(). I was providing two default implementations called ChronoUnitDatelineCellFactory and SimpleUnitDatelineCellFactory. 
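To illustrate the direction, here is a minimal plain-Java sketch (no JavaFX dependency; the Dateline, DatelineCell and setCellFactory names here are simplified stand-ins for the real FlexGanttFX API) of what replacing a dedicated factory interface with the generic Callback looks like:

```java
// The generic Callback interface from the article, plus a hypothetical
// Dateline-like control that accepts it as a cell factory.
interface Callback<P, R> {
    R call(P param);
}

class DatelineCell {
    final String unit;
    DatelineCell(String unit) { this.unit = unit; }
}

class Dateline {
    private Callback<String, DatelineCell> cellFactory;

    // One generic factory slot: no per-unit factory interfaces needed.
    void setCellFactory(Callback<String, DatelineCell> factory) {
        this.cellFactory = factory;
    }

    DatelineCell createCell(String unit) {
        return cellFactory.call(unit);
    }
}

public class CallbackDemo {
    public static void main(String[] args) {
        Dateline dateline = new Dateline();
        // Callback has a single abstract method, so a lambda works directly.
        dateline.setCellFactory(unit -> new DatelineCell(unit));
        System.out.println(dateline.createCell("DAYS").unit); // prints DAYS
    }
}
```

Because the factory is just a function from parameter to result, swapping implementations is a one-line change at the call site.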
By using Callback I was able to delete all three interfaces / classes, and in the skin of the dateline I find the following two lines instead: dateline.setCellFactory(SimpleUnit.class, unit -> new SimpleUnitDatelineCell()); dateline.setCellFactory(ChronoUnit.class, unit -> new ChronoUnitDatelineCell()); Two lines of code instead of three files! I think this example speaks for itself. Reference: JavaFX Tip 3: Use Callback Interface from our JCG partner Dirk Lemmermann at the Pixel Perfect blog....

10 JDK 7 Features to Revisit, Before You Welcome Java 8

It's been almost a month since Java 8 was released, and I am sure all of you are exploring the new features of JDK 8. But before you completely delve into Java 8, it's time to revisit some of the cool features introduced in Java 7. If you remember, Java 6 offered little in terms of language features; it was all about JVM changes and performance, but JDK 7 did introduce some cool features which improved a developer's day-to-day tasks. Why am I writing this post now? Why am I talking about Java 1.7 when everybody is talking about Java 8? Well, I think not all Java developers are familiar with the changes introduced in JDK 7, and what better time to revisit an earlier version than before welcoming a new one? I don't see automatic resource management used by developers in daily life, even though IDEs now have content assist for it. Though I see programmers using String in switch and the diamond operator for type inference, there is still very little known about the fork/join framework, catching multiple exceptions in one catch block, or using underscores in numeric literals. So I took this opportunity to write a summary post revisiting these convenient changes, so we can adopt them into our daily programming life. There are a couple of good changes in NIO and the new File API, and lots of others at the API level, which are also worth a look. I am sure that, combined with Java 8 lambda expressions, these features will result in much better and cleaner code. Type inference JDK 1.7 introduced a new operator <>, known as the diamond operator, making type inference available for constructors as well. Prior to Java 7, type inference was only available for methods, and Joshua Bloch rightly predicted in Effective Java, 2nd Edition, that it would become available for constructors too. Prior to JDK 7, you had to type more to specify types on both the left and right hand side of an object creation expression, but now the type is only needed on the left hand side, as shown in the example below. 
Prior to JDK 7: Map<String, List<String>> employeeRecords = new HashMap<String, List<String>>(); List<Integer> primes = new ArrayList<Integer>(); In JDK 7: Map<String, List<String>> employeeRecords = new HashMap<>(); List<Integer> primes = new ArrayList<>(); So you have to type less in Java 7 while working with Collections, where we heavily use generics. See here for more detailed information on the diamond operator in Java. String in Switch Before JDK 7, only integral types could be used as the selector for a switch-case statement. In JDK 7, you can use a String object as the selector. For example, String state = "NEW"; switch (state) { case "NEW": System.out.println("Order is in NEW state"); break; case "CANCELED": System.out.println("Order is Cancelled"); break; case "REPLACE": System.out.println("Order is replaced successfully"); break; case "FILLED": System.out.println("Order is filled"); break; default: System.out.println("Invalid");} The equals() and hashCode() methods from java.lang.String are used in the comparison, which is case-sensitive. The benefit of using String in switch is that the Java compiler can generate more efficient code than for a nested if-then-else statement. 
See here for more detailed information on how to use String in a switch-case statement. Automatic Resource Management Before JDK 7, we needed to use a finally block to ensure that a resource was closed regardless of whether the try statement completed normally or abruptly; for example, while reading files and streams, we needed to close them in a finally block, which resulted in lots of boilerplate and messy code, as shown below: public static void main(String args[]) { FileInputStream fin = null; BufferedReader br = null; try { fin = new FileInputStream("info.xml"); br = new BufferedReader(new InputStreamReader(fin)); if (br.ready()) { String line1 = br.readLine(); System.out.println(line1); } } catch (FileNotFoundException ex) { System.out.println("Info.xml is not found"); } catch (IOException ex) { System.out.println("Can't read the file"); } finally { try { if (fin != null) fin.close(); if (br != null) br.close(); } catch (IOException ie) { System.out.println("Failed to close files"); } } } Look at this code: how many lines of boilerplate? Now in Java 7, you can use the try-with-resources feature to automatically close resources which implement the AutoCloseable or Closeable interface, e.g. streams, files, socket handles, database connections etc. JDK 7 introduces a try-with-resources statement, which ensures that each of the resources in try(resources) is closed at the end of the statement by calling the close() method of AutoCloseable. 
Now the same example in Java 7 will look like the code below, much more concise and cleaner: public static void main(String args[]) { try (FileInputStream fin = new FileInputStream("info.xml"); BufferedReader br = new BufferedReader(new InputStreamReader(fin))) { if (br.ready()) { String line1 = br.readLine(); System.out.println(line1); } } catch (FileNotFoundException ex) { System.out.println("Info.xml is not found"); } catch (IOException ex) { System.out.println("Can't read the file"); } } Since Java takes care of closing opened resources, including files and streams, this may mean no more leaked file descriptors and probably an end to file descriptor errors. Even JDBC 4.1 is retrofitted with AutoCloseable too. Fork Join Framework The fork/join framework is an implementation of the ExecutorService interface that allows you to take advantage of the multiple processors available in modern servers. It is designed for work that can be broken into smaller pieces recursively. The goal is to use all the available processing power to enhance the performance of your application. As with any ExecutorService implementation, the fork/join framework distributes tasks to worker threads in a thread pool. The fork/join framework is distinct because it uses a work-stealing algorithm, which is quite different from a producer-consumer design. Worker threads that run out of things to do can steal tasks from other threads that are still busy. The centre of the fork/join framework is the ForkJoinPool class, an extension of the AbstractExecutorService class. ForkJoinPool implements the core work-stealing algorithm and can execute ForkJoinTask processes. You can wrap code in a ForkJoinTask subclass like RecursiveTask (which can return a result) or RecursiveAction. 
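The recursive split-and-join idea described above can be sketched in a few lines. This is a minimal, illustrative RecursiveTask that sums a long array by halving it until chunks fall below a threshold (the threshold value here is arbitrary):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000; // illustrative chunk size
    private final long[] values;
    private final int from, to;

    SumTask(long[] values, int from, int to) {
        this.values = values; this.from = from; this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {           // small enough: sum directly
            long sum = 0;
            for (int i = from; i < to; i++) sum += values[i];
            return sum;
        }
        int mid = (from + to) / 2;
        SumTask left = new SumTask(values, from, mid);
        SumTask right = new SumTask(values, mid, to);
        left.fork();                            // schedule left half asynchronously
        return right.compute() + left.join();   // compute right, then wait for left
    }

    public static void main(String[] args) {
        long[] data = new long[10_000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;
        long sum = new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
        System.out.println(sum); // 50005000
    }
}
```

Idle workers steal the forked subtasks from busy threads' queues, which is what keeps all cores occupied without manual partitioning.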
See here for some more information on the fork/join framework in Java. Underscore in Numeric Literals In JDK 7, you can insert underscores '_' between the digits of a numeric literal (integral and floating-point literals) to improve readability. This is especially valuable for people who use large numbers in source files, and may be useful in the finance and computing domains. For example, int billion = 1_000_000_000; // 10^9 long creditCardNumber = 1234_4567_8901_2345L; //16 digit number long ssn = 777_99_8888L; double pi = 3.1415_9265; float pif = 3.14_15_92_65f; You can put underscores at convenient points to make the number more readable; for example, for large amounts, putting an underscore between every three digits makes sense, and for credit card numbers, which are 16 digits long, putting an underscore after every 4th digit makes sense, as that is how they are printed on cards. By the way, remember that you cannot put an underscore just after a decimal point, or at the beginning or end of a number. For example, the following numeric literals are invalid because of wrong placement of the underscore: double pi = 3._1415_9265; // underscore just after decimal point long creditcardNum = 1234_4567_8901_2345_L; //underscore at the end of number long ssn = _777_99_8888L; //underscore at the beginning See my post about how to use underscores in numeric literals for more information and use cases. Catching Multiple Exception Types in a Single Catch Block In JDK 7, a single catch block can handle more than one exception type. For example, before JDK 7, you needed two catch blocks to catch two exception types, although both performed an identical task: try {......} catch(ClassNotFoundException ex) { ex.printStackTrace(); } catch(SQLException ex) { ex.printStackTrace(); } In JDK 7, you can use one single catch block, with exception types separated by '|'. 
try {......} catch(ClassNotFoundException|SQLException ex) {ex.printStackTrace();} By the way, remember that the alternatives in a multi-catch statement cannot be related by subclassing. For example, a multi-catch statement like the one below will produce a compile-time error, because java.io.FileNotFoundException is a subclass of the alternative java.io.IOException: try {......} catch (FileNotFoundException | IOException ex) {ex.printStackTrace();} See here to learn more about improved exception handling in Java SE 7. Binary Literals with Prefix "0b" In JDK 7, you can express literal values in binary with the prefix '0b' (or '0B') for integral types (byte, short, int and long), similar to C/C++. Before JDK 7, you could only use octal values (with prefix '0') or hexadecimal values (with prefix '0x' or '0X'). int mask = 0b01010000101; or even better int binary = 0B0101_0000_1010_0010_1101_0000_1010_0010; Java NIO 2.0 Java SE 7 introduced the java.nio.file package, which, together with its related package java.nio.file.attribute, provides comprehensive support for file I/O and for accessing the default file system. It also introduced the Path class, which allows you to represent any path in the operating system. The new file system API complements the older one and provides several useful methods for checking, deleting, copying, and moving files. For example, you can now check whether a file is hidden in Java. You can also create symbolic and hard links from Java code. The new JDK 7 file API is also capable of searching for files using wildcards. You also get support for watching a directory for changes. I would recommend checking the Javadoc of the new file package to learn more about this interesting and useful feature. G1 Garbage Collector JDK 7 introduced a new garbage collector known as G1, which is short for "garbage first". 
The G1 garbage collector performs clean-up where there is the most garbage. To achieve this it splits the Java heap memory into many regions, as opposed to the three regions (new, old and permgen space) used prior to Java 7. It's said that G1 is quite predictable and provides greater throughput for memory-intensive applications. More Precise Rethrowing of Exceptions The Java SE 7 compiler performs more precise analysis of re-thrown exceptions than earlier releases of Java SE. This enables you to specify more specific exception types in the throws clause of a method declaration. Before JDK 7, re-throwing an exception was treated as throwing the type of the catch parameter. For example, suppose your try block can throw ParseException as well as IOException. In order to catch all exceptions and rethrow them, you would have to catch Exception and declare your method as throwing an Exception. This is a sort of obscure, non-precise throw, because you are throwing a general Exception type (instead of specific ones), and statements calling your method need to catch this general Exception. This becomes clearer in the following example of exception handling in code prior to Java 1.7: public void obscure() throws Exception{ try { new FileInputStream("abc.txt").read(); new SimpleDateFormat("ddMMyyyy").parse("12-03-2014"); } catch (Exception ex) { System.out.println("Caught exception: " + ex.getMessage()); throw ex; } } From JDK 7 onwards you can be more precise when declaring the types of exception in the throws clause of a method. This precision comes from the fact that, if you re-throw an exception from a catch block, you are actually throwing an exception type which: your try block can throw, has not been handled by any previous catch block, and is a subtype of one of the types declared as the catch parameter. This leads to improved checking for re-thrown exceptions. 
You can be more precise about the exceptions being thrown from the method, and you can handle them a lot better on the client side, as shown in the following example: public void precise() throws ParseException, IOException { try { new FileInputStream("abc.txt").read(); new SimpleDateFormat("ddMMyyyy").parse("12-03-2014"); } catch (Exception ex) { System.out.println("Caught exception: " + ex.getMessage()); throw ex; } } The Java SE 7 compiler allows you to specify the exception types ParseException and IOException in the throws clause of the precise() method declaration, even though the catch block catches and re-throws java.lang.Exception, the superclass of all checked exceptions, because the compiler can determine that only those two types can actually be re-thrown. Also, in some places you will see the final keyword used on the catch parameter, but that is not mandatory any more. That's all about what you can revise in JDK 7. All these new features of Java 7 are very helpful in your goal towards clean code and developer productivity. With lambda expressions introduced in Java 8, this goal of cleaner code in Java has reached another milestone. Let me know if you think I have left out any useful feature of Java 1.7 which you think should be here. P.S. If you love books then you may like the Java 7 New Features Cookbook from Packt Publishing as well. Reference: 10 JDK 7 Features to Revisit, Before You Welcome Java 8 from our JCG partner Javin Paul at the Javarevisited blog....

Android Shake to Refresh tutorial

In this post we want to explore another way to refresh our app UI, called Shake to Refresh. We all know the pull-to-refresh pattern that is implemented in several apps. In this pattern we pull our finger down along the screen and the UI is refreshed: Even though this pattern is very useful, we can use another pattern to refresh our UI, based on the smartphone sensors; we can call it Shake to Refresh. Instead of pulling down our finger, we shake our smartphone to refresh the UI: Implementation In order to enable our app to support the Shake to Refresh feature we have to use the smartphone sensors, specifically the motion sensors: the accelerometer. If you want more information about how to use sensors, you can have a look here. As said, we want the user to shake the smartphone to refresh, and at the same time we don't want the refresh process to start accidentally, or when the user simply moves his smartphone. So we have to implement some controls to be sure that the user is shaking the smartphone purposely. On the other hand, we don't want to implement this logic in the class that handles the UI, because it is not advisable to mix UI logic with other concerns, and by using a separate class we can re-use this "pattern" in other contexts. So we will create another class called ShakeEventManager. This class has to listen to sensor events: public class ShakeEventManager implements SensorEventListener { .. } so it implements SensorEventListener. Then we have to look for the accelerometer sensor and register our class as an event listener: public void init(Context ctx) { sManager = (SensorManager) ctx.getSystemService(Context.SENSOR_SERVICE); s = sManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER); register(); } and then: public void register() { sManager.registerListener(this, s, SensorManager.SENSOR_DELAY_NORMAL); } To trigger the refresh event on the UI, some conditions must be verified; these conditions guarantee that the user is purposely shaking his smartphone. 
The conditions are: the acceleration must be greater than a threshold level; a fixed number of acceleration events must occur; and the time between these events must fall within a fixed time window. We will implement this logic in the onSensorChanged method, which is called every time a new value is available. The first step is calculating the acceleration. We are interested in the maximum acceleration value on the three axes, and we want to remove the gravity component from the sensor values. So, as stated in the official Android documentation, we first apply a low-pass filter to isolate the gravity force and then a high-pass filter: private float calcMaxAcceleration(SensorEvent event) { gravity[0] = calcGravityForce(event.values[0], 0); gravity[1] = calcGravityForce(event.values[1], 1); gravity[2] = calcGravityForce(event.values[2], 2);float accX = event.values[0] - gravity[0]; float accY = event.values[1] - gravity[1]; float accZ = event.values[2] - gravity[2];float max1 = Math.max(accX, accY); return Math.max(max1, accZ); } where // Low pass filter private float calcGravityForce(float currentVal, int index) { return ALPHA * gravity[index] + (1 - ALPHA) * currentVal; } Once we know the max acceleration we implement our logic: @Override public void onSensorChanged(SensorEvent sensorEvent) { float maxAcc = calcMaxAcceleration(sensorEvent); Log.d("SwA", "Max Acc ["+maxAcc+"]"); if (maxAcc >= MOV_THRESHOLD) { if (counter == 0) { counter++; firstMovTime = System.currentTimeMillis(); Log.d("SwA", "First mov.."); } else { long now = System.currentTimeMillis(); if ((now - firstMovTime) < SHAKE_WINDOW_TIME_INTERVAL) counter++; else { resetAllData(); counter++; return; } Log.d("SwA", "Mov counter ["+counter+"]");if (counter >= MOV_COUNTS) if (listener != null) listener.onShake(); } }} Analyzing the code: at line 3 we simply calculate the acceleration, and then we check if it is greater than a threshold value (condition 1) (line 5). 
If it is the first movement (lines 7-8), we save the timestamp to check whether other events happen within the specified time window. If all the conditions are satisfied, we invoke a callback method defined in the callback interface: public static interface ShakeListener { public void onShake(); } Test app Now that we have implemented the shake event manager, we are ready to create a simple app that uses it. We can create a simple activity with a ListView that is refreshed when the shake event occurs: public class MainActivity extends ActionBarActivity implements ShakeEventManager.ShakeListener { ....@Override public void onShake() { // We update the ListView } } where at line 5 we update the UI, because this method is called only when the user is shaking his smartphone. Some final considerations: when the app is paused we have to unregister the sensor listener so that it no longer listens to events, which saves battery. On the other hand, when the app is resumed we register the listener again: @Override protected void onResume() { super.onResume(); sd.register(); }@Override protected void onPause() { super.onPause(); sd.deregister(); } Reference: Android Shake to Refresh tutorial from our JCG partner Francesco Azzola at the Surviving w/ Android blog....
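The three shake conditions above can be isolated from Android entirely and exercised as plain Java. This is a hedged sketch, not the article's class: the constant values are illustrative assumptions, and the sample time is passed in explicitly so the logic is testable without a device:

```java
// Plain-Java sketch of the shake-detection conditions: an acceleration above
// a threshold counts as one movement; enough movements inside a fixed time
// window count as a shake. Constants are illustrative, not from the article.
public class ShakeDetector {
    static final float MOV_THRESHOLD = 4.0f;   // m/s^2, assumed
    static final int MOV_COUNTS = 3;           // movements required, assumed
    static final long SHAKE_WINDOW_MS = 1_000; // time window, assumed

    private int counter;
    private long firstMovTime;

    /** Feed one acceleration sample; returns true when a shake is detected. */
    boolean onAcceleration(float maxAcc, long nowMillis) {
        if (maxAcc < MOV_THRESHOLD) return false;          // condition 1 failed
        if (counter == 0 || nowMillis - firstMovTime > SHAKE_WINDOW_MS) {
            counter = 1;                 // first movement, or window expired
            firstMovTime = nowMillis;
            return false;
        }
        counter++;                       // condition 3: still inside the window
        if (counter >= MOV_COUNTS) {     // condition 2: enough movements
            counter = 0;                 // reset for the next gesture
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        ShakeDetector d = new ShakeDetector();
        System.out.println(d.onAcceleration(5f, 0));   // false: 1st movement
        System.out.println(d.onAcceleration(5f, 200)); // false: 2nd movement
        System.out.println(d.onAcceleration(5f, 400)); // true: shake detected
    }
}
```

In the real ShakeEventManager the same decision runs inside onSensorChanged with System.currentTimeMillis(); injecting the clock here just makes the state machine deterministic to test.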

Programmatic Access to Sizes of Java Primitive Types

One of the first things many developers new to Java learn about is Java’s basic primitive data types, their fixed (platform independent) sizes (measured in bits or bytes in terms of two’s complement), and their ranges (all numeric types in Java are signed). There are many good online resources that list these characteristics and some of these resources are the Java Tutorial lesson on Primitive Data Types, The Eight Data Types of Java, Java’s Primitive Data Types, and Java Basic Data Types. Java allows one to programmatically access these characteristics of the basic Java primitive data types. Most of the primitive data types’ maximum values and minimum values have been available for some time in Java via the corresponding reference types’ MAX_VALUE and MIN_VALUE fields. J2SE 5 introduced a SIZE field for most of the types that provides each type’s size in bits (two’s complement). JDK 8 has now provided most of these classes with a new field called BYTES that presents the type’s size in bytes (two’s complement). DataTypeSizes.java package dustin.examples.jdk8;import static java.lang.System.out; import java.lang.reflect.Field;/** * Demonstrate JDK 8's easy programmatic access to size of basic Java datatypes. * * @author Dustin */ public class DataTypeSizes { /** * Print values of certain fields (assumed to be constant) for provided class. * The fields that are printed are SIZE, BYTES, MIN_VALUE, and MAX_VALUE. * * @param clazz Class which may have static fields SIZE, BYTES, MIN_VALUE, * and/or MAX_VALUE whose values will be written to standard output. 
*/ private static void printDataTypeDetails(final Class clazz) { out.println("\nDatatype (Class): " + clazz.getCanonicalName() + ":"); final Field[] fields = clazz.getDeclaredFields(); for (final Field field : fields) { final String fieldName = field.getName(); try { switch (fieldName) { case "SIZE" : // generally introduced with 1.5 (twos complement) out.println("\tSize (in bits): " + field.get(null)); break; case "BYTES" : // generally introduced with 1.8 (twos complement) out.println("\tSize (in bytes): " + field.get(null)); break; case "MIN_VALUE" : out.println("\tMinimum Value: " + field.get(null)); break; case "MAX_VALUE" : out.println("\tMaximum Value: " + field.get(null)); break; default : break; } } catch (IllegalAccessException illegalAccess) { out.println("ERROR: Unable to reflect on field " + fieldName); } } }/** * Demonstrate JDK 8's ability to easily programmatically access the size of * basic Java data types. * * @param arguments Command-line arguments: none expected. */ public static void main(final String[] arguments) { printDataTypeDetails(Byte.class); printDataTypeDetails(Short.class); printDataTypeDetails(Integer.class); printDataTypeDetails(Long.class); printDataTypeDetails(Float.class); printDataTypeDetails(Double.class); printDataTypeDetails(Character.class); printDataTypeDetails(Boolean.class); } } When executed, the code above writes the following results to standard output. 
The Output Datatype (Class): java.lang.Byte: Minimum Value: -128 Maximum Value: 127 Size (in bits): 8 Size (in bytes): 1Datatype (Class): java.lang.Short: Minimum Value: -32768 Maximum Value: 32767 Size (in bits): 16 Size (in bytes): 2Datatype (Class): java.lang.Integer: Minimum Value: -2147483648 Maximum Value: 2147483647 Size (in bits): 32 Size (in bytes): 4Datatype (Class): java.lang.Long: Minimum Value: -9223372036854775808 Maximum Value: 9223372036854775807 Size (in bits): 64 Size (in bytes): 8Datatype (Class): java.lang.Float: Maximum Value: 3.4028235E38 Minimum Value: 1.4E-45 Size (in bits): 32 Size (in bytes): 4Datatype (Class): java.lang.Double: Maximum Value: 1.7976931348623157E308 Minimum Value: 4.9E-324 Size (in bits): 64 Size (in bytes): 8Datatype (Class): java.lang.Character: Minimum Value: UPDATE: Note that, as Attila-Mihaly Balazs has pointed out in the comment below, the MIN_VALUE values showed for java.lang.Float and java.lang.Double above are not negative numbers even though these constant values are negative for Byte, Short, Int, and Long. For the floating-point types of Float and Double, the MIN_VALUE constant represents the minimum absolute value that can stored in those types. Although the characteristics of the Java primitive data types are readily available online, it’s nice to be able to programmatically access those details easily when so desired. I like to think about the types’ sizes in terms of bytes and JDK 8 now provides the ability to see those sizes directly measured in bytes.Reference: Programmatic Access to Sizes of Java Primitive Types from our JCG partner Dustin Marx at the Inspired by Actual Events blog....
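The reflection in the example above is only needed to iterate over all the wrapper classes generically; the same constants can of course be read directly as plain static fields (BYTES requires JDK 8):

```java
// Direct, reflection-free access to the size and range constants.
public class DirectSizes {
    public static void main(String[] args) {
        System.out.println(Integer.SIZE);     // 32 (bits)
        System.out.println(Integer.BYTES);    // 4
        System.out.println(Long.BYTES);       // 8
        System.out.println(Character.BYTES);  // 2
        System.out.println(Double.MAX_VALUE); // 1.7976931348623157E308
    }
}
```

Since the fields are compile-time constants, this form is what you would use in real code; the reflective version is mainly useful for printing a report like the one above.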

Thoughts on The Reactive Manifesto

Reactive programming is an emerging trend in software development that has gathered a lot of enthusiasm among technology connoisseurs during the last couple of years. After studying the subject last year, I got curious enough to attend the "Principles of Reactive Programming" course on Coursera (by Odersky, Meijer and Kuhn). Reactive advocates from Typesafe and others have created The Reactive Manifesto that tries to formulate the vocabulary for reactive programming and what it actually aims at. This post collects some reflections on the manifesto. According to The Reactive Manifesto, systems that are reactive: react to events – their event-driven nature enables the following qualities; react to load – focus on scalability by avoiding contention on shared resources; react to failure – resilient systems that are able to recover at all levels; react to users – honor response time guarantees regardless of load. Event-driven Event-driven applications are composed of components that communicate through sending and receiving events. Events are passed asynchronously, often using a push based communication model, without the event originator blocking. A key goal is to be able to make efficient use of system resources, not tie up resources unnecessarily and maximize resource sharing. Reactive applications are built on a distributed architecture in which message-passing provides the inter-node communication layer and location transparency for components. It also enables interfaces between components and subsystems to be based on loosely coupled design, thus allowing easier system evolution over time. Systems designed to rely on shared mutable state require data access and mutation operations to be coordinated by using some concurrency control mechanism, in order to avoid data integrity issues. Concurrency control mechanisms limit the degree of parallelism in the system. 
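This limit can be quantified: if a fraction p of the work is parallelizable and n processors are available, the achievable speedup is 1 / ((1 - p) + p / n). A small sketch (the 0.95 fraction is an illustrative assumption):

```java
// Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the
// parallelizable fraction of the work and n the number of processors.
public class Amdahl {
    static double speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    public static void main(String[] args) {
        // Even with 95% of the work parallelizable, the serial 5% dominates:
        System.out.println(speedup(0.95, 8));    // approx. 5.93
        System.out.println(speedup(0.95, 1024)); // approx. 19.64
        // The limit as n grows without bound is 1 / (1 - p) = 20.
    }
}
```

The takeaway: shrinking the serial (contended) fraction raises the ceiling far more than adding processors does, which is exactly why reactive designs avoid shared mutable state.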
Amdahl's law formulates clearly how reducing the parallelizable portion of the program code puts an upper limit on system scalability. Designs that avoid shared mutable state allow for higher degrees of parallelism, and thus reach higher degrees of scalability and resource sharing. Scalable System architecture needs to be carefully designed to scale out, as well as up, in order to be able to exploit the hardware trends of both increased node-level parallelism (increased number of CPUs and number of physical and logical cores within a CPU) and system-level parallelism (number of nodes). Vertical and horizontal scaling should work both ways, so an elastic system will also be able to scale in and down, thereby allowing operational cost structures to be optimized for lower demand conditions. A key building block for elasticity is a distributed architecture and the node-to-node communication mechanism, provided by message-passing, that allows subsystems to be configured to run on the same node or on different nodes without code changes (location transparency). Resilient A resilient system will continue to function in the presence of failures in one or more parts of the system, and in unanticipated conditions (e.g. unexpected load). The system needs to be designed carefully to contain failures in well-defined and safe compartments to prevent failures from escalating and cascading unexpectedly and uncontrollably. Responsive The Reactive Manifesto characterizes the responsive quality as follows: Responsive is defined by Merriam-Webster as "quick to respond or react appropriately". … Reactive applications use observable models, event streams and stateful clients. … Observable models enable other systems to receive events when state changes. … Event streams form the basic abstraction on which this connection is built. 
… Reactive applications embrace the order of algorithms by employing design patterns and tests to ensure a response event is returned in O(1) or at least O(log n) time regardless of load.

Commentary

If you’ve been actively following software development trends during the last couple of years, the ideas stated in the Reactive Manifesto may seem quite familiar to you. This is because the manifesto captures insights the software development community has learned in building internet-scale systems.

One such set of lessons stems from problems related to having centrally stored state in distributed systems. The tradeoffs of a strong consistency model in a distributed system have been formalized in the CAP theorem. CAP-induced insights led developers to consider alternative consistency models, such as BASE, trading strong consistency guarantees for availability, partition tolerance and scalability. Looser consistency models have been popularized in recent years, in particular by different breeds of NoSQL databases. An application’s consistency model has a major impact on its scalability and availability, so it would be good to address this concern more explicitly in the manifesto. The chosen consistency model is a cross-cutting trait on which all the application layers should uniformly agree. The manifesto does mention this concern, but since it’s such an important issue with subtle implications, it deserves either more elaboration or a reference to a more thorough discussion of the topic.

Event-driven is a widely used term in programming that can take on many different meanings and has multiple variations. Since it’s such an overloaded term, it would be good to define it more clearly and to characterize what exactly does and does not constitute event-driven in this context.
The authors clearly have event-driven architecture (EDA) in mind, but EDA is also something that can be achieved with different approaches. The same is true for “asynchronous communication”. In the Reactive Manifesto, “asynchronous communication” seems to imply message-passing, as in messaging systems or the Actor model, and not asynchronous function or method invocation.

The Reactive Manifesto adopts and combines ideas from many movements: the CAP theorem, NoSQL and event-driven architecture. It captures and amalgamates valuable lessons learned by the software development community in building internet-scale applications. The manifesto makes a lot of sense, and I can subscribe to the ideas presented in it. However, in a few places the terminology could be elaborated a bit and made more approachable to developers who don’t have extensive experience with scalability issues. Sometimes the worst thing that can happen to great ideas is that they get diluted by unfortunate misunderstandings!

Reference: Thoughts on The Reactive Manifesto from our JCG partner Marko Asplund at the practicing techie blog.
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy