Featured FREE Whitepapers

What's New Here?

scala-logo

Fun with function composition in Scala

The goal of this post is to show how a list of functions can be composed to create a single function, in the context of mapping a set of values using those functions. It’s a cute example that shows off some of the goodness that comes with functional programming in Scala. And, while this isn’t a tutorial, it might still be useful for people who are just getting into functional programming. We’ll start with the list of numbers 1 to 5 and some simple functions — one for adding 1, another for squaring, and third for adding 100. scala> val foo = 1 to 5 toList foo: List[Int] = List(1, 2, 3, 4, 5)scala> val add1 = (x: Int) => x + 1 add1: (Int) => Int = <function1>scala> val add100 = (x: Int) => x + 100 add100: (Int) => Int = <function1>scala> val sq = (x: Int) => x * x sq: (Int) => Int = <function1>We can then apply any of these functions to each element in the list foo by using the map function. scala> foo map add1 res0: List[Int] = List(2, 3, 4, 5, 6)scala> foo map add100 res1: List[Int] = List(101, 102, 103, 104, 105)scala> foo map sq res2: List[Int] = List(1, 4, 9, 16, 25)We can save the results of mapping all the values through add1, and then map the resulting list through sq. scala> val bar = foo map add1 bar: List[Int] = List(2, 3, 4, 5, 6)scala> bar map sq res3: List[Int] = List(4, 9, 16, 25, 36)Or, if we don’t care about the intermediate result, we can just keep on mapping, through both functions. scala> foo map add1 map sq res4: List[Int] = List(4, 9, 16, 25, 36)What we just did, above, was sq(add1(x)) for every x in the list foo. We could have instead composed the two functions, since sq(add1(x)) = sq?add1(x). Here’s what it looks like in Scala: scala> val sqComposeAdd1 = sq compose add1 sqComposeAdd1: (Int) => Int = <function1>scala> foo map sqComposeAdd1 res5: List[Int] = List(4, 9, 16, 25, 36)Of course, we can do this with more than two functions. scala> foo map add1 map sq map add100 res6: List[Int] = List(104, 109, 116, 125, 136)scala> foo map (add100 compose sq compose add1) res7: List[Int] = List(104, 109, 116, 125, 136)And so on. Now, imagine that you want the user of a program you’ve written to be able to select the functions they want to apply to a list of items, perhaps from a set of predefined functions you’ve provided plus perhaps ones they are themselves defining. So, here’s the really useful part: we can compose that arbitrary bunch of functions on the fly to turn them into a single function, without having to write out “compose … compose … compose…” or “map … map … map …” We do this by building up a list of the functions (in the order we want to apply them to the values) and then reducing them using the compose function. Equivalent to what we had above: scala> val fncs = List(add1, sq, add100) fncs: List[(Int) => Int] = List(<function1>, <function1>, <function1>)scala> foo map ( fncs.reverse reduce (_ compose _) ) res8: List[Int] = List(104, 109, 116, 125, 136)Notice the that it was necessary to reverse the list in order for the composition to be ordered correctly. If you don’t feel like doing that, you can use andThen in Scala. scala> foo map (add1 andThen sq andThen add100) res9: List[Int] = List(104, 109, 116, 125, 136)Which we can of course use with reduce as well. scala> foo map ( fncs reduce (_ andThen _) ) res10: List[Int] = List(104, 109, 116, 125, 136)Since functions are first class citizens (something we used several times above), we can assign the composed or andThened result to a val and use it directly. 
scala> val superFunction = fncs reduce (_ andThen _) superFunction: (Int) => Int = <function1>scala> foo map superFunction res11: List[Int] = List(104, 109, 116, 125, 136)This example is of course artificial, but the general pattern works nicely with much more complex/interesting functions and can provide a nice way of configuring a bunch of alternative functions for different use cases. Reference: Fun with function composition in Scala from our JCG partner Jason Baldridge at the Bcomposes blog. Related Articles :How Scala changed the way I think about my Java Code What features of Java have been dropped in Scala? Testing with Scala Things Every Programmer Should Know The Most Powerful JVM Language Available...
software-development-2-logo

This comes BEFORE your business logic!

This is post originally published by Ron Gross one of our JCG partners. It focuses in .NET technologies at times, but the underlying principles apply to all programming languages. So I guess hard-core Java developers will have to bear with it at moments. Enjoy! I thought it might be worthwhile to formulate a technical checklist for a software project – gather all the questions you need answered before you should begin coding the business logic itself. To most of these questions there are only one or two possible answers, and StackOverflow can help you choose between them if you don’t know already which is the best solution for you. It’s a long list, but I really believe all or most of these will bite you in the ass if you delay them. 1. Programming language / framework This is the first choice because it influences everything else. All of us have our favorite languages, and our degree of proficiency with them varies. Besides this factor (which may turn out to be huge), consider:Performance characteristics. This is probably not relevant today to over 95% of software projects, as most languages will have a reference implementation or two that will be fast enough – but don’t go writing embedded code in Ruby (I’d take this chance to refute once and for all the illusion that some people still maintain – C/C++ is not faster than .Net or Java, it’s just more predictable). Important 3rd party libraries. If your business application just has to have Lucene 2.4, and ports of Lucene to other languages are lacking in functionality, this pretty much limits you to Java. If you’re writing a GUI-rich application targeted at Windows, then .Net is probably your safest bet.2. Unit testingThis should include a mocking framework, though I usually tend to write integration tests more than unit tests. Think about integration with your test runner – for example, Resharper didn’t support running MSTest tests two years ago (when we were using it, mainly because we didn’t know any better). Unit tests are not enough – integration tests are the only thing that gives confidence in the actual end-to-end product. For strong TDD advocates: System Tests are also very valuable as they are the only thing that tests flows on the entire system, and not on a single component-path.3. Dependency Injection / IOC frameworkI’ve only recently started applying this technique heavily, and it’s a beauty. It allows writing isolated unit tests and easy mocking, and helps lifetime management (e.g. no need to hand code the Singleton pattern by hand). A good framework will ease your life, not complicate it. When implementing your choice of IOC framework, remember that wiring it up for integration tests is not the necessarily same wireup for actual production code.4. Presentation Tier Most projects need one, whether it’s a web or desktop application. 5. Data TierHow do you plan to store and access your data? Can your site be database-driven? Or do you need to go the NO-SQL path? You might want to combine both – use a database to store most of your data, but use another place to store performance-critical data. Object Relation Mapping – You usually will not want to hand-craft your SQL, but rather use an ORM (we’re all object oriented here, right?).6. Communication Layer  If your project have several independent components or services, you should choose a communication model. Do they communicate over the database, using direct RPC invocations, or message passing via a Message Bus? 7. 
LoggingLogging everything to a central database if in my experience the best solution. Use a logging framework (and preferably pick one that has a simple error level scheme and allows built-in log formatting). Make sue your logging doesn’t hurt your performance (opening a new connection to the DB for every log is a bad idea), but don’t prematurely optimize it. Do not scatter your logs – a unified logging scheme is crucial in analyzing application errors, don’t log some events to the DB and other to file. I find it useful to automatically fail unit tests that raise errors. Even if the class under test behaved as expected, it might have reported an error condition internally, in which case you want to know about it. Use an in-memory appender, collect all the error/fatal logs and assert there are none – except for tests in which you specifically feed your code erroneous input.8. ConfigurationDecide on a sensible API to access configuration from code. You can use your language’s native configuration mechanism or home grow your own. Decide on how to maintain configuration files. Who is responsible for updating them? Which configurations are mandatory, which are optional? Is the configuration itself (not the schema) version controlled?9. Release DocumentationRelease notes / changelog – A simple (source controlled) text file is usually enough, as it gives crucial information on what a build contains. Should include new features, bug fixes, new known issues, and of course how-to deployment instructions. Configuration documentation – especially on large teams, you should maintain a single place where all mandatory configurations are documented. This makes it easy for anyone to configure and run the system.10. Packaging and DeploymentHave your build process package an entire release into a self-contained package. Have your builds versioned – every commit to source control should increase the revision number, and the version should be integrated into the assemblies automatically – this ensures you always know what version you’re running. Depending on your IT staff and how complicated manual deployment is, you might want to invest in a deployment script – a script that gets a release package and does everything (or almost everything) need in order to deploy it. This is a prerequisite for writing effective system tests.11. ToolingSource control: SVN, TFS, Git, whatever. Choose one (according to your needs and budget) and put everything you develop under it. One painful issue is your branch management. Some prefer to work constantly on a single always-stable trunk, other prefer feature branches. The choice is affected by your chosen SCM tool, the size of your team, and the level of experience you have with the tool. Build System – Unit tests that are never or seldom run are hardly effective. Use TeamCity to make sure your code is always well tested (at least as well tested as you thought). IDE – Some programming languages have only one significant IDE, other have a few. (Note – I don’t really consider Visual Studio to be an IDE without Resharper) Bug tracking – have a simple place to collect and process bugs. Feature and backlog management – have an easy-to-access place that shows you and the entire team: What features are you currently working on, what tasks are left to do in order to complete features, what prioritized features are on the backlog – this is crucial to help you choose what to do next (I prefer the sprint-based approach) Documentation standard. 
It can be a wiki, shared folders, Google Docs, or (ugh) SharePoint, but you should decide on a single organizational scheme. I strongly suggest that you not send documents as attachments, because then you can’t tell when they change. Basic IT – Backup, shared storage, VPN, email, …Do you agree with this list? What did I miss? What can be delayed for later stages in the project? Reference: This comes BEFORE your business logic! from our JCG partner Ron Gross at A Quantum Immortal. Related Articles :Using FindBugs to produce substantially less buggy code Measuring Code Complexity Why Automated Tests Boost Your Development Speed Not doing Code Reviews? What’s your excuse? Java Tools: Source Code Optimization and Analysis How many bugs do you have in your code?...
software-development-2-logo

Measuring Code Complexity

Lately, development managers have put a lot of interest in measuring code quality. Therefore, things like code reviews and analysis tools have become very popular at identifying “Technical Debt” early. Several tools exist for Java: Sonar, JavaNCSS, Eclipse plugins; as well as other languages: Visual Studio Code Analysis, PHPDepend, among others. What is Technical Debt? From the Sonar site, technical debt is the cost (in man days) measured by the following formula: TD = (cost to fix duplications) + (cost to fix violations) + (cost to comment public API) + (cost to fix uncovered complexity) + (cost to bring complexity below threshold) Many organizations are actually resorting to this metric as a long term investment. Although very hard to quantify, reducing technical debt can reduce the total number of open bugs, improve code quality, lower developer ramp up time (thereby fighting Brooks law), and more importantly decrease the number of man-(hours|days) it takes to resolve an issue — A good investment. In this article, I want to focus on one factor of Technical Debt called “Code Complexity”. Code complexity in itself is very “complex” to measure. It is influenced by numerous factors such as: Average Hierarchy Height, Average Number of Derived Classes, Afferent Coupling, Efferent Coupling, number of “if” condition statements, and many others. Let’s talk about some of the most important metrics briefly in order to understand what these tools capture and how it’s measured. McCabe Cyclomatic Complexity (MCC) You’ve probably come across this one before. The McCabe Cyclomatic Metric was introduced by Thomas McCabe in 1976 (link to this paper at the bottom). It measures the number of independent paths (term taken from graph theory) through a particular method (let’s talk Java parlance, although the sample applies to whole programs or subroutines). For example, for a simple method that has no conditionals, the MCC is 1. Programs that have many conditionals are harder to follow, harder to test, and as a result feature a higher MCC. The MCC formula is: M = E – N + X where M is the McCabe Cyclomatic Complexity (MCC) metric, E is the number of edges, N is the number of nodes or decision points (conditional statements), and X is the number of exits (return statements) in the graph of the method. Quick Example:In this example, MCC = 3 A simpler method of computing the MCC is demonstrated in the equation below. If D is the number of decision points in the program, then M = D + 1 (Each decision point normally has two possible paths) As mentioned earlier, MCC also is useful in determining the testability of a method. Often, the higher the value, the more difficult and risky the method is to test and maintain. Some standard values of Cyclomatic Complexity are shown below:M = D + 1 Assesment1-10 not much risk11-20 moderate risk21-50 high risk51+ untestable, very high riskOne final word on MCC that also applies to most of the other metrics: each element in the formulae is assumed to have the same weight. In MCC’s case, both branches are assumed to be equally complex. However, in most cases this is not the case. Think of the if statement with code for only one branch–yet each branch is treated as having the same weight. Also, measures of expressions are all the same, even for those that contain many factors and terms. Be aware of this and be prepared, if your tool gives you the ability, to add weight to different branches. This metric is called an Extended McCabe Complexity Metric. 
Afferent Coupling (Ca) Afferent coupling is the number of other packages that depend upon classes within this package. This value is good indicator of how changes to classes in this package would influence other parts of the software. Efferent Coupling (Ce) Efferent coupling is the number of other packages that classes from this package depend upon. This value indicates how sensitive this package is for changes to other packages. Code with high Ca and high Ce is very hard to test and maintain, therefore, has very high complexity and is largely unstable. Instability (I) Instability is the ratio between efferent coupling (Ce) and the total package coupling (Ce + Ca) which is based on the following formula (Ce / (Ce + Ca)) and produces results in the range [0,1]. As I -> 0, this indicates a maximally stable package that is completely independent. On the other hand, as I -> 1 this indicates a totally instable package that has no incoming dependencies but depends upon other packages. ResourcesEvaluate your technical debt with Sonar OO Design Quality Metrics – An Analysis of Dependencies (PDF) A Complexity Measure (GDocs PDF)Reference: Measuring Code Complexity from our JCG partner Luis Atencio at Reflective Thought. Related Articles :Using FindBugs to produce substantially less buggy code Why Automated Tests Boost Your Development Speed Not doing Code Reviews? What’s your excuse? Java Tools: Source Code Optimization and Analysis How many bugs do you have in your code?...
spring-logo

Swapping out Spring Bean Configuration at Runtime

Most Java developers these days deal with Spring on a regular basis and there are lots of us out there that have become familiar with its abilities as well as its limitations. I recently came across a problem that I hadn’t hit before: introducing the ability to rewire a bean’s internals based on configuration introduced at runtime. This is valuable for simple configuration changes or perhaps swapping out something like a Strategy or Factory class, rather than rebuilding of a complex part of the application context. I was able to find some notes about how to do this, but I thought that some might find my notes and code samples useful, especially since I can confirm this technique works on versions of Spring back to 1.2.6. Unfortunately, not all of us are lucky enough to be on the latest and greatest of every library. Scope of the Problem The approach I’m going to outline is meant primarily to target changes to a single bean, though this code could easily be extended to change multiple beans. It could be invoked through JMX or some other UI exposed to administrators. One thing it does not cover is rewiring a singleton all across an application – this could conceivably be done via some reflection and inspection of the current application context, but is likely to be unsafe in most applications unless they have some way of temporarily shutting down or blocking all processing for a period while the changes are made all over the application. The Code Here’s the sample code. It will take a list of Strings which contains bean definitions, and wire them into a new temporary Spring context. You’ll see a parent context can be provided, which is useful in case your new bean definitions need to refer to beans already configured in the application. public static <T> Map<String, T> extractBeans(Class<T> beanType, List<String> contextXmls, ApplicationContext parentContext) throws Exception {List<String> paths = new ArrayList<String>(); try { for (String xml : contextXmls) { File file = File.createTempFile("spring", "xml"); // ... write the file using a utility method FileUtils.writeStringToFile(file, xml, "UTF-8"); paths.add(file.getAbsolutePath()); }String[] pathArray = paths.toArray(new String[0]); return buildContextAndGetBeans(beanType, pathArray, parentContext);} finally { // ... clean up temp files immediately if desired } }private static <T> Map<String, T> buildContextAndGetBeans(Class<T> beanType, String[] paths, ApplicationContext parentContext) throws Exception {FileSystemXmlApplicationContext context = new FileSystemXmlApplicationContext(paths, false, parentContext) { @Override // suppress refresh events bubbling to parent context public void publishEvent(ApplicationEvent event) { } };try { // avoid classloader errors in some environments context.setClassLoader(beanType.getClassLoader()); context.refresh(); // parse and load context Map<String, T> beanMap = context.getBeansOfType(beanType);return beanMap; } finally { try { context.close(); } catch (Exception e) { // ... log this } } }If you look at buildContextAndGetBeans(), you’ll see it does the bulk of the work by building up a Spring context with the supplied XML bean definition files. It then returns a map of the constructed beans of the type requested. Note: Since the temporary Spring context is destroyed, ensure your beans do not have lifecycle methods that cause them to be put into an invalid state when stopped or destroyed. Here’s an example of a Spring context that might be used to rewire a component. 
Imagine we have an e-commerce system that does fraud checks, but various strategies for checking for fraud. We may wish to swap these from our service class without having to stop and reconfigure the application, since we lose business when we do so. Perhaps we are finding a specific abuse of the system that would be better dealt with by changing the strategy used to locate fraudulent orders. Here’s a sample XML definition that could be used to rewire our FraudService. <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE beans PUBLIC "-//SPRING//DTD BEAN//EN" "http://www.springframework.org/dtd/spring-beans.dtd"> <beans> <bean id="fraudStrategy" class="com.example.SomeFraudStategory"> <!-- example of a bean defined in the parent application context that we can reference --> <property name="fraudRuleFactory" ref="fraudRuleFactory"/> </bean> </beans>And here is the code you could use to rewire your bean with a reference to the defined fraudStrategy, assuming you have it in a utility class called SpringUtils: public class FraudService implements ApplicationContextAware {private ApplicationContext context; // volatile for thread safety (in Java 1.5 and up only) private volatile FraudStrategy fraudStrategy;@Override // get a handle on the the parent context public void setApplicationContext(ApplicationContext context) { this.context = context; }public void swapFraudStategy(String xmlDefinition) throws Exception { List<Sting> definitions = Arrays.asList(xmlDefinition); Map<String, FraudStrategy> beans = SpringUtils.extractBeans(FraudStrategy.class, definitions, context); if (beans.size() != 1) { throw new RuntimeException("Invalid number of beans: " + beans .size()); } this.fraudStrategy = beans.values().iterator().next(); }}And there you have it! This example could be extended a fair bit to meet your needs, but I think it shows the fundamentals of how to create a Spring context on the fly, and use its beans to reconfigure your application without any need for downtime. Reference: Swapping out Spring Bean Configuration at Runtime from our JCG partners at the Carfey Software blog. Related Articles :Spring configuration with zero XML The evolution of Spring dependency injection techniques Spring MVC3 Hibernate CRUD Sample Application Aspect Oriented Programming with Spring AOP Spring MVC Development – Quick Tutorial...
software-development-2-logo

When Inheriting a Codebase, there are more questions than answers…

There will be that time in your life when you inherit someone else’s source tree or codebase and you’ll need to think of a plan to deal with this situation. Now, there are codebases and there are codebases, none are ever perfect, but some are just unbelievable… The idea of this blog is to go through some of the question’s you’ll need to ask in order to figure out what you can do and whether or not you’ll be able to make a difference or just end up picking away at the edges going nowhere. So, what sort of questions should you ask? The first thing to find out is what does the customer want to achieve? What kind of future do they envisage? Where do they want to go? After all it’s usually their code you’re working on. Assuming that your customer wants you to take their codebase forward, you need to consider how you’re going to achieve this. Will it be by Revolution or Evolution? Revolution or the big re-write approach always seems to fail. At best you re-write your application into that killer-app with all the new features, but in re-writing the basic functionality you’ll lose ground to your competitors. At worst, you’ll end up repeating the mistakes that were made the first time around and your codebase will still degenerate into a same kind of mess. I always prefer the evolution approach, fixing various sections of the code a bit at a time whilst keeping the overall application working. You should also consider how the application is built. Does it adhere to the 10 minute build rule? Is there a CI machine? Does it use Maven or Ant or something else entirely? If you’re not using Maven then I suggest trying to get that accepted as the way of doing things though actually implementing Maven may be a lot easier than getting approval for its use. Politics eh… Is the Codebase modularised or amorphous mass? Try to figure out whether or not anyone thought about splitting the codebase into distinct modules. Is it nicely layered or is everything stuffed into the default package? If it’s all bundled up together then try splitting out different modules – one at a time. Remember that Maven favours smaller modules than Ant. When splitting out modules choose whether to work on layer based (eg. DAOs and Business Logic) or functionality based (eg: ‘book an appointment’) boundaries. It this a dish of spaghetti code? If the code resembles a heap of spaghetti it’s not the end of the world, because you can ask the next question: Does the code have any unit tests? If it doesn’t have any unit tests (and this isn’t uncommon) then start writing some. The more tests you have the better. When appropriate, take each class and start creating some unit tests. This will give you the confidence to refactor the code, applying the usual SOLID principles. Start with the single responsibility principle and work from there applying it at a method and class level retesting each time to keep those tests passing. When using unit tests, the spaghetti code should soon unravel. If it has tests, are they any good? There are tests and there are tests. Figure out if the tests actually mean anything. Do they relate back to any scenarios or requirements? I’ve seen tests that simply said: String result = testInstance.doSomething(); assertNotNull(result);…and where result is a complex XML string. In this case the test is pretty meaningless as you need to verify that the XML string is the XML string you’re expecting, which usually means a lot of parsing and checking. So, if the tests are useless, then write some more. 
Did the previous owners follow good or bad practices? One of the worst practises you see is good old cut ‘n’ paste programming. Maybe a bug as arisen in one class or JSP and someone come along and found a bit of code that fixes it somewhere else in the codebase and copied it into the broken class. When this happens, the codebase has grown needlessly, becoming a more of a mess, and when it comes to fix a bug in the pasted code then you’ll fix it two, three, four etc times. THERE IS NO NEED FOR THIS. As the old sayings go: “Don’t Repeat Yourself” or “Duplication is Evil”… If you’re ever in this situation the very least you can do is to create a simple static helper class and re-use its code. Inheriting source code should be treated as an opportunity to apply the tools in your toolbox whether they be Test Driven Development, Single Responsibility Principal, continuous integration, dependency injection or whatever, but having said that, ultimately, you need to find out just what your client want you to do. Are they confident enough to allow the code to evolve and improve, or, perhaps because other programmers haven’t added any unit tests, are they afraid to change something in case you brake it? If they’re afraid of change then all you’ll do is just pick away at the edges and not get to really make a difference. At this point I really should talk about cultivating client trust, working together, professional relationships and communications, but always remember that in the back of you mind your should always know when to walk away… Reference: When Inheriting a Codebase, there are more questions than answers…  from our JCG partner Roger Hughes at the Captain Debug’s Blog.Related Articles:Why Automated Tests Boost Your Development Speed The Ten Minute Build How many bugs do you have in your code? Using FindBugs to produce substantially less buggy code Java Tools: Source Code Optimization and Analysis Not doing Code Reviews? What’s your excuse?...
aspectj-logo

Practical Introduction into Code Injection with AspectJ, Javassist, and Java Proxy

The ability to inject pieces of code into compiled classes and methods, either statically or at runtime, may be of immense help. This applies especially to troubleshooting problems in third-party libraries without source codes or in an environment where it isn’t possible to use a debugger or a profiler. Code injection is also useful for dealing with concerns that cut across the whole application, such as performance monitoring. Using code injection in this way became popular under the name Aspect-Oriented Programming (AOP). Code injection isn’t something used only rarely as you might think, quite the contrary; every programmer will come into a situation where this ability could prevent a lot of pain and frustration. This post is aimed at giving you the knowledge that you may (or I should rather say “will”) need and at persuading you that learning basics of code injection is really worth the little of your time that it takes. I’ll present three different real-world cases where code injection came to my rescue, solving each one with a different tool, fitting best the constraints at hand. Why You Are Going to Need It A lot has been already said about the advantages of AOP – and thus code injection – so I will only concentrate on a few main points from the troubleshooting point of view. The coolest thing is that it enables you to modify third party, closed-source classes and actually even JVM classes. Most of us work with legacy code and code for which we haven’t the source codes and inevitably we occasionally hit the limitations or bugs of these 3rd-party binaries and need very much to change some small thing in there or to gain more insight into the code’s behavior. Without code injection you have no way to modify the code or to add support for increased observability into it. Also you often need to deal with issues or collect information in the production environment where you can’t use a debugger and similar tools while you usually can at least manage somehow your application’s binaries and dependencies. Consider the following situations:You’re passing a collection of data to a closed-source library for processing and one method in the library fails for one of the elements but the exception provides no information about which element it was. You’d need to modify it to either log the offending argument or to include it in the exception. (And you can’t use a debugger because it only happens on the production application server.) You need to collect performance statistics of important methods in your application including some of its closed-source components under the typical production load. (In the production you of course cannot use a profiler and you want to incur the minimal overhead.) You use JDBC to send a lot of data to a database in batches and one of the batch updates fails. You would need some nice way to find out which batch it was and what data it contained.I’ve in fact encountered these three cases (among others) and you will see possible implementations later. You should keep the following advantages of code injection in your mind while reading this post:Code injection enables you to modify binary classes for which you haven’t the source codes The injected code can be used to collect various runtime information in environments where you cannot use the traditional development tools such as profilers and debuggers Don’t Repeat Yourself: When you need the same piece of logic at multiple places, you can define it once and inject it into all those places. 
With code injection you do not modify the original source files so it is great for (possibly large-scale) changes that you need only for a limited period of time, especially with tools that make it possible to easily switch the code injection on and off (such as AspectJ with its load-time weaving). A typical case is performance metrics collection and increased logging during troubleshooting You can inject the code either statically, at the build time, or dynamically, when the target classes are being loaded by the JVMMini Glossary You might encounter the following terms in relation to code injection and AOP: Advice The code to be injected. Typically we talk about before, after, and around advices, which are executed before, after, or instead of a target method. It’s possible to make also other changes than injecting code into methods, e.g. adding fields or interfaces to a class. AOP (Aspect Oriented Programming) A programming paradigm claiming that “cross-cutting concerns” – the logic needed at many places, without a single class where to implement them – should be implemented once and injected into those places. Check Wikipedia for a better description. Aspect A unit of modularity in AOP, corresponds roughly to a class – it can contain different advices and pointcuts. Joint point A particular point in a program that might be the target of code injection, e.g. a method call or method entry. Pointcut Roughly spoken, a pointcut is an expression which tells a code injection tool where to inject a particular piece of code, i.e. to which joint points to apply a particular advice. It could select only a single such point – e.g. execution of a single method – or many similar points – e.g. executions of all methods marked with a custom annotation such as @MyBusinessMethod. Weaving The process of injecting code – advices – into the target places – joint points.The Tools There are many very different tools that can do the job so we will first have a look at the differences between them and then we will get acquainted with three prominent representatives of different evolution branches of code injection tools. Basic Classification of Code Injection Tools I. Level of Abstraction How difficult is it to express the logic to be injected and to express the pointcuts where the logic should be inserted? Regarding the “advice” code:Direct bytecode manipulation (e.g. ASM) – to use these tools you need to understand the bytecode format of a class because they abstract very little from it, you work directly with opcodes, the operand stack and individual instructions. An ASM example: methodVisitor.visitFieldInsn(Opcodes.GETSTATIC, “java/lang/System”, “out”, “Ljava/io/PrintStream;”); They are difficult to use due to being so low-level but are the most powerful. Usually they are used to implement higher-level tools and only few actually need to use them. Intermediate level – code in strings, some abstraction of the classfile structure (Javassist) Advices in Java (e.g. 
AspectJ) – the code to be injected is expressed as syntax-checked and statically compiled JavaRegarding the specification of where to inject the code:Manual injection – you have to get somehow hold of the place where you want to inject the code (ASM, Javassist) Primitive pointcuts – you have rather limited possibilities for expressing where to inject the code, for example to a particular method, to all public methods of a class or to all public methods of classes in a group (Java EE interceptors) Pattern matching pointcut expressions – powerful expressions matching joint points based on a number of criteria with wildcards, awareness of the context (e.g. “called from a class in the package XY”) etc. (AspectJ)II. When the Magic Happens The code can be injected at different points in time:Manually at run-time – your code has to explicitly ask for the enhanced code, e.g. by manually instantiating a custom proxy wrapping the target object (this is arguably not true code injection) At load-time – the modification are performed when the target classes are being loaded by the JVM At build-time – you add an extra step to your build process to modify the compiled classes before packaging and deploying your applicationEach of these modes of injection can be more suitable at different situations. III. What It Can Do The code injection tools vary pretty much in what they can or cannot do, some of the possibilities are:Add code before/after/instead of a method – only member-level methods or also the static ones? Add fields to a class Add a new method Make a class to implement an interface Modify an instruction within the body of a method (e.g. a method call) Modify generics, annotations, access modifiers, change constant values, … Remove method, field, etc.Selected Code Injection Tools The best-known code injection tools are:Dynamic Java Proxy The bytecode manipulation library ASM JBoss Javassist AspectJ Spring AOP/proxies Java EE interceptorsPractical Introduction to Java Proxy, Javassist and AspectJ I’ve selected three rather different mature and popular code injection tools and will present them on real-world examples I’ve personally experienced. The Omnipresent Dynamic Java Proxy Java.lang.reflect.Proxy makes it possible to create dynamically a proxy for an interface, forwarding all calls to a target object. It is not a code injection tool for you cannot inject it anywhere, you must manually instantiate and use the proxy instead of the original object, and you can do this only for interfaces, but it can still be very useful as we will see. Advantages:It’s a part of JVM and thus is available everywhere You can use the same proxy – more exactly an InvocationHandler – for incompatible objects and thus reuse the code more than you could normally You save effort because you can easily forward all calls to a target object and only modify the ones interesting for you. If you were to implement a proxy manually, you would need to implement all the methods of the interface in questionDisadvantages:You can create a dynamic proxy only for an interface, you can’t use it if your code expects a concrete class You have to instantiate and apply it manually, there is no magical auto-injection It’s little too verbose Its power is very limited, it can only execute some code before/after/around a methodThere is no code injection step – you have to apply the proxy manually. 
Example I was using JDBC PreparedStatement’s batch updates to modify a lot of data in a database and the processing was failing for one of the batch updates because of integrity constraint violation. The exception didn’t contain enough information to find out which data caused the failure and so I’ve created a dynamic proxy for the PreparedStatement that remembered values passed into each of the batch updates and in the case of a failure it automatically printed the batch number and the data. With this information I was able to fix the data and I kept the solution in place so that if a similar problems ever occurs again, I’ll be able to find its cause and resolve it quickly. The crucial part of the code: LoggingStatementDecorator.java – snippet 1 class LoggingStatementDecorator implements InvocationHandler {private PreparedStatement target; ...private LoggingStatementDecorator(PreparedStatement target) { this.target = target; }@Override public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {try { Object result = method.invoke(target, args); updateLog(method, args); // remember data, reset upon successful execution return result; } catch (InvocationTargetException e) { Throwable cause = e.getTargetException(); tryLogFailure(cause); throw cause; }}private void tryLogFailure(Throwable cause) { if (cause instanceof BatchUpdateException) { int failedBatchNr = successfulBatchCounter + 1; Logger.getLogger("JavaProxy").warning( "THE INJECTED CODE SAYS: " + "Batch update failed for batch# " + failedBatchNr + " (counting from 1) with values: [" + getValuesAsCsv() + "]. Cause: " + cause.getMessage()); } } ...Notes: To create a proxy, you first need to implement an InvocationHandler and its invoke method, which is called whenever any of the interface’s methods is invoked on the proxy You can access the information about the call via the java.lang.reflect.* objects and for example delegate the call to the proxied object via method.invoke We’ve also an utility method for creating a proxy instance for a Prepared statement: LoggingStatementDecorator.java – snippet 2 public static PreparedStatement createProxy(PreparedStatement target) { return (PreparedStatement) Proxy.newProxyInstance( PreparedStatement.class.getClassLoader(), new Class[] { PreparedStatement.class }, new LoggingStatementDecorator(target)); };Notes:You can see that the newProxyInstance call takes a classloader, an array of interfaces that the proxy should implement, and the invocation handler that calls should be delegated to (the handler itself has to manage a reference to the proxied object, if it needs it)It is then used like this: Main.java ... PreparedStatement rawPrepStmt = connection.prepareStatement("..."); PreparedStatement loggingPrepStmt = LoggingStatementDecorator.createProxy(rawPrepStmt); ... loggingPrepStmt.executeBatch(); ...Notes:You see that we have to manually wrap a raw object with the proxy and use the proxy further onAlternative Solutions This problem could be solved in different ways, for example by creating a non-dynamic proxy implementing PreparedStatement and forwarding all calls to the real statement while remembering batch data but it would be lot of boring typing for the interface has many methods. The caller could also manually keep track of the data it has send to the prepared statement but that would obscure its logic with an unrelated concern. Using the dynamic Java proxy we get rather clean and easy to implement solution. 
The Independent Javassist JBoss Javassist is an intermediate code injection tool providing a higher-level abstraction than bytecode manipulation libraries and offering little limited but still very useful manipulation capabilities. The code to be injected is represented as strings and you have to manually get to the class-method where to inject it. Its main advantage is that the modified code has no new run-time dependencies, on Javassist or anything else. This may be the decisive factor if you are working for a large corporation where the deployment of additional open-source libraries (or just about any additional libraries) such as AspectJ is difficult for legal and other reasons. Advantages:Code modified by Javassist doesn’t require any new run-time dependencies, the injection happens at the build time and the injected advice code itself doesn’t depend on any Javassist API Higher-level than bytecode manipulation libraries, the injected code is written in Java syntax, though enclosed in strings Can do most things that you may need such as “advising” method calls and method executions You can achieve both build-time injection (via Java code or a custom Ant task to do execution/call advising) and load-time injection (by implementing your own Java 5+ agent [thx to Anton])Disadvantages:Still little too low-level and thus harder to use – you have to deal a little with structure of methods and the injected code is not syntax-checked Javassist has no tools to perform the injection and you thus have to implement your own injection code – including that there isn’t support for injecting the code automatically based on a pattern(See GluonJ below for a solution without most of the disadvantages of Javassist.) With Javassist you create a class, which uses the Javassist API to inject code int targets and run it as a part of your build process after the compilation, for example as I once did via a custom Ant task. Example We needed to add some simple performance monitoring to our Java EE application and we were not allowed to deploy any non-approved open-source library (at least not without going through a time-consuming approval process). We’ve therefore used Javassist to inject the performance monitoring code to our important methods and to the places were important external methods were called. The code injector: JavassistInstrumenter.java public class JavassistInstrumenter {public void insertTimingIntoMethod(String targetClass, String targetMethod) throws NotFoundException, CannotCompileException, IOException { Logger logger = Logger.getLogger("Javassist"); final String targetFolder = "./target/javassist";try { final ClassPool pool = ClassPool.getDefault(); // Tell Javassist where to look for classes - into our ClassLoader pool.appendClassPath(new LoaderClassPath(getClass().getClassLoader())); final CtClass compiledClass = pool.get(targetClass); final CtMethod method = compiledClass.getDeclaredMethod(targetMethod);// Add something to the beginning of the method: method.addLocalVariable("startMs", CtClass.longType); method.insertBefore("startMs = System.currentTimeMillis();"); // And also to its very end: method.insertAfter("{final long endMs = System.currentTimeMillis();" + "iterate.jz2011.codeinjection.javassist.PerformanceMonitor.logPerformance(\"" + targetMethod + "\",(endMs-startMs));}");compiledClass.writeFile(targetFolder); // Enjoy the new $targetFolder/iterate/jz2011/codeinjection/javassist/TargetClass.classlogger.info(targetClass + "." 
+ targetMethod + " has been modified and saved under " + targetFolder); } catch (NotFoundException e) { logger.warning("Failed to find the target class to modify, " + targetClass + ", verify that it ClassPool has been configured to look " + "into the right location"); } }public static void main(String[] args) throws Exception { final String defaultTargetClass = "iterate.jz2011.codeinjection.javassist.TargetClass"; final String defaultTargetMethod = "myMethod"; final boolean targetProvided = args.length == 2;new JavassistInstrumenter().insertTimingIntoMethod( targetProvided? args[0] : defaultTargetClass , targetProvided? args[1] : defaultTargetMethod ); } }Notes:You can see the “low-levelness” – you have to explicitly deal with objects like CtClass, CtMethod, explicitly add a local variable etc. Javassist is rather flexible in where it can look for the classes to modify – it can search the classpath, a particular folder, a JAR file, or a folder with JAR files You would compile this class and run its main during your build processJavassist on Steroids: GluonJ GluonJ is an AOP tool building on top of Javassist. It can use either a custom syntax or Java 5 annotations and it’s build around the concept of “revisers”. Reviser is a class – an aspect – that revises, i.e. modifies, a particular target class and overrides one or more of its methods (contrary to inheritance, the reviser’s code is physically imposed over the original code inside the target class). Advantages:No run-time dependencies if build-time weaving used (load-time weaving requires the GluonJ agent library or gluonj.jar) Simple Java syntax using GlutonJ’s annotation – though the custom syntax is also trivial to understand and easy to use Easy, automatic weaving into the target classes with GlutonJ’s JAR tool, an Ant task or dynamically at the load-time Support for both build-time and load-time weavingDisadvantages:An aspect can modify only a single class, you cannot inject the same piece of code to multiple classes/methods Limited power – only provides for field/method addition and execution of a code instead of/around a target method, either upon any of its executions or only if the execution happens in a particular context, i.e. when called from a particular class/methodIf you don’t need to inject the same piece of code into multiple methods then GluonJ is easier and better choice than Javassist and if its simplicity isn’t a problem for you then it also might be a better choice than AspectJ just thanks to this simplicity. The Almighty AspectJ AspectJ is a full-blown AOP tool, it can do nearly anything you might want, including the modification of static methods, addition of new fields, addition of an interface to a class’ list of implemented interfaces etc. The syntax of AspectJ advices comes in two flavours, one is a superset of Java syntax with additional keywords like aspect and pointcut, the other one – called @AspectJ – is standard Java 5 with annotations such as @Aspect, @Pointcut, @Around. The latter is perhaps easier to learn and use but also little less powerful as it isn’t as expressive as the custom AspectJ syntax. With AspectJ you can define which joint points to advise with very powerful expressions but it may be little difficult to learn them and to get them right. There is a useful Eclipse plugin for AspectJ development – the AspectJ Development Tools (AJDT) – but the last time I’ve tried it it wasn’t as helpful as I’d have liked. 
Advantages:Very powerful, can do nearly anything you might need Powerful pointcut expressions for defining where to inject an advice and when to activate it (including some run-time checks) – fully enables DRY, i.e. write once & inject many times Both build-time and load-time code injection (weaving)Disadvantages:The modified code depends on the AspectJ runtime library The pointcut expressions are very powerful but it might be difficult to get them right and there isn’t much support for “debugging” them though the AJDT plugin is partially able to visualize their effects It will likely take some time to get started though the basic usage is pretty simple (using @Aspect, @Around, and a simple pointcut expression, as we will see in the example)Example Once upon time I was writing a plugin for a closed-source LMS J2EE application having such dependencies that it wasn’t feasible to run it locally. During an API call, a method deep inside the application was failing but the exception didn’t contain enough information to track the cause of the problem. I therefore needed to change the method to log the value of its argument when it fails. The AspectJ code is quite simple: LoggingAspect.java @Aspect public class LoggingAspect {@Around("execution(private void TooQuiet3rdPartyClass.failingMethod(..))") public Object interceptAndLog(ProceedingJoinPoint invocation) throws Throwable { try { return invocation.proceed(); } catch (Exception e) { Logger.getLogger("AspectJ").warning( "THE INJECTED CODE SAYS: the method " + invocation.getSignature().getName() + " failed for the input '" + invocation.getArgs()[0] + "'. Original exception: " + e); throw e; } } }Notes:The aspect is a normal Java class with the @Aspect annotation, which is just a marker for AspectJ The @Around annotation instructs AspectJ to execute the method instead of the one matched by the expression, i.e. instead of the failingMethod of the TooQuiet3rdPartyClass The around advice method needs to be public, return an Object, and take a special AspectJ object carrying information about the invocation – ProceedingJoinPoint – as its argument and it may have an arbitrary name (Actually this is the minimal form of the signature, it could be more complex.) We use the ProceedingJoinPoint to delegate the call to the original target (an instance of the TooQuiet3rdPartyClass) and, in the case of an exception, to get the argument’s value I’ve used an @Around advice though @AfterThrowing would be simpler and more appropriate but this shows better the capabilities of AspectJ and can be nicely compared to the dynamic java proxy example aboveSince I hadn’t control over the application’s environment, I couldn’t enable the load-time weaving and thus had to use AspectJ’s Ant task to weave the code at the build time, re-package the affected JAR and re-deploy it to the server. Alternative Solutions Well, if you can’t use a debugger then your options are quite limited. The only alternative solution I could think of is to decompile the class (illegal!), add the logging into the method (provided that the decompilation succeeds), re-compile it and replace the original .class with the modified one. The Dark Side Code injection and Aspect Oriented Programming are very powerful and sometimes indispensable both for troubleshooting and as a regular part of application architecture, as we can see e.g. 
in the case of Java EE’s Enterprise Java Beans where the business concerns such as transaction management and security checks are injected into POJOs (though implementations actually more likely use proxies) or in Spring. However there is a price to be paid in terms of possibly decreased understandability as the runtime behavior and structure are different from what you’d expect based on the source codes (unless you know to check also the aspects’ sources or unless the injection is made explicit by annotations on the target classes such as Java EE’s @Interceptors). Therefore you must carefully weight the benefits and drawbacks of code injection/AOP – though when used reasonably, they do not obscure the program flow more than interfaces, factories etc. The argument about obscuring code is perhaps often over-estimated. If you want to see an example of AOP gone wild, check the source codes of Glassbox, a JavaEE performance monitoring tool (for that you might need a map not to get too lost). Fancy Uses of Code Injection and AOP The main field of application of code injection in the process of troubleshooting is logging, more exactly gaining visibility into what an application is doing by extracting and somehow communicating interesting runtime information about it. However AOP has many interesting uses beyond – simple or complex – logging, for example:Typical examples: Caching & et al (ex.: on AOP in JBoss Cache), transaction management, logging, enforcement of security, persistence, thread safety, error recovery, automatic implementation of methods (e.g. toString, equals, hashCode), remoting Implementation of role-based programming (e.g. OT/J, using BCEL) or the Data, Context, and Interaction architecture TestingTest coverage – inject code to record whether a line has been executed during test run or not Mutation testing (µJava, Jumble) – inject “random” mutation to the application and verify that the tests failed Pattern Testing – automatic verification that Architecture/Design/Best practices recommendations are implemented correctly in the code via AOP Simulate hardware/external failures by injecting the throwing of an exceptionHelp to achieve zero turnaround for Java applications – JRebel uses an AOP-like approach for framework and server integration plugins – namely its plugins use Javassist for “binary patching” Solving though problems and avoiding monkey-coding with AOP patterns such as Worker Object Creation (turn direct calls into asynchronous with a Runnable and a ThreadPool/task queue) and Wormhole (make context information from a caller available to the callee without having to pass them through all the layers as parameters and without a ThreadLocal) – described in the book AspectJ in Action Dealing with legacy code – overriding the class instantiated on a call to a constructor (this and similar may be used to break tight-coupling with feasible amount of work), ensuring backwards-compatibility o , teaching components to react properly on environment changes Preserving backwards-compatibility of an API while not blocking its ability to evolve e.g. by adding backwards-compatible methods when return types have been narrowed/widened (Bridge Method Injector – uses ASM) or by re-adding old methods and implementing them in terms of the new API Turning POJOs into JMX beansSummary We’ve learned that code injection can be indispensable for troubleshooting, especially when dealing with closed-source libraries and complex deployment environments. 
We’ve seen three rather different code injection tools – dynamic Java proxies, Javassist, AspectJ – applied to real-world problems and discussed their advantages and disadvantages because different tools may be suitable for different cases. We’ve also mentioned that code injection/AOP shouldn’t be overused and looked at some examples of advanced applications of code injection/AOP. I hope that you now understand how code injection can help you and know how to use these three tools. Source Codes You can get the fully-documented source codes of the examples from GitHub including not only the code to be injected but also the target code and support for easy building. The easiest may be: git clone git://github.com/jakubholynet/JavaZone-Code-Injection.git cd JavaZone-Code-Injection/ cat README mvn -P javaproxy test mvn -P javassist test mvn -P aspectj test(It may take few minutes for Maven do download its dependencies, plugins, and the actual project’s dependencies.) Additional ResourcesSpring’s introduction into AOP dW: AOP@Work: AOP myths and realities Chapter 1 of AspectJ in Action, 2nd. ed.Acknowledgements I would like to thank all the people who helped me with this post and the presentation including my colleges, the JRebel folk, and GluonJ’s co-author prof. Shigeru Chiba. Reference: Practical Introduction into Code Injection with AspectJ, Javassist, and Java Proxy from our JCG partner Jakub Holý at The Holy Java blog. Related Articles:Domain Driven Design with Spring and AspectJ Aspect Oriented Programming with Spring AspectJ and Maven Aspect Oriented Programming with Spring AOP 10 Tips for Proper Application Logging...

Best Of The Week – 2011 – W38

Hello guys, time for the “Best Of The Week” links for the week that just passed. Here are some links that drew JavaCodeGeeks’ attention:

* Client-Side Improvements in Java 6 and Java 7: This article discusses the improvements to the client and desktop parts of Java SE 6 and Java SE 7, including the new applet plug-in, the Java Deployment Toolkit, shaped and translucent windows, heavyweight-lightweight mixing, and Java Web Start.

* Unit and Functional Testing in Android: An article providing hands-on examples of how to implement unit and functional testing in Android. The author starts by giving an example of a typical unit test scenario and then adapts it to run as an Android test.

* The Six Pillars of Complete Developer Documentation: In this article, the characteristics of complete API documentation are investigated, namely Class Reference, Changelog, Code Samples, Code Playground, Developers Guide and Articles.

* Fork and Join: Java Can Excel at Painless Parallel Programming Too!: A detailed article examining the parallel programming features of Java, mainly the new Fork/Join framework. Also check out our own article Java Fork/Join for Parallel Programming.

* Memcached surpasses EhCache and Coherence in Java usage: The word is about caching solutions and how Memcached has managed to surpass traditional Java-based caches like Terracotta’s EhCache and Oracle’s Coherence.

* Comparing AppDynamics vs DynaTrace, CA Wily, Precise and HP: An exhaustive comparison of Application Performance Management (APM) products based on these criteria: Fit for Production Environments, Fit for Modern Application Architectures, Ease of Use, Scalability, Innovation and Cost of Ownership.

* Did CEP deliver for SOA and can it for Cloud?: In this article the author examines how Complex Event Processing (CEP) has been used within the context of SOA and how it could now be leveraged in the cloud.

* Why you really do Performance Management in production: The title says it all. The reason should not be to fix a CPU hot spot or improve garbage collection, but rather to understand the impact that the application’s performance has on customers and thus on business.

That’s all for this week. Stay tuned for more, here at JavaCodeGeeks.

Related Articles: Best Of The Week – 2011 – W37, Best Of The Week – 2011 – W36, Best Of The Week – 2011 – W35, Best Of The Week – 2011 – W34, Best Of The Week – 2011 – W32, Best Of The Week – 2011 – W31, Best Of The Week – 2011 – W30, Best Of The Week – 2011 – W29, Best Of The Week – 2011 – W28, Best Of The Week – 2011 – W27...

Android Tutorial: Gestures in your app

Gestures in mobile apps are pretty commonplace these days. Almost anyone with smartphone experience knows that pinching will usually zoom in on an image. Now, using gestures in Android is a snap.

OnTouch

All gestures are really handled with one event on an Activity. onTouch is called any time a View is touched on the screen. It provides all the information you’ll need to create some pretty interesting gestures. Android provides a few simple classes that allow you to quickly add some gestures to your application without really getting into the details of gestures. When onTouch is fired, you receive a MotionEvent and a reference to the View that was touched.

Explanation of a MotionEvent

The MotionEvent is what the Android OS returns, via the onTouch callback, every time the screen is touched. The MotionEvent object contains information about how many fingers are touching the screen and the velocity of a finger that is moving. It has all the details needed to handle any kind of gesture. Android also goes a step further and provides a few nice classes for some basic gesture detection, such as single-finger drag and drop. These give the developer an easy way to implement a few gestures without really getting your hands dirty, by using the SimpleOnGestureListener.

How to use the OnGestureListener

Using the OnGestureListener is very easy. In your activity, hook up your OnTouchListener to the root view of your activity (if you’re looking to have gestures on the full screen):

rootLayout.setOnTouchListener(new OnTouchListener() {
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        yourGestureListener.onTouchEvent(event);
        return false;
    }
});

Now all of the touch events on the root layout will be handled by your SimpleOnGestureListener. All that is left is to implement what your gesture listener does on certain touch events. By simply overriding the methods you need, you can implement gesture functionality without determining what type of gesture the user has performed. For example, you can override the onScroll method of the SimpleOnGestureListener to implement your own function for scrolling on your View.

Why the simple GestureDetector isn’t so great

The simple GestureDetector is great for any gesture that only requires basic touches. However, for any gesture involving two (or more) touches, you’re out of luck. If you really want to get into the thick of gestures, you need to look into the MotionEvent object a bit further.

Creating More Complicated Gestures

Let’s go in a different direction from the above example. Instead of just calling the gesture listener in our onTouch callback, let’s look at the MotionEvent object a bit more. Each MotionEvent we receive has its own action, such as DOWN, where a finger has been pressed on the view, or MOVE, where a finger has moved from one position on the two-dimensional plane to another. The MotionEvent also has an action that is very useful for handling multi-touch gestures, called POINTER_DOWN, which is used when a second touch is placed on the view. Using these actions, complex gestures can be created. For example, a pinch gesture (usually used to zoom in on a map) could work like this:

1. Received DOWN. We record the initial spot where the user touched the screen.
2. Received POINTER_DOWN. Since we know this could be a two-finger gesture, we record the spot of the second touch.
3. Received MOVE. Now we check whether the two pointers we have detected have moved towards each other.
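Putting those three steps together, a rough sketch of such a pinch detector might look like the following. The class itself and the onPinch hook are invented for this illustration; the MotionEvent actions and methods are the real Android API:

import android.view.MotionEvent;
import android.view.View;

public class PinchDetector implements View.OnTouchListener {

    private float startDistance = -1;

    @Override
    public boolean onTouch(View v, MotionEvent event) {
        switch (event.getActionMasked()) {
            case MotionEvent.ACTION_POINTER_DOWN:
                // Second finger down: record the initial distance between the two touches
                if (event.getPointerCount() == 2) {
                    startDistance = distance(event);
                }
                return true;
            case MotionEvent.ACTION_MOVE:
                // Two fingers moving: compare the current distance to the initial one
                if (startDistance > 0 && event.getPointerCount() == 2) {
                    onPinch(distance(event) / startDistance);
                }
                return true;
            case MotionEvent.ACTION_POINTER_UP:
            case MotionEvent.ACTION_UP:
                startDistance = -1; // gesture is over
                return true;
        }
        return false;
    }

    private float distance(MotionEvent event) {
        float dx = event.getX(0) - event.getX(1);
        float dy = event.getY(0) - event.getY(1);
        return (float) Math.sqrt(dx * dx + dy * dy);
    }

    // Hypothetical hook: scale > 1 means the fingers moved apart (zoom in),
    // scale < 1 means they were pinched together (zoom out)
    protected void onPinch(float scale) {
    }
}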
Note that within a MotionEvent object, it is possible to get the coordinates of the different touches by using the event.getX() method.

Gesture Predictions

Android also has a way to load specific gestures into your application. These specific gestures can be more distinctive than the ones mentioned above, such as drawing a Z symbol on the screen. Creating these types of gestures requires the emulator. You first draw your gestures in the emulator, and then you save them into a file. You can then take your created gesture file and load it into your application. By adding a GestureOverlayView to your layout, you can use your newly created gestures to do whatever you would like. Click here to get some more information on Gesture Predictions.

Reference: Android Tutorial: Adding Gestures to Your App from our JCG partner Isaac Taylor at the Programming Mobile blog.

Related Articles: “Android Full Application Tutorial” series, Android Google Maps Tutorial, Android Quick Preferences Tutorial, Android Text-To-Speech Application, Android HTTP Camera Live Preview Tutorial, Android Dependency Injection and Testing Libraries...

Configuration Management in Java EE

Configuration Management has a lot of relevance to Cloud Computing, as I tried to argue earlier. Actually, I would boldly claim that Configuration Management is a cornerstone of any serious attempt to squeeze a few dollars out of software. So what is Configuration Management and what are its key goals? Without over-complicating things, I think the two following goals are not too far from the truth:

- Establish a configuration in a predictable way that guarantees a correctly behaving system.
- Maintain configuration consistency as changes occur over time.

In other words, being able to manage change of behaviour in a reliable and secure way throughout the software lifecycle. But what is configuration? Is it source code? Is it loaded statically into the current class loader? Can it be changed at runtime? Is it persistent data? Is it tracked by VCS? Actually, where is it stored? Can every computer in the cluster access it? What happens when configuration changes? Do we care if changes are validated? Are the users changing configuration authorized to do so? How are changes propagated to cluster members? Will applications be notified about configuration changes?

Before going into this I would like to recall something about maintenance: “Maintenance typically consumes about 40 to 80 percent (60 percent average) of software costs. Therefore, it is probably the most important life cycle phase.” (Frequently Forgotten Fundamental Facts about Software Engineering)

In short (without arguing about numbers), OAM is difficult and costly, clearly more so in dynamic and elastic Cloud Computing environments. And from a productivity perspective: if we can design our software so that we can manage change of behaviour without bouncing the VCS and iterating a product release all the way through the deployment pipeline into production, maybe we should consider it, right? Obviously this would also make software more adaptable to different environments, the spirit and soul of Java.

I would argue that we use configuration to delay decisions, not only with respect to the environment and its resources, but also with respect to application/business-specific decisions. Business must be able to quickly configure offerings/rules that are not related to application server resources/infrastructure. Therefore I believe that the parts that must not change after release (behavioural integrity) are part of the program, and that configuration is a runtime behavioural invariant, strictly governed by program policies so that predictable system behaviour can be guaranteed, enforced on different levels depending on the rate of change – BUT (and here is the pitch) in a productive, non-intrusive and reliable way. The Open/Closed principle comes to mind.

In the context of Java EE, this definition still is not clear enough. Java EE 6 released the DataSourceDefinition annotation, which sort of assumes that configuration is code (illustrated by the sketch below). A bit more configuration flexibility is given by the Assembler/Deployer roles. Simply put, the intention is that the application (in particular its XML descriptors) can be modified just before deployment, possibly overriding hardcoded values. This approach has always puzzled me, but maybe it is a matter of perspective on how different people perceive what type of data is considered configuration. However, I have never in my career heard about, read about or met anyone who actually uses this mechanism as intended. And there may be good reasons for that.
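For reference, configuration-as-code with that annotation looks roughly like the following sketch; the connection values are of course invented for the example:

import javax.annotation.sql.DataSourceDefinition;
import javax.ejb.Stateless;

// All connection details are baked into the source at compile time
@DataSourceDefinition(
        name = "java:app/jdbc/shop-ds",
        className = "com.mysql.jdbc.jdbc2.optional.MysqlXADataSource",
        url = "jdbc:mysql://localhost:3306/shop",
        user = "shop",
        password = "secret")
@Stateless
public class OrderService {
    // business methods would go here
}

Changing the database host or password then means recompiling and redeploying the application – exactly the kind of coupling between program and configuration that the rest of this section argues against.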
In the Maven feedback loop, compilation and packaging practically go hand in hand – and almost every Maven project is intended to produce an artifact in the form of an archive. Descriptors are generated by Maven or statically tracked by the VCS. Either way, this process seals the application against further modification, unless the archive is unzipped and modified. But I cannot visualize a situation where it would be a good idea to open up a JAR file, modify a text file, repackage and redeploy (using proprietary tools, mind you – asadmin, wlst, etc.). Why? Consider what happens when a new release of the *authentic* archive comes out. The changes that the assembler/deployer made will either be overwritten or need to be re-applied. Because of this, it is arguably not a good idea to make ad-hoc changes to version-controlled files if those changes never make their way back to be tracked by the VCS. Even if they did, we would lose flexibility. It is worth mentioning that many open source projects sign releases with a digital signature so that security-conscious users can find a digital trust path to the tarball. How do you change configuration for such an archive without breaking the signature?

Consider the impact on development, where every developer may have a separate database tablespace for their integration tests. A clever developer probably builds some profile-sensitive Maven plugins to search/replace his private data in the deployment descriptors. But why should he be burdened with this, taking a turnaround hit whenever a configuration value changes, for example, between two JUnit tests (I dare not think what those tests would look like)? XML files alone cannot validate changes; we need a program to do that for us. Waiting 1-2 minutes for the application to deploy, only to discover that your values were invalid, and then doing it all over again would be a disaster for developer productivity.

If we look further at how software might be deployed using a stage-then-switch approach for clustered systems, deployment descriptors become even more problematic, since two versions of the same archive would be needed. And why should the production system be disturbed (upgraded), dealing with quiescing, because an unrelated value needed to change? Think about what happens when a value is rejected – do you change the value back (repackage the application, etc.) and roll back over the cluster, correct the value and try again? I don’t know… but I am starting to feel uneasy about maintaining SLA reliability and configuration consistency across the cluster now.

In the context of multi-tenancy, a flat name=value type of configuration also feels constraining. A configuration specification that is hierarchical or graph-like is a better fit for modeling tenants, enabling configuration compositions, etc.
Maybe something like this:

import javax.validation.constraints.Max;
import javax.validation.constraints.Min;
import javax.validation.constraints.Pattern;
import javax.validation.constraints.Size;

@Configuration
public class MysqlXADataSource {

    @Property(desc = "User name to use for connection authentication.")
    @Size(min = 6, max = 255)
    private String user = ""; // default value - hence optional property

    @Property(desc = "Password to use for connection authentication.")
    @Size(min = 6, max = 255)
    private String password = ""; // default value - hence optional property

    @Property(desc = "A JDBC URL.")
    @Pattern(regexp = "([a-zA-Z]{3,})://([\\w-]+\\.)+[\\w-]+(/[\\w- ./?%&=]*)?")
    private URL url; // required property

    @Property(desc = "Port number where a server is listening for requests.")
    @Min(0) @Max(65535)
    private Integer portNumber = 1521; // default value - hence optional property

    @Resource
    private List<ConfigurableItem> items; // configuration children
}

@Stateless
public class SessionBean {

    @Resource(name = "jdbc/mysql-ds")
    private DataSource ds;
}

This would be the application view. Note that no assumption is made about where and how configuration is instantiated, and we can fail fast by enforcing type safety at compile time using an annotation processor. This is my spontaneous reflection, and maybe it is too enterprisey. But I still think that both large and small applications would benefit from being configured at runtime, unaware of and separated from configuration sources (file, DB, LDAP, MIB, etc.) and how they are managed. I even think Java SE would benefit from Configuration Management as well.

There are many more aspects of Configuration Management to discuss, such as security, administration, notifications, schema registration/discovery, etc. But I am going to stop here for comments/reflections/opinions – are deployment descriptors a good way to manage configuration, or do we need something more sophisticated?

This post is related to the “[jsr342-experts] Re: Configuration” and “[jsr342-experts] Re: resource configuration” threads on the Java EE 7 Expert Group mailing list. Please feel free to comment here or on the mailing list.

Reference: Configuration Management in Java EE from our JCG partner Kristoffer Sjögren at the Deep Hacks blog.

Related Articles: When Clouds Clear, Developing and Testing in the Cloud, Failure Isolation and Recovery: Learning from High-Scale and Extreme-Scale Computing, App Engine Java Development with Netbeans, Things Every Programmer Should Know...

Database schema navigation in Java

An important part of jOOQ is jooq-meta, the database schema navigation module. This is used by the code generator to discover relevant schema objects. I was asked several times why I rolled my own instead of using other libraries, such as SchemaCrawler or SchemaSpy, and indeed it’s a pity I cannot rely on other stable third-party products. Here are some thoughts on database schema navigation:

Standards

The SQL-92 standard defines how an RDBMS should implement an INFORMATION_SCHEMA containing its dictionary tables. And indeed, some RDBMS do implement parts of the standard specification.

Close to the standard:
- HSQLDB: very close to the true standard
- Postgres: close to the standard, with some tweaks (also has proprietary dictionary tables)
- SQL Server: close to the standard but quite incomplete (also has proprietary dictionary tables)

Liberal interpretation of the standard:
- H2 (some backwards-incompatible changes, recently)
- MySQL (only since 5.0; also has proprietary dictionary tables)

Other RDBMS provide their own idea of dictionary tables. This is very tricky for schema navigation tools like jOOQ to get hold of. The dictionary table landscape can be described like this (my biased opinion):

Neat and well-documented dictionary tables:
- DB2: These dictionary tables somehow look like the standard, with different names. They feel intuitive.
- Oracle: In my opinion, has a better set of dictionary views than the ones proposed by the standard. Very easy to understand and well documented all over the Internet.
- SQLite: There are no dictionary tables, but the SQLite stored procedures are very simple to use. It’s a simple database, after all.

Hard to understand, poorly documented dictionary tables:
- Derby: Created the notion of conglomerates instead of using normal database-speak, such as relations, keys, etc.
- MySQL: the old mysql schema was quite a pain. Fortunately, this is no longer true as of MySQL 5.0.
- Ingres: Well… Ingres is an old database. Usability was not one of the main concerns in the ’70s…
- Sybase SQL Anywhere: Lots of objects that have to be joined in complicated relations. Documentation is scarce.
- Sybase ASE: Even more difficult than SQL Anywhere. Some data can only be obtained with “tricks”.

JDBC abstraction

The variety of dictionary tables seems to scream for standard abstraction. While the SQL-92 standard could in fact be implemented on most of these RDBMS, JDBC abstraction is even better. JDBC knows of the DatabaseMetaData object and allows for navigating database schemata easily (see the sketch below). Unfortunately, every now and then this API will throw a SQLFeatureNotSupportedException. There is no general rule about which JDBC driver implements how much of this API and when a workaround is needed. For jOOQ code generation, these facts make this API quite useless.
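To give a feeling for that API, here is a small sketch that walks tables and their foreign keys through DatabaseMetaData; the in-memory H2 URL and the PUBLIC schema name are placeholders for the example:

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;

public class MetaDataDemo {
    public static void main(String[] args) throws SQLException {
        // Placeholder URL - any JDBC-compliant database is navigated the same way
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo")) {
            DatabaseMetaData meta = conn.getMetaData();

            // List all tables in the PUBLIC schema
            try (ResultSet tables = meta.getTables(null, "PUBLIC", "%", new String[] { "TABLE" })) {
                while (tables.next()) {
                    String table = tables.getString("TABLE_NAME");
                    System.out.println("Table: " + table);

                    // For each table, list its imported (foreign) keys
                    try (ResultSet fks = meta.getImportedKeys(null, "PUBLIC", table)) {
                        while (fks.next()) {
                            System.out.println("  FK: " + fks.getString("FKCOLUMN_NAME")
                                    + " -> " + fks.getString("PKTABLE_NAME")
                                    + "." + fks.getString("PKCOLUMN_NAME"));
                        }
                    }
                }
            }
        }
    }
}

It is precisely calls like getImportedKeys that some drivers answer only partially, or with the SQLFeatureNotSupportedException mentioned above.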
Other tools

There are some other tools in the open source world, as mentioned previously. Here are some drawbacks of using those tools in jOOQ:
- Both tools that I know of are licensed under the LGPL, which is not nicely compatible with jOOQ’s Apache 2 license.
- Both tools navigate entity relationships very well, but seem to lack support for many non-standard constructs, such as UDTs, advanced stored procedure usage (e.g. returning cursors, UDTs, etc.), and ARRAYs.
- SchemaCrawler supports only 8 RDBMS; jOOQ now supports 12.
- Both tools are rather inactive. See here and here.

For more information, visit their sites: SchemaCrawler, SchemaSpy.

jooq-meta

Because of the above reasons, jOOQ ships with its own database schema navigation: jooq-meta. This module can be used independently as an alternative to JDBC’s DatabaseMetaData, SchemaCrawler or SchemaSpy. jooq-meta uses jOOQ-crafted queries to navigate database metadata, hence it is also part of the integration test suite. As an example, see how the Ingres foreign key relationships are navigated with jooq-meta:

Result<Record> result = create()
    .select(
        IirefConstraints.REF_CONSTRAINT_NAME.trim(),
        IirefConstraints.UNIQUE_CONSTRAINT_NAME.trim(),
        IirefConstraints.REF_TABLE_NAME.trim(),
        IiindexColumns.COLUMN_NAME.trim())
    .from(IICONSTRAINTS)
    .join(IIREF_CONSTRAINTS)
    .on(Iiconstraints.CONSTRAINT_NAME.equal(IirefConstraints.REF_CONSTRAINT_NAME))
    .and(Iiconstraints.SCHEMA_NAME.equal(IirefConstraints.REF_SCHEMA_NAME))
    .join(IICONSTRAINT_INDEXES)
    .on(Iiconstraints.CONSTRAINT_NAME.equal(IiconstraintIndexes.CONSTRAINT_NAME))
    .and(Iiconstraints.SCHEMA_NAME.equal(IiconstraintIndexes.SCHEMA_NAME))
    .join(IIINDEXES)
    .on(IiconstraintIndexes.INDEX_NAME.equal(Iiindexes.INDEX_NAME))
    .and(IiconstraintIndexes.SCHEMA_NAME.equal(Iiindexes.INDEX_OWNER))
    .join(IIINDEX_COLUMNS)
    .on(Iiindexes.INDEX_NAME.equal(IiindexColumns.INDEX_NAME))
    .and(Iiindexes.INDEX_OWNER.equal(IiindexColumns.INDEX_OWNER))
    .where(Iiconstraints.SCHEMA_NAME.equal(getSchemaName()))
    .and(Iiconstraints.CONSTRAINT_TYPE.equal("R"))
    .orderBy(
        IirefConstraints.REF_TABLE_NAME.asc(),
        IirefConstraints.REF_CONSTRAINT_NAME.asc(),
        IiindexColumns.KEY_SEQUENCE.asc())
    .fetch();

Conclusion

Once more it can be said that the world of RDBMS is very heterogeneous. Database abstraction in Java is established only to a certain degree, in technologies such as JDBC, Hibernate/JPA, and third-party libraries such as SchemaCrawler, SchemaSpy, and jooq-meta.

Reference: Database schema navigation in Java from our JCG partner at the “Java, SQL, and jOOQ” blog.

Related Articles: Java Persistence API: a quick intro…, GWT 2 Spring 3 JPA 2 Hibernate 3.5 Tutorial, JBoss 4.2.x Spring 3 JPA Hibernate Tutorial, Hibernate mapped collections performance problems...
