


Android and Jenkins: Continuous Integration

By using Jenkins, it's pretty easy to get a Continuous Integration server set up for an Android project. But before you dive into setting up the software itself, it's very helpful to have some basic concepts about a few different types of software that you will run into.

For those unaware, Continuous Integration is a way to improve your code by following the "fail fast" concept: if any bug or problem crops up in your application, you want to find it as early as possible. By building and testing your application frequently, you can do just that, and because the offending code was written recently, it will be easier to fix. What CI really does for you is two things:
- Compiles the application when any new changes are checked in
- Runs automated tests every time we recompile

So CI is really only helpful if you are growing your unit tests as well as your application. Without the tests to support your application, Continuous Integration quickly loses its usefulness. Jenkins is really an automated build server for a number of different types of applications. Jenkins itself is written in Java, so it's only natural for it to support Java projects at its core, which works out great for Android.

Jenkins Plugins
One of the really awesome things about Jenkins is its plugin capability. Jenkins alone can pull your software (from SVN) and build it, if you're a Java project, but that's really it. By adding a few plugins, you can add some more interesting and useful features. Without a handful of plugins, using Jenkins as a CI server wouldn't be possible. For Android, these two Jenkins plugins are incredibly useful:
- Android Emulator: allows automated unit testing using the emulator
- xUnit: improves the basic JUnit support of Jenkins

Ant
When using Jenkins, some basic knowledge of Ant is also necessary. If you're building Android projects with Eclipse, you likely never deal with the configuration that is required to build your application. With Jenkins, you have to do a bit of work to get it building properly by creating an Ant script for your app. I won't go into the details of actually creating an Ant script; you can find more information on Ant from the above link, and this blog post helps immensely when generating the script from scratch.

Unit Testing
Robotium is a framework that beefs up Android unit test capabilities. It allows much easier testing of UI elements within Android. If you are looking to write unit tests for your application, you should really check it out; a minimal test sketch follows this article.

I hope this article helps you get the basics of what's really needed for an Android CI server using Jenkins. To get into the details of truly setting the server up, check out the follow-up blog article.

Reference: Android and Jenkins: Continuous Integration from our JCG partner Isaac Taylor at the Programming Mobile blog.
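To give a flavour of the kind of UI test that would run on the Jenkins emulator, here is a minimal Robotium sketch. The activity under test (MainActivity), the button label and the expected text are hypothetical placeholders; the Solo calls are standard Robotium API.

import android.test.ActivityInstrumentationTestCase2;
import com.jayway.android.robotium.solo.Solo;

// MainActivity, the "Add" button and the "Item added" text are made-up examples;
// substitute your own activity and UI strings.
public class MainActivityTest extends ActivityInstrumentationTestCase2<MainActivity> {

    private Solo solo;

    public MainActivityTest() {
        super(MainActivity.class);
    }

    @Override
    protected void setUp() throws Exception {
        super.setUp();
        // Solo drives the UI on a device or emulator, which is why the
        // Android Emulator plugin matters on the Jenkins side.
        solo = new Solo(getInstrumentation(), getActivity());
    }

    public void testAddingAnItemShowsConfirmation() {
        solo.clickOnButton("Add");
        assertTrue("Expected confirmation text", solo.searchText("Item added"));
    }

    @Override
    protected void tearDown() throws Exception {
        solo.finishOpenedActivities();
        super.tearDown();
    }
}

A test like this runs as part of the Ant build on the CI server, so a UI regression fails the build just like a broken unit test would.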

A Classification of Tests

There are many ways of testing software. This post uses the five Ws to classify the different types of tests and shows how to use this classification.

Programmer vs Customer (Who)
Tests exist to give confidence that the software works as expected. But whose expectations are we talking about? Developers have different types of expectations about their code than users have about the application. Each audience deserves its own set of tests to remain confident enough to keep going.

Functionality vs Performance vs Load vs Security (What)
When not specified, it's assumed that what is being tested is whether the application functions the way it's supposed to. However, we can also test non-functional aspects of an application, like security.

Before Writing Code vs After (When)
Tests can be written after the code is complete to verify that it works (test-last), or they can be written first to specify how the code should work (test-first). Writing the test first may seem counter-intuitive or unnatural, but there are some advantages:
- When you write the tests first, you'll guarantee that the code you later write will be testable (duh). Anybody who's written tests for legacy code will surely acknowledge that that's not a given if you write the code first.
- Writing the tests first can prevent defects from entering the code, and that is more efficient than introducing, finding, and then fixing bugs.
- Writing the tests first makes it possible for the tests to drive the design. By formulating your test, in code, in a way that looks natural, you design an API that is convenient to use. You can even design the implementation.

Unit vs Integration vs System (Where)
Tests can be written at different levels of abstraction. Unit tests test a single unit (e.g. a class) in isolation. Integration tests focus on how the units work together. System tests look at the application as a whole. As you move up the abstraction level from unit to system, you require fewer tests.

Verification vs Specification vs Design (Why)
There can be different reasons for writing tests. All tests verify that the code works as expected, but some tests can start their lives as specifications of how yet-to-be-written code should work. In the latter situation, the tests can be an important tool for communicating how the application should behave. We can even go a step further and let the tests also drive how the code should be organized. This is called Test-Driven Design (TDD).

Manual vs Automated Tests (How)
Tests can be performed by a human or by a computer program. Manual testing is most useful in the form of exploratory testing. When you ship the same application multiple times, like with releases of a product or sprints of an Agile project, you should automate your tests to catch regressions. The amount of software you ship will continue to grow as you add features, and your testing effort will do so as well. If you don't automate your tests, you will eventually run out of time to perform all of them.

Specifying Tests Using the Classification
With the above classifications we can be very specific about our tests.
For instance:
- Tests in TDD are automated (how) programmer (who) tests that design (why) functionality (what) at the unit or integration level (where) before the code is written (when).
- BDD scenarios are automated (how) customer (who) tests that specify (why) functionality (what) at the system level (where) before the code is written (when).
- Exploratory tests are manual (how) customer (who) tests that verify (why) functionality (what) at the system level (where) after the code is written (when).
- Security tests are automated (how) customer (who) tests that verify (why) security (what) at the system level (where) after the code is written (when).

By being specific, we can avoid semantic diffusion, like when people claim that "tests in TDD do not necessarily need to be written before the code". A small test-first sketch follows this article.

Reducing Risk Using the Classification
Sometimes you can select a single alternative along a dimension. For instance, you could perform all your testing manually, or you could use tests exclusively to verify. For other dimensions, you really need to cover all the options. For instance, you need tests at the unit and integration and system level, and you need to test for functionality and performance and security. If you don't, you are at risk of not knowing that your application is flawed. Proper risk management, therefore, mandates that you shouldn't rely exclusively on one type of test. For instance, TDD is great, but it doesn't give the customer any confidence. You should carefully select a range of test types to cover all aspects that are relevant for your situation.

Reference: A Classification of Tests from our JCG partner Remon Sinnema at the Secure Software Development blog.
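As a small illustration of the test-first style discussed above, here is a sketch of a JUnit test written before the production code exists. PriceCalculator and its priceFor() method are hypothetical names invented for the example; the point is that the test specifies the API we would like to have.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// PriceCalculator does not exist yet; this automated programmer test drives its design.
public class PriceCalculatorTest {

    @Test
    public void ordersOfAtLeastOneHundredEurosGetTenPercentDiscount() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(90.0, calculator.priceFor(100.0), 0.001);
    }

    @Test
    public void smallerOrdersArePaidInFull() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(50.0, calculator.priceFor(50.0), 0.001);
    }
}

In the terminology of the article, this is an automated (how) programmer (who) test that designs (why) functionality (what) at the unit level (where) before the code is written (when).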

Domain Modeling: Naive OO Hurts

I've read a post recently on two ways to model the data of a business domain. My memory is telling me it was Ayende Rahien, but I can't find it on his blog.

One way is full-blown object-relational mapping. Entities reference each other directly, and the O/R mapper automatically loads data for you as you traverse the object graph. To obtain the Product for an OrderLine, you just call line.getProduct() and are good to go. Convenient and deceptively transparent, but it can easily hurt performance if you aren't careful enough.

The other way is what that post may have called a document-oriented mapping. Each entity has its ID and its own data. It may have some nested entities if it's an aggregate root (in domain-driven design terminology). In this case, OrderLine only has a productId, and if you want the product you have to call ProductRepository.getProduct(line.getProductId()). It's a bit less convenient and requires more ceremony, but thanks to its explicitness it is also much easier to optimize and to steer clear of performance pitfalls. (A small sketch contrasting the two styles follows at the end of this article.)

So much for the aforementioned post. I recently had an opportunity to reflect more on this matter on a real-world example.

The Case
The light dawned when I set out to create a side project for a fairly large system that has some 200+ Hibernate mappings and about 300 tables. I knew I only needed some 5 core tables, but for the sake of consistency and avoiding duplication I wanted to reuse mappings from the big system. I knew there could be more dependencies on things I didn't need, and I did not have a tool to generate a dependency graph. I just included the first mapping, watched Hibernate errors for unmapped entities, added mappings, checked the error log again... and so on, until Hibernate was happy to know all the referenced classes. When I finished, the absolutely minimal and necessary "core" in my side project had 110 mappings. As I was adding them, I saw that most of them were pretty far from the core and from my needs. They corresponded to little subsystems somewhere on the rim. It felt like running a strong magnet over a messy workplace full of all kinds of metal things when all I needed was two nails.

Pain Points
It turns out that such object orientation is more pain than good. Having unnecessary dependencies in a spin-off reusing the core is just one pain point, but there are more. It also makes my side project slower and makes it use too many resources: I have to map 100+ entities and have them supported in my second-level cache. When I'm loading some of the core entities, I also pull in many things I don't need: numerous fields used in narrow contexts, even entire eagerly-loaded entities. At all times I have too much data floating around.

Such a model also makes development much slower. Build and tests take longer, because there are many more tables to generate, mappings to scan, etc. It's also slower for another reason: if a domain class references 20 other classes, how does a developer know which are important and which are not? In any case it may lead to very long and somewhat unpleasant classes. What should be the core becomes a gigantic black hole sucking in the entire universe. When an unaware newbie goes near, most of the time he will either sink trying to understand everything, or simply break something, unaware of all the links in his context and unable to understand all the links present in the class. Actually, even seniors can be deceived into making such mistakes. The list is probably much longer.

Solution?
There are two issues here. How did that happen?
I'm writing a piece of code that's pretty distant from the core, but could really use those two new attributes on this core entity. What is the fastest way? Obvious: add two new fields to the entity. Done.

I need to add a bunch of new entities for a new use case that are strongly related to a core entity. The shortest path? Easy, just reference a few entities from the core. When I need those new objects and I already have the old core entity, Hibernate will do the job of loading the new entities for me as I call the getters. Done.

Sounds natural, and I can see how I could have made such mistakes a few years ago, but the trend could have been stopped or even reversed. With proper code reviews and retrospectives, the team might have found a better way earlier. Given some slack and good will, it might even have refactored the existing code.

Is there a better way to do it? Let's go back to the opening section on the two ways to map domain classes: full-blown ORM vs. document/aggregate style. Today I believe full-blown ORM may be a good thing for a fairly small project with a few closely related use cases. As soon as we branch out new, bigger chunks of functionality and introduce more objects, they should become their own aggregates. They should never be referenced from the core, even though they themselves may orbit around and have a direct link to the core. The same is true for the attributes of core entities: if something is needed in a faraway use case, don't spoil the core mapping with a new field. Even introduce a new entity if necessary.

In other words, learn from domain-driven design. If you haven't read the book by Eric Evans yet, go do it now. It's likely the most worthwhile and influential software book I've read to date.

Reference: Domain Modeling: Naive OO Hurts from our JCG partner Konrad Garus at the Squirrel's blog.
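As promised above, here is a minimal, purely illustrative Java sketch of the two mapping styles from the opening section. The classes and the repository are hypothetical and all persistence annotations are omitted; only the shape of the references matters.

// Style 1: full-blown ORM. OrderLine holds a direct reference; the mapper
// loads the Product (possibly lazily) as the object graph is traversed.
class OrderLineWithReference {
    private Product product;

    Product getProduct() {
        return product; // convenient, but may trigger hidden queries and drags the Product mapping along
    }
}

// Style 2: document/aggregate style. OrderLine only knows the Product's ID;
// loading the Product is an explicit, visible decision made by the caller.
class OrderLineWithId {
    private long productId;

    long getProductId() {
        return productId;
    }
}

class ProductRepository {
    Product getProduct(long productId) {
        // explicit lookup; callers decide when (and whether) to pay the cost of loading
        return null; // stand-in for the actual query
    }
}

class Product {
}

The second style is more ceremony at the call site, but it keeps the core mapping free of references to every satellite subsystem, which is exactly the problem described in the "Pain Points" section.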

Devops and Maintenance go together like Apple Pie and Ice Cream

One of the things I like about devops is that it takes on important but neglected problems in the full lifecycle of a system: making sure that the software is really ready to go into production, getting it into production, and keeping it running in production.

Most of what you read and hear about devops comes from online startups and is about getting to market faster and building tight feedback loops with Continuous Delivery and Continuous Deployment. But devops is even more important in keeping systems running: in maintenance, sustaining engineering and support.

Project teams working on the next new new thing can gloss over the details of how the software will actually run in production, how it will be deployed and how it should be hardened. If they miss something, the problems won't show up until the system starts to get used by real customers for real business under real load, which can be months after the system is launched, by which time the system might already have been handed over to a sustaining engineering team to keep things turning. This is when priorities change. The system always has to work. You can't ignore production: you're dragged down into the mucky details of what it takes to keep a system running. The reality is that you can't maintain a system effectively without understanding operational issues and without understanding and working with the people who operate and support the system and its infrastructure.

Developers on maintenance teams and Ops are both measured on:
- System reliability and availability
- Cycle time / turnaround on changes and fixes
- System operations costs
- Security and compliance

Devops tools and practices and ideas are the same tools and practices and ideas that people maintaining a system also need:
- Version control and configuration management to track everything that you need to build and test and deploy and run the system
- Fast and simple and repeatable build and deployment to make changes safe and cheap
- Monitoring and alerting and logging to make support and troubleshooting more effective
- Developers and operations working together to investigate and solve problems and to understand and learn together in blameless postmortems, building and sharing a culture of trust and transparency

Devops isn't just for online startups. Devops describes the reality that maintenance and sustaining engineering teams wish they were working in: an alternative to late nights trying to get another software release out and hoping that this one will work; to fire fighting in the dark; to ass covering and finger pointing; to filling out ops tickets and other bullshit paperwork. A reason to get up in the morning.

The dirty secret is that as developers most of us will spend most of our careers maintaining software, so more of us should learn more about devops and start living it.

Reference: Devops and Maintenance go together like Apple Pie and Ice Cream from our JCG partner Jim Bird at the Building Real Software blog.

Version control branching strategies

Almost two years ago we started a new project (related to SOA/BPM infrastructure) at a large telco organization. The project is going very well and has been running in production since last summer. We are developing more and more modules and things are going well. Now it's time to transfer the knowledge to the client's developers. The main things that we have to teach them are naming conventions, source code, architecture and administration tasks. As we will be on the same team for a few more months, first of all we have to explain our version control branching strategy to them. So, in this post I will explain our approach to version control branching, an approach that we defined after spending a lot of hours on conversations and reading books.

Facts:
- We have many individual/autonomous applications that are related to each other.
- Each application depends on at least one other application (based on WSDLs).
- We have no quality assurance manager and no release manager, so we do all the work related to design, development, release management and administration tasks.
- We have defined 3 types of testing (related to a release):
  - Unit testing: testing that can be done by us (developers) without communicating with external systems (like CRM, Billing, etc.).
  - Integration testing: testing that includes communication with external systems (Billing, CRM, etc.) and is done by developer(s).
  - User Acceptance Test: end-to-end testing that is performed by the client without us.

Rules:
- We follow the unstable branching strategy. This means that trunk contains the latest code, regardless of how stable it is. Branching is used only for release candidates, releases, bugfixes and experimental code.
- Use consistent tag-naming schemes for the tags:
  - Branch tags always start with 'BT_'
  - Release candidate prefix is 'RC_'
  - Release prefix is 'R_'
  - Bugfix prefix is 'BF_'
  - Experimental code prefix is 'EC_'
  - Developer tags start with 'DEV_'
  - Branch names always start with 'BR_'
- A tag is always created on trunk:
  - Before creating a branch: <branchName>_BASE (e.g. RC_2_0_ApplicationName_BASE)
  - Before merging a branch to trunk: <branchTagName>_PMB (PMB: Pre Merge Branch)
  - After merging a branch to trunk: <branchTagName>_AMB (AMB: After Merge Branch) (e.g. RC_2_0_ApplicationName_AMB)
- Always tag in branches after doing work, with the suffix '--counter', and always increase the counter (e.g. BT_RC_2_0--1_ApplicationName).
- Merge back frequently; the fewer changes there are to the trunk, the easier they are to merge.
- Create a Wiki page for each application and write details for each individual tag and branch.
- Record tags in the related issues/bugs in your bug tracking system.

The above rules/steps are for only one application, but they are the same for all applications.

OK, it's time for a simple release management example. As is often the case, an image is better than a thousand words. (This is not always true: try to find an image that explains wisdom. I am sure you cannot. So many books have been written, but still we cannot understand or define it. So sad for humankind.) Let's get back to something simpler, like release management. So, check the following scenario.

Release Candidate scenario of application PName:
- Development is done on TRUNK and all developers work there.
- The application is at the stable release 1_9 and, after a lot of changes, wants to go to release 2_0.
- The underscore is used instead of a dot to comply with most version control software variants (CVS does not accept dots in tags).
- When the developer finishes his unit testing and as many integration tests as he can accomplish, he decides that the application is ready for integration and acceptance tests, so he creates a tag on trunk named 'RC_2_0_PName_BASE'. This is the release candidate timestamp on TRUNK, as tags are timestamps.
- Release Candidate 2_0 is branched using the name 'BR_RC2_0_PName'.
- During Integration Test an issue is raised. It is fixed on the branch and then tagged as BT_RC2_0--1_PName.
- The developer decides that it is better to merge the changes back to TRUNK, while the rest of the development team continues to work towards RC3_0. So, two things must be done:
  - Commits are paused for the PName application (using a loud announcement like "Go for coffee, I must merge to trunk").
  - He tags TRUNK with RC2_0--1_PName_PMB.
- The developer merges the branch to TRUNK, does the required commits (if any) to have a valid application on TRUNK, and creates a new tag on TRUNK named RC2_0--1_PName_AMB. He then announces "Go back to work" and unpauses commits on the PName application.
- During Acceptance Test an issue is raised. The code needs a few modifications; after a few commits, the issue is resolved and committed on the branch. The branch is then tagged as BT_RC2_0--2_PName. No merge to trunk here (the developer decides, as he is the release manager in our case).
- During Acceptance Test another issue is raised. The code needs a few modifications; after a few commits, the issue is resolved and committed on the branch. The branch is then tagged as BT_RC2_0--3_PName.
- Acceptance Tests are completed against the modified code. Now release "2_0" is ready. The developer creates a tag BT_R2_0_PName on the branch, which is exactly the same code as BT_RC2_0--3_PName.
- Now the developer must merge the code to TRUNK. First he pauses the commits on the trunk and then creates a tag named R2_0_PName_PMB.
- The developer merges the changes to trunk, does any additional required commits, creates another tag named R2_0_PName_AMB and afterwards unpauses the commits on TRUNK.
- While the application (release 2_0) is running in production, another issue is raised that needs a quick fix. The problem is fixed on the branch and a tag is created on the branch with the name BT_RC2_0_1_PName. Additionally the quick fix is also tagged as a release with the tag name BT_R2_0_1_PName.
- The developer must (in most cases) merge the changes back to TRUNK while the rest of the development continues to work towards RC3_0. So, he pauses the commits on the application and tags TRUNK with R2_0_1_PName_PMB.
- The developer merges the changes from the branch to trunk, does the required commits to have a valid application on TRUNK and creates a new tag on TRUNK named R2_0_1_PName_AMB.
- Development on TRUNK continues towards RC3_0, and the work never ends...

The above rules/steps refer to only one application, but they are the same for all applications. Please read the first post of this series again to understand the rules. I know that you will need explanations on the scenario, so please ask. A few important points:
- Branch tags always start with 'BT_'
- Release candidate prefix is 'RC_'
- Release prefix is 'R_'
- Release numbers are separated with an underscore (e.g. 2_0)
- Tags that are not releases but are related to a release candidate use 2 dashes '--' (e.g. 2_0--1)

Using this procedure we can go back to any release of PName that we want (using the branch tags) and continue the branch with a quick fix, or even create a new branch off the branch.
I know that there are better and more elegant procedures, but this is the procedure we follow and it works in our small development team. It is simple and requires no additional tools, but it needs strong communication between developers, an issue tracker (we use MantisBT) and developers who always follow the rules. You may also need a wiki (we use MediaWiki) to record all the tags for each application. The release notes are recorded in the issue tracker.

Without rules and a release management procedure, even a small application can become very complex after a few production releases. Always define rules and make sure they are accepted by all developers. If there is a hole in the defined procedure, the developers will find it and exploit it for sure, because now the developers are the users of the product (the product being the release management procedure), and users are always unpredictable.

Reference: Version control branching strategies 1/2, Version control branching strategies 2/2 from our JCG partner Adrianos Dadis at the Java, Integration and the virtues of source blog.

Web Service security and the human dimension of SOA roadmap

In most non-trivial SOA landscapes, keeping track of the constantly evolving integrations among systems can be hard unless there is a clearly identified way to publish and find the appropriate pieces of information. An overview of the IT landscape, defining what is currently or will be connected to what, is a prerequisite for being able to maintain the environment. Its absence typically leads to a feeling of "Spaghetti Oriented Environment" and reluctance to start anything big.

This statement sounds obvious, but it is not always taken into account in practice. Some organizations either do not have such centralized control of integration in place or have stopped using it because it "just got in the way of everything". At best, this means that the integration information is kept in the heads of some key individuals, which is risky. More often, teams in such places do not dare update the service contracts "in case something is still relying on them" and instead duplicate them any time an update is needed, which runs counter to the purpose of SOA. Sometimes a good idea only needs a few steps back to be applied correctly. In this post I explain why I think the need for an SOA roadmap should motivate the presence of security access restrictions on most Web services, including non-sensitive ones.

Why is such a simple idea hard in practice?
Several factors can motivate teams to skip this important documentation step:
- Urgency of other important short-term tasks and the feeling that the team is constantly "extinguishing fires", not having time for anything else
- Lack of a clearly identified central repository where such information is accessed and published (such as an SOA registry or repository), or lack of usage of it
- Lack of centralized governance overseeing the integrations

From the human factors point of view, this situation can be worsened by the "I have enough already" syndrome. Within complex multi-team/multi-project environments, individuals already overwhelmed by the problems at hand typically do not take the initiative of hunting for hard-to-find (and hard-to-solve) dependency problems with other projects. We need to anticipate this and proactively assist those teams, keeping in mind that the other problems they are dealing with are of course important as well. The root cause of the above is a feeling that it is easier to skip the validation/documentation steps of the integration whenever possible. We have to reverse this feeling by advertising the value of centralized integration information as well as raising the difficulty of implementing undocumented integrations.

What we need
We need an easy-to-use process that collects, validates and publishes current and future dependencies among systems. A key aspect is to keep it simple and close to the people who will actually use it, in a "just enough governance" fashion. The four main components seem to be:
1. A clear procedure for requesting a new integration or updating an existing one. This includes validation from both business and technical perspectives, ensuring that the environment remains as clean and as future-proof as possible. If an EA effort is in place, most of those requests come from and to the EA team, which makes this step trivial! In practice, such requests will also come from project teams when they identify a required dependency during the detailed design or implementation phase.
2. A clearly identified and easy-to-access repository where the current and planned integrations can be looked up.
This repository must include versioning of each future dependency as well as a deprecation/decommissioning plan.
3. A team responsible for updating the central repository and keeping the roadmap up to date. This would typically be the EA team, if available.
4. At the technical level, the impossibility of performing an integration if the above three components have not been involved. This avoids "phantom dependencies" that remain hidden until a contract update triggers a problem.

This fourth component should in practice be an enterprise-wide IT principle stating that each Web Service implementation must require security authorization of the calling application. This does not preclude other security mechanisms when required by the service, for example transporting a ticket with the identity of the human user initiating the original business action (both REST and SOAP allow the presence of several simultaneous security tokens). The implementation of this principle must be made easy, typically by attaching technical documentation and code samples to the IT principle. Because we do not expect colleagues to be hacking each other, this can take a very low-risk approach; the point is just to make sure it is easier to involve the EA team than to put in place a phantom dependency.

When using SOAP, my recommendation would be to use a simple WS-UsernameToken policy and associate one username/password pair with each client application. When using REST, a well-known mechanism is HMAC: hashing part of the request together with a nonce and/or an expiration date (this mechanism is similar to the one used by Amazon S3). A minimal HMAC signing sketch follows this article.

Conclusion
In this post, I have tried to explain why I think a simple security policy, systematically put in place on each Web Service, helps keep track of the IT landscape and ensures that no "phantom dependencies" exist out of sight of the SOA governance team. The implementation of this security policy must be simple to do, supported by helper documents, and not very strong: just enough to ensure the EA team is aware of all integration implementations.

Reference: Web Service security and the human dimension of SOA roadmap from our JCG partner Svend Vanderveken at the Svend blog.
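As an illustration of the HMAC approach mentioned above, here is a minimal Java sketch that signs a canonical request string with HmacSHA256 (the same primitive used by the S3-style scheme). The layout of the canonical string and the shared secret are hypothetical; a real scheme would pin down exactly which parts of the request are signed.

import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class RequestSigner {

    // Signs the canonical request string with the per-application secret.
    public static String sign(String canonicalRequest, byte[] secret) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret, "HmacSHA256"));
        byte[] signature = mac.doFinal(canonicalRequest.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(signature); // Base64 from Java 8's java.util
    }

    public static void main(String[] args) throws Exception {
        // The canonical string would typically include the HTTP method, the path,
        // an expiration timestamp and/or a nonce, so that replayed requests can be rejected.
        String canonicalRequest = "GET\n/orders/42\nexpires=1700000000\nnonce=a1b2c3";
        byte[] secret = "per-application-shared-secret".getBytes(StandardCharsets.UTF_8);
        System.out.println(sign(canonicalRequest, secret));
    }
}

The calling application sends the signature alongside the request; the service recomputes it with the secret it associates with that application, which gives the EA team exactly the per-caller visibility the article argues for.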

Outbound Passwords

Much has been written on how to securely store passwords. This sort of advice deals with the common situation where your users present their passwords to your application in order to gain access. But what if the roles are reversed, and your application is the one that needs to present a password to another application? For instance, your web application must authenticate with the database server before it can retrieve data. Such credentials are called outbound passwords.

Outbound Passwords Must Be Stored Somewhere
Outbound passwords must be treated like any other password. For instance, they must be as strong as any password. But there is one exception to the usual advice about passwords: outbound passwords must be written down somehow. You can't expect a human to type in a password every time your web application connects to the database server. This begs the question of how we're supposed to write the outbound password down.

Storing Outbound Passwords In Code Is A Bad Idea
The first thing that may come to mind is to simply store the outbound password in the source code. This is a bad idea. With access to source code, the password can easily be found using a tool like grep. But even access to binary code gives an attacker a good chance of finding the password. Tools like javap produce output that makes it easy to go through all strings, and since the password must be sufficiently strong, an attacker can just concentrate on the strings with the highest entropy and try those as passwords. To add insult to injury, once a hard-coded password is compromised, there is no way to recover from the breach without patching the code!

Solution #1: Store Encrypted Outbound Passwords In Configuration Files
So the outbound password must be stored outside of the code, and the code must be able to read it. The most logical place, then, is to store it in a configuration file. To prevent an attacker from reading the outbound password, it must be encrypted using a strong encryption algorithm, like AES. But now we're faced with a different version of the same problem: how does the application store the encryption key?

One option is to store the encryption key in a separate configuration file, with stricter permissions set on it. That way, most administrators will not be able to access it. This scheme is certainly not 100% safe, but at least it will keep casual attackers out. A more secure option is to use key management services, perhaps based on the Key Management Interoperability Protocol (KMIP). In this case, the encryption key is not stored with the application, but in a separate key store. KMIP also supports revoking keys in case of a breach.

Solution #2: Provide Outbound Passwords During Start Up
An even more secure solution is to only store the outbound password in memory. This requires that administrators provide the password when the application starts up. You can even go a step further and use a split-key approach, where multiple administrators each provide part of a key, while nobody knows the whole key. This approach is promoted in the PCI DSS standard. Providing keys at start up may be more secure than storing the encryption key in a configuration file, but it has a big drawback: it prevents automatic restarts.
The fact that humans are involved at all makes this approach impractical in a cloud environment.

Creating Outbound Passwords
If your application has some control over the external system that it needs to connect to, it may be able to determine the outbound password, just like your users define their passwords for your application. For instance, in a multi-tenant environment, data for the tenants might be stored in separate databases, and your application may be able to pick an outbound password for each one of those as it creates them. Created outbound passwords must be sufficiently strong. One way to accomplish that is to use random strings with characters from different character classes. Another approach is Diceware. Make sure to use a good random number generator; in Java, for example, prefer SecureRandom over plain old Random. A minimal generator sketch follows this article.

Reference: Outbound Passwords from our JCG partner Remon Sinnema at the Secure Software Development blog.
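As a small illustration of the SecureRandom advice above, here is a sketch of a generator for strong outbound passwords. The alphabet and the length are arbitrary choices made for the example, not recommendations from the original article; adjust them to whatever the target system accepts.

import java.security.SecureRandom;

public class OutboundPasswordGenerator {

    // Character classes to draw from: upper case, lower case, digits and symbols.
    private static final String ALPHABET =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*()-_=+";

    private static final SecureRandom RANDOM = new SecureRandom();

    public static String generate(int length) {
        StringBuilder password = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            password.append(ALPHABET.charAt(RANDOM.nextInt(ALPHABET.length())));
        }
        return password.toString();
    }

    public static void main(String[] args) {
        // 32 characters from a 76-symbol alphabet gives roughly 200 bits of entropy.
        System.out.println(generate(32));
    }
}

The generated password would then be stored encrypted, as described in Solution #1, or provided at start up, as described in Solution #2.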

Cobertura and Maven: Code Coverage for Integration and Unit Tests

On the Turmeric project, we maintain a nightly dashboard where we collect statistics about the project, including code coverage, FindBugs analysis and other metrics. We had been using the Maven EMMA plugin to provide code coverage, but ran into a problem with EMMA: it was causing test failures after the classes were instrumented. So we disabled code coverage, as we needed accurate test results during our builds. However, we still needed code coverage; more importantly, we also needed coverage for the existing test suite, which is really an integration test suite rather than a unit test suite. The Cobertura and EMMA plugins are both really designed to work with unit tests, so we had to work around that limitation:

1. First we need to instrument the classes.
2. Second we need to jar up the instrumented classes and have them used by the build later.
3. We need to tell the integration tests to use the instrumented classes for their dependencies.
4. Finally, we generate an XML report of the results.

I tried doing this without falling back to Ant, but every time I tried to use the maven-site-plugin and configure it to generate the reports, it would complain that cobertura:check wasn't configured correctly. In our case I didn't need check to run, I just needed the reports generated. So Ant and AntContrib to the rescue. The following is the complete Maven profile I came up with:

<profile>
  <id>cobertura</id>
  <dependencies>
    <dependency>
      <groupId>net.sourceforge.cobertura</groupId>
      <artifactId>cobertura</artifactId>
      <optional>true</optional>
      <version>1.9.4.1</version>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>cobertura-maven-plugin</artifactId>
        <configuration>
          <instrumentation>
            <excludes>
              <exclude>org/ebayopensource/turmeric/test/**/*.class</exclude>
              <exclude>org/ebayopensource/turmeric/common/v1/**/*.class</exclude>
            </excludes>
          </instrumentation>
        </configuration>
        <executions>
          <execution>
            <id>cobertura-instrument</id>
            <phase>process-classes</phase>
            <goals>
              <goal>instrument</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-jar-plugin</artifactId>
        <executions>
          <execution>
            <id>cobertura-jar</id>
            <phase>post-integration-test</phase>
            <goals>
              <goal>jar</goal>
            </goals>
            <configuration>
              <classifier>cobertura</classifier>
              <classesDirectory>${basedir}/target/generated-classes/cobertura</classesDirectory>
            </configuration>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-install-plugin</artifactId>
        <version>2.3.1</version>
        <executions>
          <execution>
            <id>cobertura-install</id>
            <phase>install</phase>
            <goals>
              <goal>install</goal>
            </goals>
            <configuration>
              <classifier>cobertura</classifier>
            </configuration>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-antrun-plugin</artifactId>
        <executions>
          <execution>
            <phase>verify</phase>
            <configuration>
              <tasks>
                <taskdef classpathref='maven.runtime.classpath' resource='tasks.properties' />
                <taskdef classpathref='maven.runtime.classpath' resource='net/sf/antcontrib/antcontrib.properties' />
                <available file='${project.build.directory}/cobertura/cobertura.ser' property='ser.file.exists' />
                <if>
                  <equals arg1='${ser.file.exists}' arg2='true' />
                  <then>
                    <echo message='Executing cobertura report' />
                    <mkdir dir='${project.build.directory}/site/cobertura' />
                    <cobertura-report format='xml'
                        destdir='${project.build.directory}/site/cobertura'
                        datafile='${project.build.directory}/cobertura/cobertura.ser' />
                  </then>
                  <else>
                    <echo message='No SER file found.' />
                  </else>
                </if>
              </tasks>
            </configuration>
            <goals>
              <goal>run</goal>
            </goals>
          </execution>
        </executions>
        <dependencies>
          <dependency>
            <groupId>ant-contrib</groupId>
            <artifactId>ant-contrib</artifactId>
            <version>20020829</version>
          </dependency>
        </dependencies>
      </plugin>
    </plugins>
  </build>
</profile>

Note: Do not use the cobertura:cobertura goal with this profile. It will fail the build because it will try to instrument the classes twice. The use of Ant and AntContrib was a necessity because there is no cobertura:report goal; the plugin expects to run during the site generation phase. However, that causes the check goal to run as well, and we didn't need that. So maybe I'll work up a patch to add a reporting goal, just to run the report without having to run the site goal as well. Hopefully this helps some people, as I lost much hair working this out. Happy coding!

Reference: Enable Code Coverage for Integration and Unit Tests using Cobertura and Maven from our JCG partner David Carver at the Intellectual Cramps blog.

XACML In The Cloud

The eXtensible Access Control Markup Language (XACML) is the de facto standard for authorization. The specification defines an architecture that relates the different components that make up an XACML-based system. This post explores a variation on the standard architecture that is better suited for use in the cloud.

Authorization in the Cloud
In cloud computing, multiple tenants share the same resources, which they reach over a network. The entry point into the cloud must, of course, be protected using a Policy Enforcement Point (PEP). Since XACML implements Attribute-Based Access Control (ABAC), we can use an attribute to indicate the tenant, and use that attribute in our policies. We could, for instance, use the following standard attribute, which is defined in the core XACML specification:

urn:oasis:names:tc:xacml:1.0:subject:subject-id-qualifier
"This identifier indicates the security domain of the subject. It identifies the administrator and policy that manages the name-space in which the subject id is administered."

Using this attribute, we can target policies to the right tenant.

Keeping Policies For Different Tenants Separate
We don't want to mix policies for different tenants. First of all, we don't want a change in policy for one tenant to ever be able to affect a different tenant. Keeping those policies separate is one way to ensure that can never happen. We could achieve the same goal by keeping all policies together and carefully writing top-level policy sets, but we are better off employing the security best practice of segmentation and keeping policies for different tenants separate, in case there is a problem with those top-level policies or with the Policy Decision Point (PDP) evaluating them (defense in depth).

Multi-tenant XACML Architecture
We can use the composite pattern to implement a PDP that our cloud PEP can call. This composite PDP, the Multi-tenant PDP, extracts the tenant attribute from the request and forwards the request to a tenant-specific Context Handler/PDP/PIP/PAP system based on the value of the tenant attribute. It uses a component called the Tenant-PDP Provider that is responsible for looking up the correct PDP based on the tenant attribute. A minimal dispatch sketch follows this article.

Reference: XACML In The Cloud from our JCG partner Remon Sinnema at the Secure Software Development blog.
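To make the composite-pattern idea concrete, here is a minimal, hypothetical Java sketch of a Multi-tenant PDP that dispatches on the tenant attribute. The Pdp interface and the string-based request are simplifications invented for this example; a real implementation would use the request/response types and decision values of an actual XACML engine.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified stand-in for a real PDP: attributes in, decision string out.
interface Pdp {
    String evaluate(Map<String, String> attributes);
}

public class MultiTenantPdp implements Pdp {

    private static final String TENANT_ATTRIBUTE =
            "urn:oasis:names:tc:xacml:1.0:subject:subject-id-qualifier";

    // Plays the role of the Tenant-PDP Provider: tenant id -> tenant-specific PDP.
    private final Map<String, Pdp> pdpsByTenant = new ConcurrentHashMap<>();

    public void registerTenant(String tenantId, Pdp tenantPdp) {
        pdpsByTenant.put(tenantId, tenantPdp);
    }

    @Override
    public String evaluate(Map<String, String> attributes) {
        Pdp tenantPdp = pdpsByTenant.get(attributes.get(TENANT_ATTRIBUTE));
        // Fail closed when the tenant attribute is missing or unknown.
        return tenantPdp == null ? "Deny" : tenantPdp.evaluate(attributes);
    }
}

The cloud PEP only ever talks to MultiTenantPdp; each tenant's policies stay in that tenant's own PDP, which is the segmentation argument made above.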

Android Homescreen Widget with AlarmManager

In this tutorial we will learn to create a widget with an update interval of less than 30 minutes using AlarmManager.

New update: In Android 4.1, a new feature has been introduced for homescreen widgets which enables a widget to reorganize its view when resized. To support this feature, a new method, onAppWidgetOptionsChanged(), has been introduced in the AppWidgetProvider class. This method gets called in response to the ACTION_APPWIDGET_OPTIONS_CHANGED broadcast when the widget has been laid out at a new size.

Project Information: Meta-information about the project.
Platform Version: Android API Level 16.
IDE: Eclipse Helios Service Release 2
Emulator: Android 4.1

Prerequisite: Preliminary knowledge of the Android application framework, Intents, broadcast receivers and AlarmManager.

Example with a fixed update interval of less than 30 minutes: In this tutorial we will create a time widget which shows the current time. This widget will get updated every second, and we will be using AlarmManager for it. Here, a repeating alarm is set for a one-second interval. In a real-world scenario, however, it is not recommended to use a one-second repeating alarm because it drains the battery fast.

You have to follow the same steps mentioned in the previous widget tutorial to write the widget layout file, but this time we are introducing a TextView field in the layout which will display the time. The content of "time_widget_layout.xml" is given below.

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical"
    android:background="@drawable/widget_background" >

    <TextView
        android:id="@+id/tvTime"
        style="@android:style/TextAppearance.Medium"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:layout_gravity="center"
        android:layout_margin="4dip"
        android:gravity="center_horizontal|center_vertical"
        android:textColor="#000000" />

</LinearLayout>

Follow the same procedure to create the AppWidgetProvider metadata file. The content of the metadata file "widget_metadata.xml" is given below.

<appwidget-provider xmlns:android="http://schemas.android.com/apk/res/android"
    android:initialLayout="@layout/time_widget_layout"
    android:minHeight="40dp"
    android:minWidth="130dp"
    android:updatePeriodMillis="1800000" >
</appwidget-provider>

In this tutorial, onEnabled(), onDisabled(), onUpdate() and onAppWidgetOptionsChanged() have been defined, unlike the previous widget tutorial where only onUpdate() was defined.

- onEnabled(): An instance of AlarmManager is created here to start the repeating timer and register the intent with the AlarmManager. As this method gets called at the very first instance of widget installation, it helps to set the repeating alarm only once.
- onDisabled(): In this method, the alarm is canceled because this method gets called as soon as the very last instance of the widget is removed/uninstalled, and we don't want to leave the registered alarm behind when it is no longer being used.
- onUpdate(): This method updates the time on the remote TextView.
- onAppWidgetOptionsChanged(): This method gets called when the widget is resized.

package com.rakesh.widgetalarmmanagerexample;

import android.app.AlarmManager;
import android.app.PendingIntent;
import android.appwidget.AppWidgetManager;
import android.appwidget.AppWidgetProvider;
import android.content.ComponentName;
import android.content.Context;
import android.content.Intent;
import android.os.Bundle;
import android.widget.RemoteViews;
import android.widget.Toast;

public class TimeWidgetProvider extends AppWidgetProvider {

    @Override
    public void onDeleted(Context context, int[] appWidgetIds) {
        Toast.makeText(context, "TimeWidgetRemoved id(s):" + appWidgetIds, Toast.LENGTH_SHORT).show();
        super.onDeleted(context, appWidgetIds);
    }

    @Override
    public void onDisabled(Context context) {
        Toast.makeText(context, "onDisabled():last widget instance removed", Toast.LENGTH_SHORT).show();
        // Cancel the repeating alarm when the last widget instance is removed.
        Intent intent = new Intent(context, AlarmManagerBroadcastReceiver.class);
        PendingIntent sender = PendingIntent.getBroadcast(context, 0, intent, 0);
        AlarmManager alarmManager = (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
        alarmManager.cancel(sender);
        super.onDisabled(context);
    }

    @Override
    public void onEnabled(Context context) {
        super.onEnabled(context);
        AlarmManager am = (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
        Intent intent = new Intent(context, AlarmManagerBroadcastReceiver.class);
        PendingIntent pi = PendingIntent.getBroadcast(context, 0, intent, 0);
        // First trigger shortly after now (100 * 3 ms), then repeat every second.
        am.setRepeating(AlarmManager.RTC_WAKEUP, System.currentTimeMillis() + 100 * 3, 1000, pi);
    }

    @Override
    public void onUpdate(Context context, AppWidgetManager appWidgetManager, int[] appWidgetIds) {
        ComponentName thisWidget = new ComponentName(context, TimeWidgetProvider.class);

        for (int widgetId : appWidgetManager.getAppWidgetIds(thisWidget)) {
            // Get the remote views.
            RemoteViews remoteViews = new RemoteViews(context.getPackageName(), R.layout.time_widget_layout);
            // Set the text with the current time.
            remoteViews.setTextViewText(R.id.tvTime, Utility.getCurrentTime("hh:mm:ss a"));
            appWidgetManager.updateAppWidget(widgetId, remoteViews);
        }
    }

    @Override
    public void onAppWidgetOptionsChanged(Context context, AppWidgetManager appWidgetManager,
            int appWidgetId, Bundle newOptions) {
        // Do some operation here, once you see that the widget has changed its size or position.
        Toast.makeText(context, "onAppWidgetOptionsChanged() called", Toast.LENGTH_SHORT).show();
    }
}

A broadcast receiver is defined to handle the intent registered with the alarm. This broadcast receiver gets called every second because the repeating alarm has been set for one second in the AppWidgetProvider class. Here, the onReceive() method has been defined, which updates the widget with the current time; getCurrentTime() is used to get the current time.
package com.rakesh.widgetalarmmanagerexample;

import android.app.AlarmManager;
import android.app.PendingIntent;
import android.appwidget.AppWidgetManager;
import android.content.BroadcastReceiver;
import android.content.ComponentName;
import android.content.Context;
import android.content.Intent;
import android.os.PowerManager;
import android.widget.RemoteViews;
import android.widget.Toast;

public class AlarmManagerBroadcastReceiver extends BroadcastReceiver {

    @Override
    public void onReceive(Context context, Intent intent) {
        PowerManager pm = (PowerManager) context.getSystemService(Context.POWER_SERVICE);
        PowerManager.WakeLock wl = pm.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "YOUR TAG");
        // Acquire the lock.
        wl.acquire();

        // You can do the processing here and update the widget/remote views.
        RemoteViews remoteViews = new RemoteViews(context.getPackageName(), R.layout.time_widget_layout);
        remoteViews.setTextViewText(R.id.tvTime, Utility.getCurrentTime("hh:mm:ss a"));
        ComponentName thiswidget = new ComponentName(context, TimeWidgetProvider.class);
        AppWidgetManager manager = AppWidgetManager.getInstance(context);
        manager.updateAppWidget(thiswidget, remoteViews);

        // Release the lock.
        wl.release();
    }
}

It's always a good idea to keep utility methods in a utility class which can be accessed from other packages. getCurrentTime() has been defined in the Utility class. This method is used in the AppWidgetProvider and BroadcastReceiver classes.

package com.rakesh.widgetalarmmanagerexample;

import java.text.Format;
import java.text.SimpleDateFormat;
import java.util.Date;

public class Utility {
    // Formats the current time using the given pattern, e.g. "hh:mm:ss a".
    public static String getCurrentTime(String timeformat) {
        Format formatter = new SimpleDateFormat(timeformat);
        return formatter.format(new Date());
    }
}

In the Android manifest file, we need to include the WAKE_LOCK permission because a wake lock is used in the broadcast receiver. AlarmManagerBroadcastReceiver has been registered as a broadcast receiver. The remaining part is simple to understand.

<manifest android:versionCode="1"
    android:versionName="1.0"
    package="com.rakesh.widgetalarmmanagerexample"
    xmlns:android="http://schemas.android.com/apk/res/android">

    <uses-sdk android:minSdkVersion="16" android:targetSdkVersion="16"/>

    <uses-permission android:name="android.permission.WAKE_LOCK"/>

    <application android:icon="@drawable/ic_launcher" android:label="@string/app_name">
        <activity android:label="@string/title_activity_widget_alarm_manager"
            android:name=".WidgetAlarmManagerActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN"/>
                <category android:name="android.intent.category.LAUNCHER"/>
            </intent-filter>
        </activity>
        <receiver android:icon="@drawable/ic_launcher" android:label="@string/app_name"
            android:name=".TimeWidgetProvider">
            <intent-filter>
                <action android:name="android.appwidget.action.APPWIDGET_UPDATE"/>
            </intent-filter>
            <meta-data android:name="android.appwidget.provider"
                android:resource="@xml/widget_metadata"/>
        </receiver>
        <receiver android:name=".AlarmManagerBroadcastReceiver"/>
    </application>
</manifest>

Once the code is executed, the widget gets registered. When you install the widget on the homescreen, it appears as shown below. You can download the source code from here.

Reference: Tutorial on Android Homescreen Widget with AlarmManager from our JCG partner Rakesh Cusat at the Code4Reference blog.