
What's New Here?


A closer look at Oracle IDM Auditing

Reporting is a vital functionality in any product that deals with sensitive information, and the same applies to Identity & Access Management tools. Oracle IDM’s Auditing module acts as the foundation for its OOTB Reporting capabilities. Let’s take a quick look at the Auditing engine and how it facilitates the Reporting functionality within OIM. The use case presented here is simple – a change to a user record in OIM. What is the sequence of events that gets triggered from an audit perspective? This is best explained by a diagram, and I came up with the figure below in an attempt to better articulate the process. Although the diagram is self-explanatory, a theoretical translation of the same is not going to harm us!

- The updated/created user record gets pushed into the USR table (which stores the user information) – it’s the normal process by which the information gets recorded in the OIM database.
- The information is further propagated by the OIM Auditing engine (as a part of the core back-end server logic), which initiates a transaction.
- The Audit engine inserts a new entry in the AUD_JMS table as a part of the audit transaction completion. The AUD_JMS table is nothing but a staging table.
- The Issue Audit Messages scheduled job picks up the audit messages in the AUD_JMS table and submits the key to the oimAuditQueue JMS queue.
- The MDB corresponding to the queue initiates the audit data processing – the data is seeded into the UPA table. This data is in the form of XML: snapshots of the user profile at the instant when the user record was actually modified/created. The UPA table also stores the delta (changes to the profile).
- Finally, the post-processors of the Audit engine pick up the XML snapshots from the central UPA table and store them in specific audit tables (in a de-normalized format) like UPA_USR, UPA_USR_FIELDS, UPA_RESOURCE, UPA_UD_FORMS etc.

These tables serve as the primary source of information for the Reporting module. If you have ever worked on the OIM Reporting module, I am sure you can relate to the Data Sources which you configure on your BI Publisher instance – these execute direct queries on the above-mentioned audit tables for their data. That’s pretty much it! This was not a coverage of the entire Audit module in OIM, but a preview of HOW the process is orchestrated at a high level. Thanks for reading!

Reference: A closer look at Oracle IDM Auditing from our JCG partner Abhishek Gupta at the Object Oriented.. blog....
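As a rough illustration of the kind of direct query a reporting data source ends up issuing against these audit tables, here is a minimal JDBC sketch (this is not from the original article; the connection URL and credentials are placeholders, and in a real setup BI Publisher would run the SQL for you):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class AuditTableQuerySample {

    public static void main(String[] args) throws Exception {
        // Placeholder connection details for the OIM database schema.
        try (Connection connection = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//oim-db-host:1521/OIMDB", "oim_user", "password");
             Statement statement = connection.createStatement();
             // UPA_USR holds the denormalized user-profile audit snapshots described above.
             ResultSet resultSet = statement.executeQuery("SELECT * FROM UPA_USR")) {

            ResultSetMetaData metaData = resultSet.getMetaData();
            while (resultSet.next()) {
                // Print each audit row generically, column by column.
                StringBuilder row = new StringBuilder();
                for (int i = 1; i <= metaData.getColumnCount(); i++) {
                    row.append(metaData.getColumnName(i)).append('=')
                       .append(resultSet.getObject(i)).append(' ');
                }
                System.out.println(row);
            }
        }
    }
}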

Nightmare on Agile Street

I’m awake. I’m lying in my bed. I’m sweating but I’m cold. It’s the small hours of the morning and the dream is as vivid as it is horrid…. I’m standing in a client’s offices. I’ve been here before, I know what’s happening. They are building a website. Quite a complex one, this will be the primary purchasing venue for many customers. This will project the company image – and with the right bits it can up-sell to customers – it can even help reduce costs by servicing the customers after the sale. All good stuff. But it is atrociously “behind” schedule, someone said it would be finished in a year, that was three years ago before any code was written. Now it’s two years to completion, but in my dream people say 2+3=2. How can that be?

I can’t say it, but the only way out I can see is cancellation. If I was suddenly in charge of the client I’d cancel the thing. I’d salvage what I could and I’d launch a new, smaller, initiative to replace the website. But it’s too big to fail, even the board knows how much money they are spending. Who’s going to walk in there and say: “Scrap it.” Saying “Scrap it” would be to admit one failure and invite a messenger shooting. And if I was the head of the supplier I’d say the same thing. I’d say to my customer: “I know I’m earning oodles of cash out of this, I know it’s a high-profile feather in our cap, but really it’s out of control, you really shouldn’t continue.” But of course they won’t. Forget the money they’d lose, they weren’t hired to answer back – like my tailor friend. And of course I’m neither of those. I’m just the guy having the nightmare, and in the nightmare I’m the consultant who is trying to fix it. In the nightmare I’m not fixing it, I’m providing cover. While I’m there it’s Agile, while it’s Agile it’s good; Agile is a good drug and I’m the pusher.

“You can’t cancel it because all the competitors have one and so we must have one,” a ghostly apparition tells me. “We must be best in class,” says another apparition. “We must be head-and-shoulders above the opposition,” says a third – aren’t the opposition seven times the size? And don’t the competition buy large parts of their solution off the shelf? But every time I look the work seems to grow. Every discussion ends in more stories. Not just stories: epics, super-stories, sub-epics, mezzanine-stories. But it’s OK, this is Agile. The business keeps throwing new requests at it which are just accepted – because they are Agile! Some of these are quite big. But that’s OK because the team are Agile. And Agile means the team do what the business want, right?

I watch the Analysts work over the stories in the backlog; as they do, each grows and replicates like an alien parasite. The Analysts find more edge cases, extra detail which needs to be included, more scenarios which need to be catered for. Each becomes a story itself. But that’s OK because the team are Agile. And those damn competitors don’t stop adding and improving their site, which means the client must add that too. But that’s OK because the team are Agile. And the points…. points are the new hours; in the dream I have a book, “The Mythical Man Point”. The backlog is measured in thousands of points. The burn-down charts go down – but only if you look at the sprint burn-down. Hunt around Jira and you can find a project-wide burn-down. O my god, no….. it’s full of stories! This is not a burn-down chart carrying us to a safe landing, it’s a fast-climbing interceptor… The backlog is a demon… it’s… it’s… undead. The faces of those who’ve seen the chart are prematurely aged.
Open Jira and show someone the chart and…. their hair turns grey, the wrinkles appear, in moments they are…. One man is immune. As the points grow his power grows, he is… he is… The Product Owner. He introduces himself: “Snape is the name, Severus Snape” – I knew I’d seen him somewhere before. In the planning meeting he sees the poker cards pulled out, he focuses on the developer with the highest score, there is a ray of cutting sarcasm… he withers. The developers submit, the numbers are lowered. The Product Owner chuckles to himself – no over-estimating on his watch! One of the developers suggests: “Maybe we should wait until we finish the current work.” Snape sneers: “I thought you were Agile, boy?” “If you can’t handle it I have some friends in Transylvania who are really Agile…. do you want to lose the contract, boy? … Off-shore is so so cheap…”

There is a reality distortion field around the Product Owner. Show him a burn-down chart and it looks good; his presentations to the steering committee always show a perfect burn-down. I’m in my pyjamas standing outside the building at night: a sinister-looking figure is breaking and entering, he sneaks into the building, he opens Jira and … inserts stories! His mask falls, it is…. The Product Owner! Of course, without stories in the backlog he would cease to exist; his power comes from the size of the backlog, more stories more power. Ever since his boss came down with a rare form of chronic flu a link in the reporting chain has been missing. Made worse when the next man up was dismissed for inappropriate behaviour in the canteen. Since then the Product Owner reports to the COO, a COO who doesn’t really have time for him and only has a shaky understanding of any IT related topic.

I do the maths. The backlog isn’t so much a backlog as a mortgage, and the team are under water! The payments they make against the mortgage aren’t even covering the growth in stories. The backlog growth is an interest rate they can’t pay. It takes months for stories to progress through the backlog and reach developers. When work finally gets to developers they too uncover more edge cases, more details, more scenarios, more of just about everything. Why didn’t the Analysts find these? Did they find them and then lose them? Then there is a stream of bugs coming in – oozing through the walls. The technical practices aren’t solid, they are… custard! Bugs get caught but more get through! Bugs can’t be fixed because: “bugs are OpEx and we are funded from CapEx.” Someone has slain the Bug Fixing Fairy, her body is found slumped in the corner, a nice young girl straight out of college. They are hiring another fresh young graduate to send to the slaughter; fortunately Bug Fixing Fairies are Plug Compatible with one another. Release dates can’t be honoured.

Woody Allen and Annie Hall walk in – since when did Woody Allen do horror films? ‘Two elderly women are at a Catskill mountain resort, and one of ‘em says, “Boy, the food at this place is really terrible.” The other one says, “Yeah, I know; and such small portions.”’ I have X-Ray vision: I can see WIP where it lies, there are piles of it on the floor. It’s stacked up like beer barrels in a brewery. But the beer isn’t drinkable. It’s a fiendish plan. If anyone drinks that beer, if the WIP is shipped, they will discover…. it’s full of holes! Quality control is… offshore. Why is there so much WIP lying around? Why is the WIP rising? Because they are Agile comes the reply… the business can change their mind at any time, and they do.
I’m drowning in WIP. WHIP to the left of me, WHIP to the right of me. The developers are half way through a piece of work and the team are told to put it to one side and do something else. Nothing gets delivered, everything is half baked. WHIP – work hopefully in process, that is. When, and IF, the team return to a piece of WHIP things have changed, the team members might have changed, so picking it up isn’t easy. WHIP goes off, the stench of slowly rotting software. But that’s OK because the team are Agile. Arhhh, the developers are clones, they are plug compatible, you can switch them in and out as you like… but they have no memory….

It gets worse, the client has cunningly outsourced their network ops to another supplier, and their support desk to another one, and the data-centre to another… no one contractor has more than one contract. It’s a perverse form of WIP limit: no supplier is allowed more than one contract. O my god, I’m flying through the data centre, the data centre supplier has lost control, there are creepers everywhere, each server is patched in a different way, there is a stack of change configuration requests in a dark office, I approach the clerk, it’s… it’s…. Terry Gilliam, the data centre is in Brazil…. Even when the business doesn’t change its mind the development team get stuck. They have dependencies on other teams and on some other sub-contractor. So work gets put to one side again, more WIP. All roads lead to Dounreay in Scotland, a really good place if you want to build something really dangerous, but why does this project require a fast breeder nuclear reactor? But that’s OK because the team are Agile.

The supplier is desperate to keep their people busy; if The Product Owner sees a programmer whose fingers are not moving on the keyboard he turns them to stone. The team manager is desperate to save his people, he rummages in the backlog and finds… a piece of work they can do. (With a backlog that large you can always find something, even if the business value is negative – and there are plenty of those.) You can’t blame the development team, they need to look busy, they need to justify themselves, so they work on what they can. But that’s OK because the team are Agile. Get me out of here!!!!!

I’m in my kitchen. My hands are wrapped around a hot chocolate, I need a fresh pair of dry pyjamas but that can wait while I calm down. I’ve wrapped a blanket around me and have the shivers under control. Are they Agile? Undoubtedly it was sold as Agile. It certainly ain’t pretty but it is called Agile. They have iterations. They have planning meetings. They have burn-downs. They have a Scrum Master and they have Jira. They have User Stories. They have some, slow, automated acceptance tests; some developers are even writing automated unit tests. How could it have gone so wrong?

Sure, the development team could be better. You could boost the supply curve. But that would be like administering morphine. The pain would be relieved for a while, but the fever would return and it would be worse. The real problem is elsewhere. The real problem is rampant demand. The real problem is poor client management. The real problem is a client who isn’t looking at the feedback. The real problems are multiple; that’s what is so scary about the dream. They are all interconnected. In the wee small hours I see no way of winning, it’s a quagmire: to save this project we need to destroy this project. But we all know what happened in Vietnam. What is to be done?
I can’t go back to sleep until I have an answer. Would the team be better off doing Waterfall? The business would still change its mind, project management would put a change request process in place and the propagation delay would be worse. There would probably be more bugs – testing would be postponed. Releases would be held back. This would look better for a few months, until they came to actually test and release. If they did waterfall, if they did a big requirements exercise, a big specification, a big design, a big estimation and a big plan, they might not choose to do it. But frankly Agile is telling them clearly this will never be done. In fact it’s telling them with a lot more certainty, because they are several years in and have several years of data to look at. Agile is the cover. Because they are Agile they are getting more rope to hang themselves with. But all this is a dream, a horrid dream, none of this ever happened.

Reference: Nightmare on Agile Street from our JCG partner Allan Kelly at the Agile, Lean, Patterns blog....

Devops isn’t killing developers – but it is killing development and developer productivity

Devops isn’t killing developers – at least not any developers that I know. But Devops is killing development, or the way that most of us think of how we are supposed to build and deliver software. Agile loaded the gun. Devops is pulling the trigger.

Flow instead of Delivery

A sea change is happening in the way that software is developed and delivered. Large-scale waterfall software development projects gave way to phased delivery and Spiral approaches, and then to smaller teams delivering working code in time boxes using Scrum or other iterative Agile methods. Now people are moving on from Scrum to Kanban, and to One-Piece Continuous Flow with immediate and Continuous Deployment of code to production in Devops. The scale and focus of development continues to shrink, and so does the time frame for making decisions and getting work done. From phases and milestones and project reviews, to sprints and sprint reviews, to Lean controls over WIP limits and task-level optimization. The size of deliverables: from what a project team could deliver in a year to what a Scrum team could get done in a month or a week to what an individual developer can get working in production in a couple of days or a couple of hours.

The definition of “Done” and “Working Software” changes from something that is coded and tested and ready to demo to something that is working in production – now (“Done Means Released”). Continuous Delivery and Continuous Deployment replace Continuous Integration. Rapid deployment to production doesn’t leave time for manual testing or for manual testers, which means developers are responsible for catching all of the bugs themselves before code gets to production – or they do their testing in production and try to catch problems as they happen (aka “Monitoring as Testing”). Because Devops brings developers much closer to production, operational risks become more important than project risks, and operational metrics become more important than project metrics. System uptime and cycle time to production replace Earned Value or velocity. The stress of hitting deadlines is replaced by the stress of firefighting in production and being on call.

Devops isn’t about delivering a project or even delivering features. It’s about minimizing lead time and maximizing flow of work to production, recognizing and eliminating junk work and delays and hand-offs, improving system reliability and cutting operational costs, building in feedback loops from production to development, standardizing and automating steps as much as possible. It’s more manufacturing and process control than engineering.

Devops kills Developer Productivity too

Devops also kills developer productivity. Whether you try to measure developer productivity by LOC or Function Points or Feature Points or Story Points or velocity or some other measure of how much code is written, less coding gets done because developers are spending more time on ops work and dealing with interruptions, and less time writing code. Time learning about the infrastructure and the platform and understanding how it is set up and making sure that it is set up right. Building Continuous Delivery and Continuous Deployment pipelines and keeping them running.
Helping ops to investigate and resolve issues, responding to urgent customer requests and questions, looking into performance problems, monitoring the system to make sure that it is working correctly, helping to run A/B experiments, pushing changes and fixes out… all take time away from development and pre-empt thinking about requirements and designing and coding and testing (the work that developers are trained to do and are good at).

The Impact of Interruptions and Multi-Tasking

You can’t protect developers from interruptions and changes in priorities in Devops, even if you use Kanban with strict WIP limits, even in a tightly run shop – and you don’t want to. Developers need to be responsive to operations and customers, react to feedback from production, jump on problems and help detect and resolve failures as quickly as possible. This means everyone, especially your most talented developers, needs to be available for ops most if not all of the time. Developers join ops on call after hours, which means carrying a pager (or being chased by Pager Duty) after the day’s work is done. And time is wasted on support calls for problems that end up not being real problems, and on long nights and weekends of fire fighting and tracking down production issues and helping to recover from failures, coming in tired the next day to spend more time on incident dry runs and testing failover and roll-forward and roll-back recovery, and participating in post mortems and root cause analysis sessions when something goes wrong and the failover or roll-forward or roll-back doesn’t work.

You can’t plan for interruptions and operational problems, and you can’t plan around them. Which means developers will miss their commitments more often. Then why make commitments at all? Why bother planning or estimating? Use just-in-time prioritization instead to focus on the most important thing that ops or the customer need at the moment, and deliver it as soon as you can – unless something more important comes up and pre-empts it. As developers take on more ops and support responsibilities, multi-tasking and task switching – and the interruptions and inefficiency that come with them – increase, fracturing time and destroying concentration. This has an immediate drag on productivity, and a longer-term impact on people’s ability to think and to solve problems.

Even the Continuous Deployment feedback loop itself is an interruption to a developer’s flow. After a developer checks in code, running unit tests in Continuous Integration is supposed to be fast, a few seconds or minutes, so that they can keep moving forward with their work. But to deploy immediately to production means running through a more extensive set of integration tests and systems tests and other checks in Continuous Delivery (more tests and more checks take more time), then executing the steps through to deployment, and then monitoring production to make sure that everything worked correctly, and jumping in if anything goes wrong. Even if most of the steps are automated and optimized, all of this takes extra time and the developer’s attention away from working on code. Optimizing the flow of work in and out of operations means sacrificing developer flow, and slowing down development work itself.

Expectations and Metrics and Incentives have to Change

In Devops, the way that developers (and ops) work changes, and the way that they need to be managed changes. It’s also critical to change expectations and metrics and incentives for developers.
Devops success is measured by operational IT metrics, not on meeting project delivery goals of scope, schedule and cost, not on meeting release goals or sprint commitments, or even meeting product design goals:

- How fast can the team respond to important changes and problems: Change Lead Time and Cycle Time to production instead of delivery milestones or velocity
- How often do they push changes to production (which is still the metric that most people are most excited about – how many times per day or per hour or minute Etsy or Netflix or Amazon deploy changes)
- How often do they make mistakes – Change / Failure ratio
- System reliability and uptime – MTBF and especially MTTD and MTTR
- Cost of change – and overall Operations and Support costs

Devops is more about Ops than Dev

As more software is delivered earlier and more often to production, development turns into maintenance. Project management is replaced by incident management and task management. Planning horizons get much shorter – or planning is replaced by just-in-time queue prioritization and triage. With Infrastructure as Code, Ops become developers, designing and coding infrastructure and infrastructure changes, thinking about reuse and readability and duplication and refactoring, technical debt and testability, and building on TDD to implement TDI (Test Driven Infrastructure). They become more agile and more Agile, making smaller changes more often, spending more time programming and less on paperwork.

And developers start to work more like ops: taking on responsibilities for operations and support, putting operational risks first, caring about the infrastructure, building operations tools, finding ways to balance immediate short-term demands for operational support with longer-term design goals. None of this will be a surprise to anyone who has been working in an online business for a while. Once you deliver a system and customers start using it, priorities change, and everything about the way that you work and plan has to change too.

This way of working isn’t necessarily better or worse for developers. But it is fundamentally different from how many developers think and work today. More frenetic and interrupt-driven. At the same time, more disciplined and more Lean. More transparent. More responsibility and accountability. Less about development and more about release and deployment and operations and support. Developers – and their managers – will need to get used to being part of the bigger picture of running IT, which is about much more than designing apps and writing and delivering code. This might be the future of software development. But not all developers will like it, or be good at it.

Reference: Devops isn’t killing developers – but it is killing development and developer productivity from our JCG partner Jim Bird at the Building Real Software blog....

Test Attribute #6 – Maintenance

I always hated the word “maintainability” in the context of tests. Tests, like any other code, are maintainable. Unless there comes a time where we decide we can’t take it anymore and the code needs a rewrite, the code is maintainable. We can go and change it, edit or replace it. The same goes for tests. Once we’ve written them, they are maintainable. So why are we talking about maintainable tests?

The trouble with tests is that they are not considered “real” code. They are not production code. Developers, starting out on the road to better quality, seem to regard tests not just as extra work, but also as second-class work. All activities that are not directed at running code on a production server, or a client computer, are regarded as “actors in supporting roles”. Obviously writing the tests has an associated future cost. It’s a cost on supporting work, which is considered less valuable. One of the reasons developers are afraid to start writing tests is the accumulated multiplier effect: “Ok, I’m willing to write the tests, which doubles my work load. I know that this code is going to change in the future, and therefore I’ll have to do double the work, many times in the future. Is it worth it?”

Test maintenance IS costly

But not necessarily because of that. The first change we need to make is a mental one. We need to understand that all our activities, including the “supporting” ones, are all first-class. That also includes the test modifications in the future: after all, if we’re going to change the code to support that requirement, that will require tests for that requirement. The trick is to keep the effort to a minimum. And we can do that, because some of that future effort is waste that we’re creating now. The waste happens when the requirements don’t change, but the tests fail, and not because of a bug. We then need to fix the test, although there wasn’t a real problem. Re-work. Here’s a very simple example, taken from the Accuracy attribute post:

[Test]
public void AddTwoPositiveNumbers_GetResult()
{
    PositiveCalculator calculator = new PositiveCalculator();
    Assert.That(calculator.Add(2, 2), Is.EqualTo(4));
}

What happens if we decide to rename the PositiveCalculator to Calculator? The test will not compile. We’ll need to modify the test in order to pass. Renaming stuff doesn’t seem that much of a trouble, though – we’re relying on modern tools to replace the different occurrences. However, this is very dependent on tools and technology. If we did this in C# or in Java, there is not only automation, but also quick feedback mechanisms that catch this, and we don’t even think we’re maintaining the tests. Imagine you’d get the compilation error only after 2 hours of compiling, rather than immediately after you’ve done the changes. Or only after the automated build cycle. The further we get from automation and quick feedback, the more we tend to look at maintenance as a bigger monster.

Lowering maintenance costs

The general advice is: “Don’t couple your tests to your code”. There’s a reason I chose this example: tests are always coupled to the code. The level of coupling, and the feedback mechanisms we use, affect how big these “maintenance” tasks are going to be. Here are some tips for lowering the chance of test maintenance (a small Java sketch of the assert and mocking tips follows at the end of this section).

- Check outputs, not algorithms. Because tests are coupled to the code, the fewer implementation details the test knows about, the better. Robust tests do not rely on specific method calls inside the code. Instead, they treat the tested system as a black box, even though they may know how it’s internally built. These tests, by the way, are also more readable.
- Work against a public interface. Test from the outside and avoid testing internal methods. We want to keep the internal method list (and signatures) inside our black box. If you feel that’s unavoidable, consider extracting the internal method to a new public object.
- Use the minimal amount of asserts. Being too specific in our assert criteria, especially when using verification of method calls on dependencies, can lead to breaking tests without a benefit. Do we need to know a method was called 5 times, or that it was called at least once? When it was called, do we need to know the exact value of its argument, or maybe a range suffices? With every layer of specificity, we’re adding opportunities for breaking the test. Remember, with failure we want information to help solve the problem. If we don’t gain additional information from these asserts, lower the criteria.
- Use good refactoring tools. And a good IDE. And work with languages that support these. Otherwise, we’re delaying the feedback on errors, and causing the cost of maintenance to rise.
- Use less mocking. Using mocks is like using x-rays: they are very good at what they do, but over-exposure is bad. Mocks couple the code to the test even more. They allow us to specify the internal implementation of the code in the test. We’re now relying on the internal algorithm, which can change. And then our test will need some fixing.
- Avoid hand-written mocks. The hand-written ones are the worst, because unless they are very simple, it is very easy to copy the behavior of the tested code into the mocks. Frameworks encourage setting the behavior through the interface.

There’s a saying: code is a liability, not an asset. Tests are the same – maintenance will not go away completely. But we can lower the cost if we stick to these guidelines.

Reference: Test Attribute #6 – Maintenance from our JCG partner Gil Zilberfeld at the Geek Out of Water blog....
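Here is the small Java sketch promised above, illustrating the “minimal asserts, less mocking” advice with JUnit and Mockito. The OrderService and Mailer types are hypothetical (they are not from the original post); the first test pins down internal details and breaks on harmless refactoring, while the second asserts only what matters:

import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.anyString;
import static org.mockito.Mockito.eq;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class OrderServiceTest {

    // Hypothetical collaborator: sends a confirmation mail.
    interface Mailer {
        void send(String recipient, String body);
    }

    // Hypothetical system under test.
    static class OrderService {
        private final Mailer mailer;

        OrderService(Mailer mailer) {
            this.mailer = mailer;
        }

        boolean placeOrder(String customerEmail) {
            mailer.send(customerEmail, "Thanks for your order!");
            return true;
        }
    }

    @Test
    public void overSpecified_breaksOnHarmlessChanges() {
        Mailer mailer = mock(Mailer.class);
        OrderService service = new OrderService(mailer);

        service.placeOrder("a@b.com");

        // Pins the exact call count and exact arguments - any internal change breaks this test.
        verify(mailer, times(1)).send("a@b.com", "Thanks for your order!");
    }

    @Test
    public void minimalAssert_checksOnlyWhatMatters() {
        Mailer mailer = mock(Mailer.class);
        OrderService service = new OrderService(mailer);

        assertTrue(service.placeOrder("a@b.com"));
        // Relaxed verification: the mail went to the right person, the exact wording is irrelevant.
        verify(mailer).send(eq("a@b.com"), anyString());
    }
}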

ngImprovedTesting: mock testing for AngularJS made easy

Being able to easily test your application is one of the most powerful features that AngularJS offers. All the services, controllers, filters and even directives you develop can be fully (unit) tested. However the learning curve for writing (proper) unit tests tends to be quite steep. This is mainly because AngularJS doesn’t really offer any high-level APIs to ease unit testing. Instead you are forced to use the same (low-level) services that AngularJS uses internally. That means you have to gain in-depth knowledge about the internals of $controller, when to $digest and how to use $provide in order to mock these services. Especially mocking out a dependency of a controller, filter or another service is too cumbersome. This blog will show how you would normally create mocks in AngularJS, why it’s troublesome, and finally introduces the new ngImprovedTesting library that makes mock testing much easier.

Sample application

Consider the following application consisting of the “userService” and the “permissionService”:

var appModule = angular.module('myApp', []);

appModule.factory('userService', function($http) {
    var detailsPerUsername = {};

    $http({method: 'GET', url: '/users'})
        .success(function(users) {
            detailsPerUsername = _.indexBy(users, 'username');
        });

    return {
        getUserDetails: function(userName) {
            return detailsPerUsername[userName];
        }
    };
});

appModule.factory('permissionService', function(users) {
    return {
        hasAdminAccess: function(username) {
            return users.getUserDetails(username).admin === true;
        }
    };
});

When it comes to unit testing the “permissionService” there are two default strategies:

- using the mock $httpBackend (from the ngMock module) to simulate $http traffic from the “userService”
- using a mock instead of the actual “userService” dependency

Replacing the “userService” with a mock using vanilla AngularJS

Using vanilla AngularJS you have to do all the hard work yourself when you want to create a mock. You will have to manually create an object with its relevant fields and methods. Finally you will have to register the mock (using $provide) to overwrite the existing service implementation. Using the following vanilla AngularJS we can replace the “userService” with a mock in our unit tests:

describe('Vanilla mocked style permissions service specification', function() {

    var userServiceMock;

    beforeEach(module('myApp', function ($provide) {
        userServiceMock = {
            getUserDetails: jasmine.createSpy()
        };

        $provide.value('userService', userServiceMock);
    }));

    // ...

The imperfections of the vanilla style of mocking

The ability to mock services in unit tests is a really great feature in AngularJS, but it’s far from perfect. As a developer I really don’t want to be bothered with having to manually create a mock object. For instance I might simply forget to mock the “userService” dependency when testing the “permissionService”, meaning I would accidentally test it using the actual “userService”. And what if you were to refactor the “userService” and rename its method to “getUserInfo”? Then you would expect the unit test of the “permissionService” to fail, right? But it won’t, since the mocked “userService” still has the old “getUserDetails” (spy) method. To make things even worse… what if you were to rename the service to “userInfoService”? This makes the “userService” dependency of the “permissionService” no longer resolvable. Due to this modification the application will no longer bootstrap when executed inside a browser. But when executed from the unit test it won’t fail, since it still uses its own mock.
However other unit tests using the same module but not mocking the service will fail.

How mock testing could be improved

Coming from a Java background, I found the manual creation of mocks quite weird. In static languages the existence of interfaces (and classes) makes it much easier to automatically create mocks. Using AngularJS we could do something similar… what if we used the original service as a template for creating a mocked version? Then we could automatically create mocks that contain the same properties as the original object. Each non-method property could be copied as-is and each method would instead be a Jasmine spy. Instead of manually registering a mock service using $provide, we could automate this. This would also allow us to automatically check if a service you want to mock actually exists. Also we could check if the service being mocked is indeed being used as a dependency of a component.

Introducing the ngImprovedTesting library

With the intention of making (unit) testing easier I created the “ngImprovedTesting” library. The just-released 0.1 version supports (selectively) mocking out dependencies of a controller, filter or another service. Mocking out the “userService” dependency when testing the “permissionService” is now extremely easy:

describe('ngImprovedTesting mocked style permissions service specification', function() {

    beforeEach(ModuleBuilder.forModule('myApp')
        .serviceWithMocksFor('permissionService', 'userService')
        .build());

    // ... continues in the next code snippets

Instead of using the traditional “beforeEach(module('myApp'))” we are using the ModuleBuilder of “ngImprovedTesting” to build a module specifically for our test. In this case we would like to test the actual “permissionService” in combination with a mock for its “userService” dependency. But what if I would like to set some behavior on the automatically created mock… how do I actually get hold of the actual mock instance? Well, simple… besides the component being tested, all its dependencies, including the mocked ones, can be injected. To differentiate a mock from a regular one it’s registered with “Mock” appended to its name. So to inject the mocked-out version of “userService” just use “userServiceMock” instead:

describe('hasAdminAccess method', function() {

    it('should return true when user details has property: admin == true',
            inject(function(permissions, userServiceMock) {
        userServiceMock.getUserDetails.andReturn({admin: true});

        expect(permissions.hasAdminAccess('anAdminUser')).toBe(true);
    }));
});

As you can see in the example, the “userServiceMock.getUserDetails” method is just a Jasmine spy. It therefore allows invoking “andReturn” on it in order to set the return value of the method. However it does not allow an “andCallThrough”, as the spy is not on the original service.

Exploring the ModuleBuilder API of ngImprovedTesting

Since I didn’t get round to writing and generating JSDocs / NGDocs, I will instead quickly explain it here. To instantiate a “ModuleBuilder” use its static “forModule” method.
The “ModuleBuilder” (in version 0.1) consists of the following instance methods:

- serviceWithMocksFor: registers a service for testing and mocks the specified dependencies
- serviceWithMocks: registers a service for testing and mocks all dependencies
- serviceWithMocksExcept: registers a service for testing and mocks all dependencies except the specified ones
- controllerWithMocksFor: registers a controller for testing and mocks the specified dependencies
- controllerWithMocks: registers a controller for testing and mocks all dependencies
- controllerWithMocksExcept: registers a controller for testing and mocks all dependencies except the specified ones
- controllerAsIs: registers a controller so that it can be instantiated through $controller
- filterWithMocksFor: registers a filter for testing and mocks the specified dependencies
- filterWithMocks: registers a filter for testing and mocks all dependencies
- filterWithMocksExcept: registers a filter for testing and mocks all dependencies except the specified ones
- filterAsIs: registers a filter so that it can be used through $filter

Limitations in the initial (0.1) version of ngImprovedTesting

Although version 0.1 is quite production-ready (and well unit tested), it has its limitations:

- Services registered with the “provider” method currently cannot be used as the service to be tested; meaning they cannot be used as the first parameter of “serviceWithMocks…”. However they can be used as a (potentially mocked) dependency.
- Services which are registered using “$provide” (i.e. inside a config function of a module) instead of through “angular.Module” cannot be used as the service to be tested.
- Mock testing of directives is currently not supported.

How to get started with ngImprovedTesting

All sources from this blog post can be found as part of a sample application:

- https://github.com/evangalen/ng-improved-testing-sample.git

The sample application demonstrates three different flavors of testing:

- One that uses the $httpBackend
- Another using the vanilla mocking support
- And one using ngImprovedTesting

To execute the tests on the command line use the following commands (requires NodeJS, NPM, Bower and Grunt to be installed):

npm install
bower update
grunt

The actual sources of ngImprovedTesting itself are also hosted on GitHub:

- https://github.com/evangalen/ng-improved-testing.git: contains the source code of ngImprovedTesting itself.
- https://github.com/evangalen/ng-module-introspector.git: a specifically developed AngularJS module introspector that allows us to retrieve the exact declaration of a controller, filter or service and its dependencies.

Furthermore ngImprovedTesting is also available through Bower itself. You can easily install it and add it to an existing project using the following command:

bower install ng-improved-testing --save-dev

Your feedback is more than welcome

My goal for ngImprovedTesting is to ease mock testing in your AngularJS unit tests. I’m very interested in your feedback… is ngImprovedTesting useful… and how could it be improved?

Reference: ngImprovedTesting: mock testing for AngularJS made easy from our JCG partner Emil van Galen at the JDriven blog....

Java EE 7 with Angular JS – Part 1

Today’s post will show you how to build a very simple application using Java EE 7 and Angular JS. Before going there let me tell you a brief story: I have to confess that I was never a big fan of Javascript, but I still remember the first time I used it. I don’t remember the year exactly, but it was probably around the mid-90’s. I had a page with 3 frames (yes, frames! remember those? very popular around that time) and I wanted to reload 2 frames when I clicked a link on the 3rd frame. At the time, Javascript was used to do some fancy stuff on webpages; not every browser had Javascript support and some even required you to turn it on. Fast forward to today and the landscape has changed dramatically. Javascript is a full development stack now and you can develop entire applications written only in Javascript. Unfortunately for me, sometimes I still think I’m back in the 90’s and don’t give enough credit to Javascript, so this is my attempt to get to know Javascript better.

Why Java EE 7?

Well, I like Java and the new Java EE version is pretty good. Less verbose and very fast using Wildfly or Glassfish. It provides you with a large set of specifications to suit your needs and it’s a standard in the Java world.

Why Angular JS?

I’m probably following the big hype around Angular here. Since I don’t have much experience with Javascript I don’t know the offers very well, so I’m just following the advice of some friends, and I have also noticed a big acceptance of Angular at the last Devoxx. Every room with an Angular talk was full, so I wanted to give it a try and find out for myself.

The Application

For the application, it’s a simple list with pagination and a REST service that feeds the list data. Every time I start a new enterprise project it’s usually the first thing we code: create a table, store some data and list some random data, so I think it’s appropriate.

The Setup

- Java EE 7
- Angular JS
- ng-grid
- UI Bootstrap
- Wildfly

The Code (finally!)

Backend – Java EE 7

Starting with the backend, let’s define a very simple Entity class (some code is omitted for simplicity):

Person.java

@Entity
public class Person {
    @Id
    private Long id;

    private String name;

    private String description;
}

If you’re not familiar with the Java EE JPA specification, this will allow an object class to be modelled into a database table by using the annotation @Entity to connect to the database table with the same name, and the annotation @Id to identify the table primary key.
This is followed by a persistence.xml:

persistence.xml

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.1" xmlns="http://xmlns.jcp.org/xml/ns/persistence"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence
                                 http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd">
    <persistence-unit name="myPU" transaction-type="JTA">
        <properties>
            <property name="javax.persistence.schema-generation.database.action" value="drop-and-create"/>
            <property name="javax.persistence.schema-generation.create-source" value="script"/>
            <property name="javax.persistence.schema-generation.drop-source" value="script"/>
            <property name="javax.persistence.schema-generation.create-script-source" value="sql/create.sql"/>
            <property name="javax.persistence.schema-generation.drop-script-source" value="sql/drop.sql"/>
            <property name="javax.persistence.sql-load-script-source" value="sql/load.sql"/>
        </properties>
    </persistence-unit>
</persistence>

Two of my favourite new features in Java EE 7: you can now run SQL in a standard way by using the javax.persistence.schema-generation.* properties, and it also binds you to a default datasource if you don’t provide one. So in this case the application is going to use the internal Wildfly H2 database. Finally, to provide the list data we need to query the database and expose it as a REST service:

PersonResource.java

@Stateless
@ApplicationPath("/resources")
@Path("persons")
public class PersonResource extends Application {
    @PersistenceContext
    private EntityManager entityManager;

    private Integer countPersons() {
        Query query = entityManager.createQuery("SELECT COUNT(p.id) FROM Person p");
        return ((Long) query.getSingleResult()).intValue();
    }

    @SuppressWarnings("unchecked")
    private List<Person> findPersons(int startPosition, int maxResults, String sortFields, String sortDirections) {
        Query query = entityManager.createQuery("SELECT p FROM Person p ORDER BY " + sortFields + " " + sortDirections);
        query.setFirstResult(startPosition);
        query.setMaxResults(maxResults);
        return query.getResultList();
    }

    public PaginatedListWrapper<Person> findPersons(PaginatedListWrapper<Person> wrapper) {
        wrapper.setTotalResults(countPersons());
        int start = (wrapper.getCurrentPage() - 1) * wrapper.getPageSize();
        wrapper.setList(findPersons(start, wrapper.getPageSize(), wrapper.getSortFields(), wrapper.getSortDirections()));
        return wrapper;
    }

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public PaginatedListWrapper<Person> listPersons(@DefaultValue("1") @QueryParam("page") Integer page,
                                                    @DefaultValue("id") @QueryParam("sortFields") String sortFields,
                                                    @DefaultValue("asc") @QueryParam("sortDirections") String sortDirections) {
        PaginatedListWrapper<Person> paginatedListWrapper = new PaginatedListWrapper<>();
        paginatedListWrapper.setCurrentPage(page);
        paginatedListWrapper.setSortFields(sortFields);
        paginatedListWrapper.setSortDirections(sortDirections);
        paginatedListWrapper.setPageSize(5);
        return findPersons(paginatedListWrapper);
    }
}

The code is exactly like a normal Java POJO, but uses the Java EE annotations to enhance the behaviour. @ApplicationPath("/resources") and @Path("persons") will expose the REST service at the URL yourdomain/resources/persons, @GET marks the logic to be called by the HTTP GET method, and @Produces(MediaType.APPLICATION_JSON) formats the REST response as JSON. Pretty cool with only a few annotations.
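If you want to try the endpoint outside the browser, here is a minimal sketch of how a standalone Java client could call it using the standard JAX-RS 2.0 Client API (this is not part of the original post, and the base URL is an assumption that depends on how and where you deploy the application):

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;

public class PersonResourceClient {

    public static void main(String[] args) {
        Client client = ClientBuilder.newClient();

        // Hypothetical base URL - adjust host, port and context root to your own deployment.
        String json = client.target("http://localhost:8080/myapp/resources/persons")
                .queryParam("page", 1)
                .queryParam("sortFields", "id")
                .queryParam("sortDirections", "asc")
                .request(MediaType.APPLICATION_JSON)
                .get(String.class); // raw JSON representation of the PaginatedListWrapper

        System.out.println(json);
        client.close();
    }
}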
To make it a little easier to exchange the needed information for the paginated list, I have also created the following wrapper class:

PaginatedListWrapper.java

public class PaginatedListWrapper<T> {
    private Integer currentPage;
    private Integer pageSize;
    private Integer totalResults;

    private String sortFields;
    private String sortDirections;
    private List<T> list;
}

And we are done with the backend stuff.

UI – Angular JS

To display the data we are going to use Angular JS. Angular extends the traditional HTML with additional custom tag attributes to bind data represented in Javascript variables, following an MVC approach. So, let’s look at our HTML page:

index.html

<!DOCTYPE html>
<!-- Declares the root element that allows behaviour to be modified through Angular custom HTML tags. -->
<html ng-app="persons">
<head>
    <title></title>

    <script src="lib/angular.min.js"></script>
    <script src="lib/jquery-1.9.1.js"></script>
    <script src="lib/ui-bootstrap-0.10.0.min.js"></script>
    <script src="lib/ng-grid.min.js"></script>

    <script src="script/person.js"></script>

    <link rel="stylesheet" type="text/css" href="lib/bootstrap.min.css"/>
    <link rel="stylesheet" type="text/css" href="lib/ng-grid.min.css"/>
    <link rel="stylesheet" type="text/css" href="css/style.css"/>
</head>

<body>

<br>

<div class="grid">
    <!-- Specify a JavaScript controller script that binds Javascript variables to the HTML. -->
    <div ng-controller="personsList">
        <!-- Binds the grid component to be displayed. -->
        <div class="gridStyle" ng-grid="gridOptions"></div>

        <!-- Bind the pagination component to be displayed. -->
        <pagination direction-links="true" boundary-links="true"
                    total-items="persons.totalResults" page="persons.currentPage"
                    items-per-page="persons.pageSize" on-select-page="refreshGrid(page)">
        </pagination>
    </div>
</div>

</body>
</html>

Apart from the Javascript and CSS declarations there is very little code in there. Very impressive. Angular also has a wide range of ready-to-use components, so I’m using ng-grid to display the data and UI Bootstrap, which provides a pagination component. ng-grid also has a pagination component, but I liked the UI Bootstrap pagination component more. There is still something missing: the Javascript file where everything happens:

person.js

var app = angular.module('persons', ['ngGrid', 'ui.bootstrap']);

// Create a controller with name personsList to bind to the html page.
app.controller('personsList', function ($scope, $http) {
    // Makes the REST request to get the data to populate the grid.
    $scope.refreshGrid = function (page) {
        $http({
            url: 'resources/persons',
            method: 'GET',
            params: {
                page: page,
                sortFields: $scope.sortInfo.fields[0],
                sortDirections: $scope.sortInfo.directions[0]
            }
        }).success(function (data) {
            $scope.persons = data;
        });
    };

    // Do something when the grid is sorted.
    // The grid throws the ngGridEventSorted that gets picked up here and assigns the sortInfo to the scope.
    // This will allow to watch the sortInfo in the scope for changes and refresh the grid.
    $scope.$on('ngGridEventSorted', function (event, sortInfo) {
        $scope.sortInfo = sortInfo;
    });

    // Watch the sortInfo variable. If changes are detected then we need to refresh the grid.
    // This also works for the first page access, since we assign the initial sorting in the initialize section.
    $scope.$watch('sortInfo', function () {
        $scope.refreshGrid($scope.persons.currentPage);
    }, true);

    // Initialize required information: sorting, the first page to show and the grid options.
    $scope.sortInfo = {fields: ['id'], directions: ['asc']};
    $scope.persons = {currentPage : 1};

    $scope.gridOptions = {
        data: 'persons.list',
        useExternalSorting: true,
        sortInfo: $scope.sortInfo
    };
});

The Javascript code is very clean and organised. Notice how everything gets added to an app controller, allowing you a clean separation of concerns in your business logic. To implement the required behaviour we just need to add a few functions to refresh the list by calling our REST service and monitor the grid data to refresh the view. This is the end result:

Next Steps

For the following posts related to this series, I’m planning to:

- Implement filtering
- Implement detail view
- Implement next / prev browsing
- Deploy in the cloud
- Manage Javascript dependencies

Resources

You can clone a full working copy from my github repository and deploy it to Wildfly. You can find instructions there to deploy it. It should also work on Glassfish.

Java EE – Angular JS Source

Update

In the meanwhile I have updated the original code with the post about Manage Javascript dependencies. Please download the original source of this post from the release 1.0. You can also clone the repo and check out the tag from release 1.0 with the following command: git checkout 1.0.

I hope you enjoyed the post! Let me know if you have any comments about this.

Reference: Java EE 7 with Angular JS – Part 1 from our JCG partner Roberto Cortez at the Roberto Cortez Java Blog blog....

How to compose html emails in Java with Spring and Velocity

In this post I will present how you can format and send automatic emails with Spring and Velocity. On its own, Spring offers the capability to create simple text emails, which is fine for simple cases, but in a typical enterprise application you wouldn’t want to do that for a number of reasons:

- creating HTML-based email content in Java code is tedious and error prone
- there is no clear separation between display logic and business logic
- changing the display structure of the email requires writing Java code, recompiling, redeploying etc.

Typically the approach taken to address these issues is to use a template library such as FreeMarker or Velocity to define the display structure of email content. For Podcastpedia I chose Velocity, which is a free open source Java-based templating engine from Apache. In the end my only coding task will be to create the data that is to be rendered in the email template and to send the email. I will base the demonstration on a real scenario from Podcastpedia.org.

Scenario

On Podcastpedia.org’s Submit podcast page, we encourage our visitors and podcast producers to submit their podcasts to be included in our podcast directory. Once a podcast is submitted, an automatic email will be generated to notify me (adrianmatei [AT] gmail DOT com ) and the Podcastpedia personnel ( contact [AT] podcastpedia DOT org) about it. Let’s see now how Spring and Velocity play together:

1. Prerequisites

1.1. Spring setup

“The Spring Framework provides a helpful utility library for sending email that shields the user from the specifics of the underlying mailing system and is responsible for low level resource handling on behalf of the client.”[1]

1.1.1. Library dependencies

The following additional jars need to be on the classpath of your application in order to be able to use the Spring Framework’s email library:

- The JavaMail mail.jar library
- The JAF activation.jar library

I load these dependencies with Maven, so here’s the configuration snippet from the pom.xml:

Spring mail dependencies

<dependency>
    <groupId>javax.mail</groupId>
    <artifactId>mail</artifactId>
    <version>1.4.7</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>jaf</groupId>
    <artifactId>activation</artifactId>
    <version>1.0.2</version>
    <scope>provided</scope>
</dependency>

1.2. Velocity setup

To use Velocity to create your email template(s), you will need to have the Velocity libraries available on your classpath in the first place. With Maven you have the following dependencies in the pom.xml file:

Velocity dependencies in Maven

<!-- velocity -->
<dependency>
    <groupId>org.apache.velocity</groupId>
    <artifactId>velocity</artifactId>
    <version>1.7</version>
</dependency>
<dependency>
    <groupId>org.apache.velocity</groupId>
    <artifactId>velocity-tools</artifactId>
    <version>2.0</version>
</dependency>

2. Email notification service

I defined the EmailNotificationService interface for email notification after a successful podcast submission. It has just one operation, namely to notify the Podcastpedia personnel about the proposed podcast.
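The interface itself is only linked in the Resources section at the end of the post; based on the implementation shown next, a minimal sketch of it might look like this:

public interface EmailNotificationService {

    // Notifies the Podcastpedia personnel about a newly proposed podcast.
    void sendSuggestPodcastNotification(SuggestedPodcast suggestedPodcast);
}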
The code below presents the EmailNotificationServiceImpl, which is the implementation of the interface mentioned above:

Java code to send the notification email

package org.podcastpedia.web.suggestpodcast;

import java.util.Date;
import java.util.HashMap;
import java.util.Map;

import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

import org.apache.velocity.app.VelocityEngine;
import org.podcastpedia.common.util.config.ConfigService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.mail.javamail.JavaMailSender;
import org.springframework.mail.javamail.MimeMessageHelper;
import org.springframework.mail.javamail.MimeMessagePreparator;
import org.springframework.ui.velocity.VelocityEngineUtils;

public class EmailNotificationServiceImpl implements EmailNotificationService {

    @Autowired
    private ConfigService configService;

    private JavaMailSender mailSender;

    private VelocityEngine velocityEngine;

    public void sendSuggestPodcastNotification(final SuggestedPodcast suggestedPodcast) {

        MimeMessagePreparator preparator = new MimeMessagePreparator() {

            @SuppressWarnings({ "rawtypes", "unchecked" })
            public void prepare(MimeMessage mimeMessage) throws Exception {

                MimeMessageHelper message = new MimeMessageHelper(mimeMessage);
                message.setTo(configService.getValue("EMAIL_TO_SUGGEST_PODCAST"));
                message.setBcc("adrianmatei@gmail.com");
                message.setFrom(new InternetAddress(suggestedPodcast.getEmail()));
                message.setSubject("New suggested podcast");
                message.setSentDate(new Date());

                Map model = new HashMap();
                model.put("newMessage", suggestedPodcast);

                String text = VelocityEngineUtils.mergeTemplateIntoString(
                        velocityEngine, "velocity/suggestPodcastNotificationMessage.vm", "UTF-8", model);
                message.setText(text, true);
            }
        };

        mailSender.send(preparator);
    }

    // getters and setters omitted for brevity

}

Let’s go a little bit through the code now:

2.1. JavaMailSender and MimeMessagePreparator

The org.springframework.mail package is the root-level package for the Spring Framework’s email support. The central interface for sending emails is the MailSender interface, but we are using the org.springframework.mail.javamail.JavaMailSender interface, which adds specialized JavaMail features such as MIME message support to the MailSender interface (from which it inherits). JavaMailSender also provides a callback interface for the preparation of JavaMail MIME messages, called org.springframework.mail.javamail.MimeMessagePreparator.

2.2. MimeMessageHelper

Another helpful class when dealing with JavaMail messages is the org.springframework.mail.javamail.MimeMessageHelper class, which shields you from having to use the verbose JavaMail API. As you can see, by using the MimeMessageHelper it becomes pretty easy to create a MimeMessage:

Usage of MimeMessageHelper

MimeMessageHelper message = new MimeMessageHelper(mimeMessage);
message.setTo(configService.getValue("EMAIL_TO_SUGGEST_PODCAST"));
message.setBcc("adrianmatei@gmail.com");
message.setFrom(new InternetAddress(suggestedPodcast.getEmail()));
message.setSubject("New suggested podcast");
message.setSentDate(new Date());

2.3. VelocityEngine
The next thing to note is how the email text is being created:

Create email text with a Velocity template

Map model = new HashMap();
model.put("newPodcast", suggestedPodcast);

String text = VelocityEngineUtils.mergeTemplateIntoString(
        velocityEngine, "velocity/suggestPodcastNotificationMessage.vm", "UTF-8", model);
message.setText(text, true);

- the VelocityEngineUtils.mergeTemplateIntoString method merges the specified template (suggestPodcastNotificationMessage.vm, present in the velocity folder on the classpath) with the given model (“newPodcast”), which is a map containing model names as keys and model objects as values
- you also need to specify the velocityEngine you work with and, finally, the result is returned as a string

2.3.1. Create the Velocity template

You can see below the Velocity template that is being used in this example. Note that it is HTML-based, and since it is plain text it can be created using your favorite HTML or text editor.

Velocity template

<html>
<body>
<h3>Hi Adrian, you have a new suggested podcast!</h3>

<p>
    From - ${newMessage.name} / ${newMessage.email}
</p>

<h3>Podcast metadataline</h3>
<p>
    ${newMessage.metadataLine}
</p>

<h3>With the message</h3>
<p>
    ${newMessage.message}
</p>

</body>
</html>

2.4. Beans configuration

Let’s see how everything is configured in the application context:

Email service configuration

<!-- ********************************* email service configuration ******************************* -->
<bean id="smtpSession" class="org.springframework.jndi.JndiObjectFactoryBean">
    <property name="jndiName" value="java:comp/env/mail/Session"/>
</bean>

<bean id="mailSender" class="org.springframework.mail.javamail.JavaMailSenderImpl">
    <property name="session" ref="smtpSession" />
</bean>

<bean id="velocityEngine" class="org.springframework.ui.velocity.VelocityEngineFactoryBean">
    <property name="velocityProperties">
        <value>
            resource.loader=class
            class.resource.loader.class=org.apache.velocity.runtime.resource.loader.ClasspathResourceLoader
        </value>
    </property>
</bean>

<bean id="emailNotificationServiceSuggestPodcast" class="org.podcastpedia.web.suggestpodcast.EmailNotificationServiceImpl">
    <property name="mailSender" ref="mailSender"/>
    <property name="velocityEngine" ref="velocityEngine"/>
</bean>

- the JavaMailSender has a JNDI reference to an SMTP session; a generic example of how to configure an email session with a Google account can be found in the Jetty9-gmail-account.xml file
- the VelocityEngineFactoryBean is a factory that configures the VelocityEngine and provides it as a bean reference
- the ClasspathResourceLoader is a simple loader that will load templates from the classpath

Summary

You’ve learned in this example how to compose html emails in Java with Spring and Velocity. All you need are the mail, Spring and Velocity libraries; compose your email template and use those simple Spring helper classes to add metadata to the email and send it.
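To round things off, here is a minimal sketch (not from the original post) of how a Spring MVC controller handling the Submit podcast form might invoke this service; the controller class, request mapping and view names are assumptions:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.ModelAttribute;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

@Controller
public class SuggestPodcastController {

    @Autowired
    private EmailNotificationService emailNotificationService;

    // Hypothetical form-submission handler that triggers the notification email.
    @RequestMapping(value = "/suggest-podcast", method = RequestMethod.POST)
    public String submitSuggestedPodcast(@ModelAttribute SuggestedPodcast suggestedPodcast) {
        emailNotificationService.sendSuggestPodcastNotification(suggestedPodcast);
        return "redirect:/suggest-podcast/thank-you";
    }
}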
Resources

Source code – GitHub repositories

Podcastpedia-web
- org.podcastpedia.web.suggestpodcast.EmailNotificationService.java – Java interface for email notification
- org.podcastpedia.web.suggestpodcast.EmailNotificationServiceImpl.java – Java implementation of the interface
- main / resources / suggestPodcastNotificationMessage.vm – Velocity template
- src / main / resources / config / Jetty9-gmail-account.xml – example email session configuration for a Gmail account

Podcastpedia-common
- src / main / resources / spring / pcm-common.xml – email related bean configuration in the Spring application context

Web
- Spring Email integration
- Apache Velocity Project

Reference: How to compose html emails in Java with Spring and Velocity from our JCG partner Adrian Matei at the Codingpedia.org blog....

Applying S.T.O.P. To Software Development

The acronym STOP (or STOPP) is used by several organizations (United States Army, Hunter’s Ed, Mountain Rescue, Search and Rescue, Boy Scouts of America), often for describing how to cope with wilderness survival situations or other situations when one is lost (especially outdoors). The “S” typically stands for “Stop” (some say it stands for “Sit”), the “T” stands for “Think” (some say “Take a Breath”), the “O” stands for “Observe”, and the “P” stands for “Plan” (some say it stands for “Prepare”). When there is a second “P”, that typically stands for “Proceed.” In other words, the best approach to use for wilderness survival is to stop, think, observe, and plan before proceeding (taking action). Proceeding without a plan based on thinking and observation is rarely a good idea for those in a survival situation. Our approaches to developing software and fixing issues with existing software can benefit from the general guidance STOP provides. In this, my 1000th blog post, I look at applying the principles of STOP to software development.

Developing New Software

For many of us who consider ourselves software developers, programmers, or even software engineers, it can be difficult to ignore the impulse to jump right in and write some code. This is especially true when we’re young and relatively inexperienced with the costs associated with that approach. These costs can include bad (or no) overall design and spaghetti code. Code written with this approach often suffers from “stream of consciousness programming” syndrome, in which the code comes out in the way one is thinking it. The problem with “stream of consciousness” programming is that it may only be coherent to the author at that particular moment and not later when outside of that “stream of consciousness.” It is likely not to be coherent to anyone else. By first considering at least at a high level how to organize the code, the developer is more likely to build something that he or she and others will understand later. At some point, we all write lines of code based on our “stream of consciousness,” but that’s much more effective if it’s implementing a small number of lines in well-defined methods and classes. When implementing a new feature, the software developer generally benefits from taking the following general steps:

Stop:
- Be patient and don’t panic. Don’t allow schedule pressure to force you into hasty decisions that may not save any time in the long run and can lead to problematic code that you and others will have to deal with over the long run.
- Gather available facts such as what the desired functionality is (customer requirements or expressed desires).

Think:
- Consider what portions of the desired new feature might already be provided.
- Consider alternative approaches that might be used to implement the desired feature.
- Consider which existing tools, libraries, and people are already available that might satisfy the need or help satisfy it.
- Consider design and architectural implications related to existing functionality and likely potential future enhancements.

Observe:
- Confirm available existing tools and libraries that might be used to implement the new feature or which could be expanded to work with the new feature.
- If necessary, search for blogs, forums, and other sources of information on approaches, libraries, and tools that might be used to implement the new feature.
- Use others’ designs and code as inspiration for how to implement similar features to what they implemented (or, in some cases, how not to implement a similar feature).

Plan:
- “Design” the implementation. In simpler cases, this may simply be a mental step without any formal tools or artifacts.
- If adhering to test-driven development principles, plan the tests to write first. Even if not strictly applying TDD, make testability part of your consideration of how to design the software.
- Allocate/estimate the time needed to implement or the effort needed to accomplish this, even if you call it by a fancy name such as Story Points.

Proceed:
- Implement and test and implement and test functionality.
- Get feedback on implemented functionality from customers and other stakeholders and repeat the cycle as necessary.

The above are just some of the practices that can go into applying the STOP principle to new software development. There are more that could be listed. These steps, especially for simpler cases, might take just a few minutes to accomplish, but those extra few minutes can lead to more readable and maintainable code. These steps can also prevent pollution of an existing baseline and can, in some cases, be the only way to get to a “correct” result. More than once, I have found myself undoing a bunch of stream of consciousness programming (or doing some significant code changing) because I did not apply these simple steps before diving into coding.

Fixing and Maintaining Software

When fixing a bug in the software, it is very easy to make the mistake of fixing a symptom rather than the root cause of the problem. Fixing the symptom might bring the short-term benefit of addressing an obviously wrong behavior or outcome, but it often hides a deeper problem that may manifest itself with other negative symptoms or, even worse, might contribute to other undetected but significant problems. Applying STOP to fixing bugs can help address these issues.

Stop:
- Be patient and don’t panic. Don’t allow schedule pressure to force you into merely covering up a potentially significant problem. Although a bad enough (in terms of financial loss or loss of life) problem may require you to quickly address the symptom, ensure that the root cause is addressed in a timely fashion as well.

Think:
- Consider anything you or your team recently added to the baseline that may have introduced this bug or that may have revealed a pre-existing bug.
- Consider the effects/costs of this bug and determine where fixing the bug falls in terms of priority.
- Consider whether this bug could be related to any other issues you’re already aware of.
- Consider whether this “bug” is really a misunderstood feature or a user error before fixing something that wasn’t broken.

Observe:
- Evaluate appropriate pieces of evidence to start determining what went wrong. These might be one or more of the following: reading the code itself and thinking through its flows, logs, a debugger, tools (IDE warnings and hints; Java examples include VisualVM, jstack, jmap, JConsole), application output, the defect description, etc.
- Building on the Thinking step:
  - Ensure that unit tests and other tests ran without reporting any breakage or issue.
  - Evaluate revision history in your configuration management system to see if anything looks suspicious in terms of being related to the bug (same class or dependency class changed, for example).
  - Evaluate whether any existing bugs/JIRAs in your database seem to be related.
Even resolved defects can provide clues as to what is wrong or may have been reintroduced.

Plan:
- Plan new unit test(s) that can be written to find this type of defect in the future in case it is reintroduced at some point, and as a part of your confirmation that you have resolved the defect.
- Plan/design the solution for this defect. At this point, if the most thorough solution is considered prohibitively expensive, you may need to choose a cheaper solution, but you are doing so based on knowledge and a deliberate decision rather than just doing what’s easiest and sweeping the real problem under the rug. Document in the DR’s/JIRA’s resolution that this decision was made and why it was made.
- Plan for schedule time to implement and test this solution.

Proceed:
- Implement and test and implement and test functionality.
- Get feedback on implemented functionality from customers and other stakeholders and repeat the cycle as necessary.

There are other tactics and methodologies that might be useful in resolving defects in our code as part of the STOP approach. The important thing is to dedicate at least a small amount of time to really thinking about the problem at hand before diving in and ending up, in some cases, “fixing” the problem multiple times until the actual and real problem (the root cause) is really fixed.

Conclusion

Most software developers have a tendency to dive right in and implement a new feature or fix a broken feature as quickly as possible. However, even a small amount of time applying S.T.O.P. in our development process can bring benefits of more efficiency and a better product. Stopping to think, observe, and plan before proceeding is as effective in software development as it is in wilderness survival. Although the stakes often aren’t as high in software development as they are in wilderness survival, there is no reason we cannot still benefit from remembering and adhering to the principles of S.T.O.P.

Reference: Applying S.T.O.P. To Software Development from our JCG partner Dustin Marx at the Inspired by Actual Events blog....

Caching Architecture (Adobe AEM) – Part 1

Cache (as defined by Wikipedia) is a component that transparently stores data such that future requests for that data can be served faster. I presume that you understand cache as a component and the architectural patterns around caching, and with this presumption I will not go into the depths of caching in this article. This article will cover some of the very basic fundamentals of caching (wherever relevant) and then take a deep dive into a point-of-view on caching architecture for a Content Management Platform in the context of Adobe’s AEM implementation.

Problem Statement

Principles for high performance and high availability don’t change, but for conversation’s sake let’s assume we have a website where we have to meet the following needs:

- 1 billion hits on a weekend (a hit is defined as a call to a resource and includes static resources like CSS, JS, images, etc.)
- 700 million hits in a day
- 7.2 million page views in a day
- 2.2 million page views in an hour
- 80K hits in a second
- 40K page views in a minute
- 612 page views in a second
- 24×7 site availability
- 99.99% uptime
- Content availability to consumers in under 5 minutes from the time editors publish content

While the data looks steep, the use case is not an uncommon one. In the current world where everyone is moving to devices and digital, there will be cases when brands are running campaigns. When those campaigns are running there will be a need to support such steep loads. These loads don’t stay for long, but when they come they come fast and they come thick, and we will have to support them. For the record, this is not some random theory I am writing; I have had the opportunity of being on a project (I can’t name it) where we supported similar numbers. The use case I picked here is of a Digital Media Platform where a large portion of the content is static, but the principles I am going to talk about here will apply to any other platform or application. The problems that we want to solve here are:

- Performance: Caching is a pattern that we employ to increase the overall performance of the application by storing the (processed) data in a store that is a) closest to the consumer of the data and b) accessible quickly
- Scalability: In cases where we need to make the same data-set available to various consumers of the system, caching as a pattern makes it possible for us to scale the systems much better. Caching, as we discussed earlier, allows us to have processed data, which takes away the need to run the same processing time and again, and that facilitates scalability
- Availability: Building on principles similar to scalability, caching allows us to put data in places where systems/components can survive outages, be it of the network or other components. While it may lead to surfacing stale data at points, the systems are still available to the end users.

Adobe AEM Perspective

Adobe AEM as an application container (let’s remember AEM is not built on top of a true application container, though you can deploy it in one) has its own nuances with scalability. In this article I will not dive into the scalability aspects, but as you scale the AEM publishers horizontally it leads to an increase in several concerns around operations, licensing, cost, etc. The OOTB architecture and components we get with Adobe AEM themselves tell you to make use of cache on the web servers (using the dispatcher). However, when you have to support the Non Functional Requirements (NFRs) I listed above, the standard OOTB architecture falls short without a massive infrastructure.
We can’t just set up CQ with some backend servers and an Apache front end with a local cache, throw hardware and capacity at it and hope it will come together magically. As I explained, there is no magic in this world and everything needs to happen via some sort of science. Let’s put in perspective the fact that a standard Apache web server can handle a few thousand requests in a second, while here you need to handle 80K hits in a second, which include resources like HTML, JS, CSS, images, etc. with a variety of sizes. Without going into the sizing aspects, it is pretty clear that you would need not just a cluster of servers but a farm of servers to cater to all that traffic. With a farm of servers, you get yourself a nightmare of setting up an organization and processes around operations and maintenance to ensure that you keep the site up and running 24×7.

Solution

Cache, cache and cache

Definitions

Before we dive into the design here, we need to understand some key aspects of caching. These concepts will be talked about in the POV below and it is critical that you understand these terminologies clearly (a short illustrative code sketch of these concepts follows at the end of this section).

- Cache miss refers to a scenario when a process requests data from a cache store and the object does not exist in the store
- Cache hit refers to a scenario when a process requests data from a cache store and the data is available in the store. This event can only happen when a cache object for the request has been primed
- Cache prime is a term associated with the process where we fill up the caching storage with data. There are two ways in which this can be achieved:
  - Pre-prime is the method where we run a process (generally at the startup of the application) proactively to load all the various objects whose state we are aware of and can cache. This can be achieved either by using an asynchronous process or by an event notification
  - OnDemand-prime is a method where the cache objects are primed in real time, i.e. when a process which needs the resource does not find it in the cache store, the cache is primed up as the resource is served back to the process itself
- Expiration is a mechanism that allows the cache controller to remove objects from memory based on either a time duration or an event:
  - TTL, known as Time To Live, defines a time duration for which data can live on a computer. In the caching world this is a common expiration strategy where a cache object is expired (flagged to be evicted if needed) based on a time duration provided when the cache object is created
  - Event-based (I can’t find a standard naming convention for this) cache expiration is a method where we can fire an event to mark a cache object as expired, so that if needed it can be evicted from memory to make way for new objects or re-primed on the next request
- Eviction is the mechanism by which cache objects are removed from the cache memory to optimize for space

The Design // Where Data Should be Cached

This point-of-view builds on top of an architecture designed by a team which I was a part of (I was the load architect) for a digital media platform. The NFRs we spoke about earlier are the ones that we have successfully supported on the platform during a weekend as part of a campaign the brand was running. Also, since then we continue to support very high traffic weeks every time there are such events and campaigns. The design that we have in place takes care of various layers of caching in the architecture.
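Before walking through those layers, here is a minimal, illustrative Java sketch of the definitions above — cache miss, cache hit, on-demand priming, TTL expiration and eviction. It is a toy example for illustration only, not code from the platform described here:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// A toy TTL cache: illustrative only, not production code.
public class TtlCache<K, V> {

    private static class Entry<V> {
        final V value;
        final long expiresAtMillis;
        Entry(V value, long expiresAtMillis) {
            this.value = value;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<K, Entry<V>> store = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public TtlCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    // On-demand prime: on a cache miss (or an expired entry) the loader is called
    // and the result is primed into the store; otherwise it is a cache hit.
    public V get(K key, Function<K, V> loader) {
        Entry<V> entry = store.get(key);
        long now = System.currentTimeMillis();
        if (entry == null || now >= entry.expiresAtMillis) {
            // cache miss (or TTL expired): evict the stale entry and re-prime
            store.remove(key);
            V value = loader.apply(key);
            store.put(key, new Entry<>(value, now + ttlMillis));
            return value;
        }
        // cache hit: serve the previously primed value
        return entry.value;
    }

    public static void main(String[] args) throws InterruptedException {
        TtlCache<String, String> cache = new TtlCache<>(5_000); // 5 second TTL ("Platinum"-style tier)
        // first call is a miss and primes the cache; second call within the TTL is a hit
        System.out.println(cache.get("homePage", key -> "rendered-" + key));
        System.out.println(cache.get("homePage", key -> "re-rendered-" + key));
        Thread.sleep(6_000);
        // after the TTL the entry is expired, so this call misses and re-primes
        System.out.println(cache.get("homePage", key -> "re-rendered-" + key));
    }
}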
Local clients

When we talk about such a high traffic load, we must understand that this traffic has certain characteristics that work in our favor:

- All this traffic is generated by a smaller subset of people who access the site. In our case, when we say that we serve 1 billion hits on a single weekend, it is worthwhile to note that there are only 300,000 visitors on the site on that day who generate this much load
- A very large portion of all this traffic is static in nature (this is also one of the characteristics of a Digital Media Platform) // these consist of resources like JavaScript, CSS and also media assets like images and videos. These are files which, once deployed, seldom change or change with a release that doesn’t happen every day
- As the users interact with the site/platform, there is content and data which maps back to their profile and preferences; it does not necessarily change frequently and it is managed directly and only by the users themselves

With these characteristics in play, there are things which right off the bat we can cache on the client’s machine, i.e. the browser (or device). The mime-types which qualify for client-side caching would be images, fonts, CSS and JavaScript. Some of these can be cached infinitely while others should be cached for a medium’ish duration like a couple of days and what have you.

Akamai (CDN)

Content Delivery Networks (CDN) are service providers that enable serving content to end consumers with high performance and availability. To know more about CDN networks, you can read here. In the overall architecture the CDN plays a very critical role. Akamai, AWS’s CloudFront and CloudFlare are some of the CDN providers with which we integrate very well. The CDN, above other things, provides us a highly available architecture. Some of these CDNs provide you the ability to configure your environment such that if the origin servers (pointing to your data center) are unavailable, they continue to serve content for a limited time from their local cache. This essentially means that while the backend services may be down, the consumer-facing site is never down. Some aspects of the platform, like content delivery and the activation of new publications, are affected, and in certain cases like breaking news we may have an impact on an SLA, but the consumers/end users never see your site as unavailable. In our architecture we use the CDN to cache all the static content, be it HTML pages, images or videos. Once static content is published via the Content Delivery Network, those pages are cached on the CDN for a certain duration. These durations are determined based on the refresh duration, but the underlying philosophy is to break the content into tiers of Platinum, Gold, Silver and then assign the duration for which each of these would be cached. On a platform like NFL where we are, say, pushing game feeds, these need to be classified as Platinum and they had a TTL of 5 seconds, while content types like the Home Page, News (not breaking news) etc. have a TTL of 10 minutes and have been classified as Gold. Then on the same project we have a TTL of 2 hours (or so) for sections like search, which have been classified as Bronze. The intent was to identify and classify, if not all, most of the key sections and ensure that we leverage the CDN cache effectively.
We have observed that even for shorter TTLs like Platinum, with an increase/spike in traffic the offload percentage (defined as the number of hits served by the CDN versus the number of hits sent to the backend) grows, and it touched a peak of 99.9% where the average offload percentage is around 98%.

Varnish

Varnish is a web accelerator which (if I may classify it as such) is a web server on steroids. If you are hearing its name for the first time, I strongly urge you to hop over here to get to know more about it. We had introduced Varnish as a layer to solve for the following:

- Boost performance (reduce the number of servers) – We have realized that Varnish, being an in-memory accelerator, gives you a boost of anywhere between 5x-10x over using Apache. This basically means that you can handle several times the load with Varnish sitting on top of Apache. We had done rigorous testing to prove these numbers out. The x-factor was mostly dependent on the page size, aka the amount of content we loaded over the network
- Avoid DoS attacks – We had realized that in cases where you see a large influx of traffic coming into your server (directed and intentional, or arbitrary) and you want to block all such traffic, your chances of successfully blocking the traffic on Varnish without bringing down the server, compared to doing the same on Apache, increase many fold. We also use Varnish as a mechanism to block any traffic that we don’t want to hit our infrastructure, such as spiders and bots from markets and regions not targeted by the campaigns we run
- Avoid the dog-pile effect – If you are hearing this term for the first time, then hop over here: hype-free: Avoiding the dogpile effect. In high traffic situations, and even when you have CDN networks set up, it is quite normal for your infrastructure to be hit by a dog-pile as cache expires. Chances of the dog-pile effect increase as you go to lower TTLs. Using Varnish we have set up something that we call a Grace Configuration, where we don’t allow requests for the same URL to pass through. These are queued, and after a certain while, if the primary request is still not getting through, subsequent requests are served off of stale cache.

Apache

If you haven’t heard about the Apache Web Server, you might have heard about httpd. If none of these ring a bell, then this (Welcome! – The Apache HTTP Server Project) will explain things. AEM’s answer to scale is what sits on this layer and is famously known as the Dispatcher. This is a neat little module which can be installed on the Apache HTTP server and acts as a reverse proxy with a local disk cache. This module only supports one model of cache eviction, which is event based. We can configure either the authoring or the publishing systems to send events for deleting and invalidating cache files on these servers, in which case the next call on this server will be passed back to the publisher. The simplest of the models in the AEM world, and also the one recommended by some at Adobe, is to let everything invalidate (set statlevelfile = 0 or 1). This design simplifies the page/component design as now we don’t have to figure out any inter-component dependencies. While this is the simplest thing to do, when we have to support such complex needs it calls for some sophistication in design and setups. I would recommend that this is not the right way to go, as it reduces the cache usage.
We made sure that the site hierarchy is such that when content is published we never invalidate the entire site hierarchy; only relevant and contextual content is what gets evicted (invalidated in the case of the dispatcher).

Publisher

The AEM publishing layer, which is the last layer in this layer cake, seems like something which should be the simplest and all figured out. That’s not the case. This is where you can be hit most (and it will be below the belt). AEM’s architecture is designed to work in a specific way and if you deviate from it, you are bound to fall into this trap. There are two things you need to be aware of:

- When we start writing components that are heavily dependent on queries, it will eventually lead to the system crumbling. You should be very careful with AEM queries (which depend on AEM’s underlying Lucene implementation). This article tells us that we have about four layers of caching before anything should hit the publisher. This means that the number of calls that should ever hit this layer is only a minuscule number. From here on, you need to establish how many calls your servers will receive in a second/minute. We have seen, in cases where we have used search heavily, that AEM’s supported TPS takes a nose dive. I have instances across multiple projects where this number is lower than 5 transactions per second.
- The answer is to build some sort of an application cache, as we used to do in a typical JEE application (a minimal illustrative sketch follows right after this article). This will solve these issues, assuming that content creation, either manually by authors or via ingestion, is limited, which means the load we put on search can be reduced significantly. The caveat you should be aware of is that we are adding one more layer of cache which is difficult to manage, and if you have a cluster of publishers this is one layer which will have a distributed cache across servers and can lead to old pages being cached on the dispatcher. The chances of that happening will increase as the number of calls coming into the publishers increases or as the number of servers in the cluster increases.

Reference: Caching Architecture (Adobe AEM) – Part 1 from our JCG partner Kapil Viren Ahuja at the Scratch Pad blog....
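Building on the application-cache suggestion in the Publisher section above, here is a minimal, illustrative sketch of wrapping an expensive query behind a small, bounded, time-expiring cache. It assumes Guava is on the classpath; the SearchService interface and the chosen sizes/TTL are hypothetical stand-ins, not code from the platform described in the article:

import java.util.List;
import java.util.concurrent.TimeUnit;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

// Hypothetical wrapper around an expensive query service; illustrative only.
public class CachedSearchService {

    // stand-in for whatever component actually runs the repository query
    public interface SearchService {
        List<String> search(String queryTerm);
    }

    // bounded, time-expiring application cache so repeated searches don't hit the repository
    private final LoadingCache<String, List<String>> resultCache;

    public CachedSearchService(final SearchService delegate) {
        this.resultCache = CacheBuilder.newBuilder()
                .maximumSize(1_000)                      // cap memory use
                .expireAfterWrite(10, TimeUnit.MINUTES)  // align with the page tier's TTL
                .build(new CacheLoader<String, List<String>>() {
                    @Override
                    public List<String> load(String queryTerm) {
                        // cache miss: run the real (expensive) query once and prime the cache
                        return delegate.search(queryTerm);
                    }
                });
    }

    public List<String> search(String queryTerm) {
        // cache hit for repeated terms; misses fall through to the loader above
        return resultCache.getUnchecked(queryTerm);
    }
}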

JSR 303 loading messages from an I18N property file

Overview

This article will illustrate how to adapt the JSR 303 validation API to load messages from an I18N property file, while conserving all the benefits of internationalisation and support for multiple languages. To achieve this we are going to implement a custom MessageInterpolator which will be based upon the Spring API for managing I18N messages.

Dependencies

Below are the required Maven dependencies to make this work; the Javax validation and Hibernate validation dependencies are not listed here:

<dependencies>
  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
    <version>4.0.0.RELEASE</version>
  </dependency>
  <dependency>
    <groupId>org.springframework.webflow</groupId>
    <artifactId>spring-binding</artifactId>
    <version>2.3.2.RELEASE</version>
  </dependency>
</dependencies>

Configuration of MessageSource

The first step is the configuration of the MessageSource bean, which is responsible for scanning and indexing the content of the properties files.

<bean id="messageSource" class="org.springframework.context.support.ResourceBundleMessageSource">
  <property name="defaultEncoding" value="UTF-8"/>
  <property name="basenames">
    <list>
      <value>com.myproject.i18n.MyMessages</value>
      <value>com.myproject.i18n.ErrorMessages</value>
    </list>
  </property>
</bean>

MyMessages and ErrorMessages are the properties files we wanted to scan; the file names follow the conventions for multiple languages. For example, if our application must support English and French then we should have: MyMessages_en.properties and MyMessages_fr.properties.

Custom MessageInterpolator

In this custom MessageInterpolator we redefine the way JSR 303 resolves the messages to display: we provide a custom implementation which uses Spring’s MessageSource and the MessageBuilder to look up and prepare the message to be displayed.

import java.util.Locale;

import javax.validation.MessageInterpolator;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.binding.message.MessageBuilder;
import org.springframework.context.MessageSource;

public class SpringMessageInterpolator implements MessageInterpolator {

    @Autowired
    private MessageSource messageSource;

    @Override
    public String interpolate(String messageTemplate, Context context) {
        String[] params = (String[]) context.getConstraintDescriptor().getAttributes().get("params");

        MessageBuilder builder = new MessageBuilder().code(messageTemplate);
        if (params != null) {
            for (String param : params) {
                builder = builder.arg(param);
            }
        }

        return builder.build().resolveMessage(messageSource, Locale.FRANCE).getText();
    }

    @Override
    public String interpolate(String messageTemplate, Context context, Locale locale) {
        String[] params = (String[]) context.getConstraintDescriptor().getAttributes().get("params");

        MessageBuilder builder = new MessageBuilder().code(messageTemplate);
        if (params != null) {
            builder = builder.args(params);
        }

        return builder.build().resolveMessage(messageSource, locale).getText();
    }
}

Usage on a custom JSR 303 annotation

Let’s say that we create a new JSR 303 validation annotation, which will check that a field is not blank. To use the custom Spring message interpolator, we need to declare a message in one of the properties files loaded by the Spring MessageSource; let’s declare it in ErrorMessages.properties:

{com.myproject.validation.NotBlank} Mandatory field

Best practice is to name the message key after the fully qualified class name of our validation annotation. You are free to choose any key name you want, but it must be between the brackets {} to work.
Our custom annotation will look like below:

@Target({ElementType.METHOD, ElementType.FIELD, ElementType.ANNOTATION_TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Constraint(validatedBy = NotBlankValidator.class)
public @interface NotBlank {

    String message() default "{com.myproject.validation.NotBlank}";

    Class<?>[] groups() default {};

    String[] params() default {};

    Class<? extends Payload>[] payload() default {};
}

Please verify that the default value of the message attribute is the same as the key you put in the property file. That’s it: now you can use the annotation as you normally do, and if you don’t provide a hardcoded message it will be loaded from the property file if it is declared there.

Reference: JSR 303 loading messages from an I18N property file from our JCG partner Idriss Mrabti at the Fancy UI blog....
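The article references NotBlankValidator as the constraint’s validator but doesn’t show it; a minimal sketch of what such a ConstraintValidator might look like (this is an assumption, not the author’s original class):

import javax.validation.ConstraintValidator;
import javax.validation.ConstraintValidatorContext;

public class NotBlankValidator implements ConstraintValidator<NotBlank, String> {

    @Override
    public void initialize(NotBlank constraintAnnotation) {
        // no configuration needed for this simple check
    }

    @Override
    public boolean isValid(String value, ConstraintValidatorContext context) {
        // reject null and whitespace-only values
        return value != null && !value.trim().isEmpty();
    }
}

Likewise, the custom interpolator still has to be plugged into the Bean Validation bootstrap. One way of doing that — assuming Spring’s LocalValidatorFactoryBean is used to create the validator, which the article does not show — is sketched below:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.validation.beanvalidation.LocalValidatorFactoryBean;

@Configuration
public class ValidationConfig {

    // declare the interpolator as a bean so its @Autowired MessageSource gets injected
    @Bean
    public SpringMessageInterpolator springMessageInterpolator() {
        return new SpringMessageInterpolator();
    }

    @Bean
    public LocalValidatorFactoryBean validator(SpringMessageInterpolator interpolator) {
        LocalValidatorFactoryBean factory = new LocalValidatorFactoryBean();
        factory.setMessageInterpolator(interpolator);
        return factory;
    }
}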