
Consumerization of IT – What it means for the Architect

Consumerization describes the trend for IT to emerge first in the consumer space and subsequently make its way into the enterprise world. But what exactly in the consumer world is making users demand similar things from enterprise IT? To understand the underlying need, we first need to examine the basic requirements of the user. Kathy Sierra, co-creator of the Head First series of books and founder of javaranch.com, describes a hierarchy of needs from the user's perspective. The needs are stacked in order of increasing engagement from the user: starting with the basic needs of defined functionality and its correctness, moving on to learnability, efficiency and usability, and finally culminating in intuitiveness and enchantment. Merely providing correctly working functionality does not guarantee the success of an application. To hook the user, the application needs to do something extra. Using the hierarchy of needs as background, we will see how applications in the consumer and enterprise worlds stack up. In the consumer world, the advent of Consumerization started with the proliferation of mobile devices.
In the connected world, the device (smartphone or tablet) imposed certain constraints:
• Screen size is small and limited
• Processing power of the device is limited
• Interface is touch enabled
• Connectivity is not constant (read: patchy) and speed/data is limited (no unlimited data plans)
• User audience demography is not known
Apps therefore needed to be designed with these constraints in mind, which meant that the apps are:
• Focused on providing only one piece of functionality
• Built around simple and intuitive messages/steps in the absence of any help/guides
• Providing simple interfaces (to cater to a worldwide audience)
• Uncluttered, as screen real estate is limited
• Using UI controls that are big and usable (touch interface)
• Built on intuitive workflows
• Engaging, to stand apart among the millions of other apps
This meant that using the functionality does not require the user to be a geek: the average individual user can understand and use the app without any external help. The majority of consumer apps are heavily focused on the efficiency and usability levels. A few apps have even been able to enchant and engage the user (e.g. Angry Birds). In the enterprise world, the device is most likely a powerful desktop/laptop with a large screen, and connectivity is via LAN or broadband. As a result, enterprise applications are more industrial, with a high focus on providing a long list of functionality and ensuring its correctness. Consumer-facing business applications usually focus on usability, and specialized enterprise applications (e.g. call center applications) focus on overall workflow efficiency, but the majority of applications are resource intensive, lack efficiency and have steep learning curves. Meanwhile, the enterprise user experiences simple, intuitive consumer apps on their mobile devices, and with the advent of BYOD (Bring Your Own Device), enterprise users started bringing their smart devices into the enterprise.
Soon users were comparing the experience of consumer apps with their enterprise business apps, wondering why the enterprise apps can't provide a similar experience. Why are the enterprise apps so low on the learnability, efficiency and intuitiveness factors? The whole idea behind the Consumerization of IT is not just measures like BYOD but bringing the missing intuitiveness, usability and efficiency into enterprise applications. So, what does Consumerization mean for the architect? When designing and architecting enterprise business applications, the following considerations need to be kept in mind:
• Connected users mean applications need to be available 24x7 (always on, always available)
• 24x7 availability requires application solutions to be elastic – able to expand or shrink based on the load
• Intuitiveness and usability have to be high on the agenda when designing interfaces and workflows
• Solutions need to offer APIs so that additional applications can be built on top
• Integration with other systems and applications (including SaaS) needs to be simple, straightforward and well documented
• User experience is the key to a successful business application (it always was), but it needs to be (re)designed for connected and mobile devices
• Product evaluations will now include SaaS applications that can provide the functionality
• SaaS adoption will increase, leading to new challenges in integration, data security and privacy
• Applications need to be designed and tested against various device/OS combinations (the days of designing apps that worked on IE6 only are over)
The impact of Consumerization on enterprise IT will be felt in the years to come; BYOD is just the harbinger of things to come. Reference: Consumerization of IT – what it means for the Architect from our JCG partner Munish K Gupta at the Tech Spot blog.

ADF Task Flow: Managed bean scopes for page fragments

Introduction
When we work with ADF Task Flows and need to implement some flow-specific business logic or store some information connected with the flow, we usually use pageFlowScope managed beans. And when we need to service view activities of the flow (pages or page fragments), we use shorter scopes for those managed beans. The common practice is to use the requestScope, backingBeanScope and viewScope scopes for page/fragment backing beans. In this post I'm going to play with these three options and discover the differences in the behavior of a fragment-based Task Flow. Let's say I have a simple task flow template task-flow-template.xml:

<managed-bean id="__5">
  <managed-bean-name id="__3">viewBean</managed-bean-name>
  <managed-bean-class id="__2">com.cs.blog.ViewBean</managed-bean-class>
  <managed-bean-scope id="__4">request</managed-bean-scope>
</managed-bean>
<managed-bean id="__15">
  <managed-bean-name id="__13">flowBean</managed-bean-name>
  <managed-bean-class id="__12">com.cs.blog.FlowBean</managed-bean-class>
  <managed-bean-scope id="__14">pageFlow</managed-bean-scope>
</managed-bean>
<view id="MainView">
  <page>/MainView.jsff</page>
</view>

It has one view activity MainView and two backing beans. The flowBean has pageFlow scope and is responsible for storing flow information. The viewBean has request scope (we will play with that) and it services the MainView view activity. The flowBean has the following method returning the title of the task flow:

public String getFlowTitle() {
    return null;
}

The viewBean has a string field testString to store an input value:

protected String testString;

public void setTestString(String testString) {
    this.testString = testString;
}

public String getTestString() {
    return testString;
}

The MainView shows the task flow's title and has an inputText for the testString. We also have two task flows built on the task-flow-template – first-flow-definition and second-flow-definition. They have overridden managed beans.
For the first-flow-definition:

<managed-bean id="__5">
  <managed-bean-name id="__3">viewBean</managed-bean-name>
  <managed-bean-class id="__21">com.cs.blog.FirstViewBean</managed-bean-class>
  <managed-bean-scope id="__4">request</managed-bean-scope>
</managed-bean>
<managed-bean id="__15">
  <managed-bean-name id="__13">flowBean</managed-bean-name>
  <managed-bean-class id="__12">com.cs.blog.FirstFlowBean</managed-bean-class>
  <managed-bean-scope id="__14">pageFlow</managed-bean-scope>
</managed-bean>

public class FirstFlowBean extends FlowBean {
    public FirstFlowBean() {
        super();
    }

    public String getFlowTitle() {
        return "FirstFlow";
    }
}

public class FirstViewBean extends ViewBean {
    public FirstViewBean() {
        super();
    }

    @PostConstruct
    public void init() {
        testString = "FirstFlow";
    }
}

So the title and the default value for testString are "FirstFlow". For the second-flow-definition:

<managed-bean id="__5">
  <managed-bean-name id="__3">viewBean</managed-bean-name>
  <managed-bean-class id="__21">com.cs.blog.SecondViewBean</managed-bean-class>
  <managed-bean-scope id="__4">request</managed-bean-scope>
</managed-bean>
<managed-bean id="__15">
  <managed-bean-name id="__13">flowBean</managed-bean-name>
  <managed-bean-class id="__12">com.cs.blog.SecondFlowBean</managed-bean-class>
  <managed-bean-scope id="__14">pageFlow</managed-bean-scope>
</managed-bean>

public class SecondFlowBean extends FlowBean {
    public SecondFlowBean() {
        super();
    }

    public String getFlowTitle() {
        return "SecondFlow";
    }
}

public class SecondViewBean extends ViewBean {
    public SecondViewBean() {
        super();
    }

    @PostConstruct
    public void init() {
        testString = "SecondFlow";
    }
}

So the title and the default value for testString are "SecondFlow". Ok. It's time to experiment.
Let's put two regions with the first-flow-definition and second-flow-definition task flows on our page:

<af:region value="#{bindings.firstflowdefinition1.regionModel}" id="r1"/>
<af:separator id="s1"/>
<af:region value="#{bindings.secondflowdefinition1.regionModel}" id="r2"/>

requestScope
Leaving the scope for the viewBean as requestScope, we get the following result: in the SecondFlow we see the testString from the FirstViewBean instance. We can have only one instance of a requestScope bean per request. The viewBean was created for the FirstFlow task flow and the same instance was used again for the SecondFlow.

backingBeanScope
Somebody could recommend using backingBeanScope for the viewBean instead of requestScope. The backingBeanScope is commonly used to manage regions and declarative components. It has the same lifespan as the requestScope, but for different instances of regions/declarative components you get separate instances of backingBean-scoped managed beans. In our case we have two different regions, so let's try. And yes, the backingBeanScope has fixed the problem: we have two instances of the viewBean – one for region r1 and one for r2. But let's make the first-flow-definition task flow a bit more complicated: now we can call a child task flow (of the same definition) from the MainView. Let's repeat the experiment. On the initial rendering, so far, so good. Let's input something in the input text of the FirstFlow and press "call child task flow": Oooops! We have only one instance of the viewBean for region r1 during the request, so the value "FirstFlow111111" entered in the parent task flow was rendered again in the child task flow.

viewScope
And now let's change the viewBean's scope to viewScope and run the same experiment. On the initial rendering, inputting the same garbage in the inputText and pressing "call child task flow": everything is ok.
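The scope behavior seen in these experiments can be modeled outside ADF with plain maps. This is a simplified sketch of the idea, not the real ADF implementation: requestScope holds one instance per request, backingBeanScope keys instances by region, and viewScope keys them by viewport and is cleared on navigation.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified model of how the three scopes resolve a bean named "viewBean"
// within a single request. All names here are illustrative.
public class ScopeModel {
    // One shared slot per request: every region sees the same instance.
    private final Map<String, Object> requestScope = new HashMap<>();
    // Keyed by region id: regions r1 and r2 get separate instances.
    private final Map<String, Object> backingBeanScope = new HashMap<>();
    // Keyed by viewport id: separate per region AND per task flow instance.
    private final Map<String, Object> viewScope = new HashMap<>();

    public Object fromRequestScope() {
        return requestScope.computeIfAbsent("viewBean", k -> new Object());
    }

    public Object fromBackingBeanScope(String regionId) {
        return backingBeanScope.computeIfAbsent(regionId + ":viewBean", k -> new Object());
    }

    public Object fromViewScope(String viewportId) {
        return viewScope.computeIfAbsent(viewportId + ":viewBean", k -> new Object());
    }

    // The controller resets the viewScope when the viewport navigates,
    // e.g. when the parent flow calls a child task flow.
    public void navigate(String viewportId) {
        viewScope.keySet().removeIf(key -> key.startsWith(viewportId + ":"));
    }
}
```

In this model, the requestScope lookup returns the same instance for both regions (the FirstFlow/SecondFlow problem), the backingBean lookup separates r1 from r2 but not a parent flow from its child in the same region, and only the viewport-keyed map with the reset on navigation reproduces the behavior of viewScope.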
We not only have separate viewScope bean instances for different viewport IDs (for different regions and task flow instances); the controller additionally resets the viewScope during the navigation process. But the cheese is not free: you have to pay with memory. While requestScope and backingBeanScope live no longer than the request, viewScope stays in memory until the viewport ID changes. Perhaps in further posts I will show how to manage the issue with the backingBeanScope. So, when you choose the appropriate scope for your fragment managed beans, consider how the task flow is going to be used. To get a truly reusable task flow, using the viewScope is probably the best approach for fragment beans. That's it! Reference: Managed bean scopes for page fragments in ADF Task Flow from our JCG partner Eugene Fedorenko at the ADF Practice blog.

Google AppEngine: Task Queues API

Task Queues (com.google.appengine.api.taskqueue)
With Task Queues a user can initiate a request to have the application perform work outside of that request; they are a powerful tool for background work. Furthermore, you can organize work into small, discrete units (tasks). The application inserts these tasks into one or more queues based on the queue's configuration, and they are processed in FIFO order. Here's a diagram I took from a Google I/O presentation which illustrates task insertion into a queue at a high level.

Queue Configuration
1. Push Queues (default): a push queue processes tasks based on the processing rate configured in the queue definition (see below). App Engine automatically manages the lifetime of these queues (creation, deletion, etc.) and adjusts the processing capacity to match your configuration and processing volume. These can only be used within App Engine (internal to your app).
2. Pull Queues: allow a task consumer to lease tasks at a specific time within a specific timeframe. They are accessible internally as well as externally through the Task Queue REST API. In this scenario, however, GAE does not manage the lifecycle and processing rate of queues automatically; it is up to the developer to do it. A backend also has access to these queues.

Tasks
Tasks represent a unit of work performed by the application. Tasks are idempotent, i.e. they are unique in a queue and, according to the Google documentation, cannot be invoked more than once simultaneously (unless some weird internal error condition happens). Instances of the TaskOptions class, tasks consist of a URL and a payload which can be a simple string, a binary object (byte[]), or an instance of a DeferredTask. A DeferredTask is basically a Runnable. This allows you to chain tasks together. Our team had to do this in order to simulate long-running tasks when GAE's max execution limit was 30 seconds.
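The chaining trick mentioned above — splitting a long job into tasks that each enqueue their successor — can be sketched without the GAE SDK. Here a plain deque stands in for a push queue and all names are illustrative:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Each "task" processes one slice of work within the time limit,
// then enqueues a successor task for the remaining slices.
public class ChainedTasks {
    public static int runAll(int totalSlices) {
        Deque<Runnable> queue = new ArrayDeque<>();
        int[] processed = {0};
        // Seed the chain with the first slice.
        queue.add(sliceTask(queue, processed, 0, totalSlices));
        while (!queue.isEmpty()) {
            queue.poll().run();   // the queue executes tasks in FIFO order
        }
        return processed[0];
    }

    private static Runnable sliceTask(Deque<Runnable> queue, int[] processed,
                                      int slice, int total) {
        return () -> {
            processed[0]++;                      // do this slice's work
            if (slice + 1 < total) {             // more work left?
                queue.add(sliceTask(queue, processed, slice + 1, total));
            }
        };
    }
}
```

On App Engine the Runnable would be a DeferredTask and the enqueue a Queue.add() call, but the control flow is the same: no single task runs longer than its slice.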
Presently, a task must finish executing and send an HTTP response value between 200–299 within 10 minutes of the original request. This deadline is separate from user requests, which have a 60-second deadline. Furthermore, tasks use token buckets to control the rate of task execution. Each time a task is invoked, a token is used. This leasing model (acquire a token) is typical of brokering or message-passing systems, and it allows users to control the rate of execution of these tasks (see below on configuring queues). Lastly, a very important feature of the Task Queue API is automatic retries of tasks. You can configure this with the RetryOptions parameter when creating the TaskOptions object.

Tasks within a Transaction
Tasks can be enqueued as part of a datastore transaction. Insertion (not execution) is guaranteed if the transaction commits successfully. The only caveats are that transactional tasks cannot have user-defined names and there is a maximum of 5 insertions into task queues in a single transaction.

Configuration
Queues are configured via queue.xml. If it is omitted, the default queue with its default configuration is used. Since Pull Queues are for more advanced needs, they must be configured explicitly (there is no default pull queue). An application's queue configuration applies to all versions of the app. You can override this behavior for push queues using the target parameter in queue.xml, in case you want different versions of your app (different sites) with different queue processing configurations. Here are some of the things you are allowed to configure (the documentation is more extensive):
• bucket-size: how fast the queue is processed when many tasks are in the queue and the rate is high (push only). (Warning: the development server ignores this value)
• max-concurrent-requests: maximum number of tasks that can be executed at any given time in the specified queue (push only)
• mode: whether it's push or pull
• name: queue name
• rate: how often tasks are processed on this queue (s=seconds, m=minutes, h=hours, d=days). If 0, the queue is considered paused. (Warning: the development server ignores this value)
• target: target a task to a specific backend or application version

<queue-entries>
  <!-- Set the number of max concurrent requests to 10 -->
  <queue>
    <name>optimize-queue</name>
    <rate>20/s</rate>
    <bucket-size>40</bucket-size>
    <max-concurrent-requests>10</max-concurrent-requests>
  </queue>
</queue-entries>

Sample Code
This is a very straightforward example. As I said before, task queues are basically a URL handler. In this servlet, the GET handles enqueueing a task. The task then POSTs to this same servlet and executes the doPost() method, carrying out the task. In this case, it's just a simple counter. Notice the counter is a volatile field. If you access this servlet with a GET request, it will enqueue another task, so you will see the counter being incremented by both tasks.

public class TaskQInfo extends HttpServlet {

    private static volatile int TASK_COUNTER = 0;

    // Executed by user menu click
    public void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Build a task using the TaskOptions builder pattern
        Queue queue = QueueFactory.getDefaultQueue();
        queue.add(withUrl("/taskq_demo").method(TaskOptions.Method.POST));
        resp.getWriter().println("Task has been added to the default queue...");
        resp.getWriter().println("Refresh this page to add another count task");
    }

    // Executed by the task queue
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // This is the body of the task
        for (int i = 0; i < 1000; i++) {
            log.info("Processing: " + req.getHeader("X-AppEngine-TaskName")
                    + "-" + TASK_COUNTER++);
            try {
                // Sleep for a second (if the rate is set to 1/s this will allow at
                // most 1 more task to be processed)
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                // ignore
            }
        }
    }
}

Task queues allow you to achieve some level of concurrency in your application by invoking background processes on demand. For very lengthy tasks, you might want to take a look at App Engine backends, which are basically special App Engine instances with no request time limit. Reference: Google AppEngine: Task Queues API from our JCG partner Luis Atencio at the Reflective Thought blog.
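The token-bucket model described above is easy to sketch in isolation: the bucket refills at the queue's configured rate up to bucket-size, and a task may only run if a token is available. This is a toy model of the concept, not App Engine's actual scheduler:

```java
// Toy token bucket: refills at `ratePerSecond`, capped at `bucketSize`.
// A task may execute only if a token can be taken from the bucket.
public class TokenBucket {
    private final double ratePerSecond;
    private final int bucketSize;
    private double tokens;

    public TokenBucket(double ratePerSecond, int bucketSize) {
        this.ratePerSecond = ratePerSecond;
        this.bucketSize = bucketSize;
        this.tokens = bucketSize;     // start with a full bucket
    }

    // Called with the elapsed seconds since the last refill.
    public void refill(double elapsedSeconds) {
        tokens = Math.min(bucketSize, tokens + elapsedSeconds * ratePerSecond);
    }

    // Returns true if a token was available and the task may run now.
    public boolean tryExecute() {
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;
    }
}
```

With rate=20/s and bucket-size=2, at most two tasks can fire back-to-back before the queue has to wait for tokens to trickle back in — which is exactly what the rate and bucket-size entries in queue.xml control.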

Why Developers Never Use State Machines

A few months ago I saw a great little blog post about state machines on the Shopify blog. The message was that state machines are great and developers should use them more – given my recent experiences with state machines at CrowdHired, I could certainly agree with that. But it got me thinking, how many times in my developer career have I actually used a state machine (either a separate library or even a hand-rolled abstraction)? The answer is zero times – which surprised the hell out of me since state machines really are very useful. So I decided to engage in a bit of introspection and figure out why we tend to manage our "state" and "status" fields in an ad-hoc fashion rather than doing what is clearly called for.

We Don't Need One Until We Do
The problem is that you almost never create an object fully formed with all the behaviour it is ever going to need; rather, you build it up over time. The same is true for the "states" that a state machine candidate object can be in. So, early on you don't feel like your objects' state machine behaviour is complex enough to warrant a "full-blown" state machine (YAGNI and all that jazz), but later on – when it IS complex enough – you feel like you've invested too much time/effort to replace it with something that has equivalent functionality. It's a bit of a catch-22. It's overkill, and by the time it's not, it's too late.

A State Machine Is A Fluffy Bunny (Not Particularly Threatening)
Those of us who went through computer science degrees remember state machines from our computing theory subjects, and the memories are often not fond ones. There are complex diagrams and math notation, determinism and non-determinism, Moore and Mealy, as well as acronyms galore (DFA, NFA, GNFA etc.). We come to believe that state machines are more complex than they actually are, and it is therefore nothing but pragmatism that makes us consider a "full-blown" state machine overkill.
But most state machines you're likely to need in your day-to-day development have nothing in common with their computing theory counterparts (except the … errr … theory). You have states, which are strings, and events, which are methods that cause transitions from one state to another – that's pretty much it (at least for the state_machine gem in Ruby). The point is, even if you have only two states, a state machine is not overkill; it might be easier than rolling an ad-hoc solution, as long as you have a good library to lean on.

Even A Good Tool Is Not A Good Tool
I would hazard a guess that there are decent state machine libraries for most languages (the aforementioned state_machine for Ruby is just one example). But even a fluffy bunny has a learning curve (I am stretching the metaphor well past breaking point here). That wouldn't be such an issue if you were solving a new problem, but all you're likely doing is replacing an existing solution, since we tend to turn to a state machine library after the fact (our ad-hoc solution is working right now). Just like with everything that has "potential future benefits", the immediate value is very hard to justify even to yourself (unless you've had experience with it before), and the slight learning curve only tips the scale further towards the "we can live without it" side. It doesn't matter how good a tool is if you never give it a chance. It is really difficult to appreciate (until you've gone through it) how much better life can be if you do give a good state machine library a chance. When we finally "bit the bullet" at CrowdHired and rejigged some of our core objects to use the state_machine gem, the difference was immediately apparent:
• The learning curve was minor. I did spend a few hours going through the source and documentation, but after that I had a good idea what could and couldn't be done (I might do an in-depth look at the state_machine gem at some point).
• The integration itself was almost painless, but moving all the code around to be in line with the new state machine was a big pain. In hindsight, had we done this when our objects only had a couple of states, it would have been a breeze.
• We're now able to easily introduce more states to give our users extra information as well as allow us to track things to a finer grain. Before, it was YAGNI cause it was a pain; now we find that we "are gonna need it" after all, cause it's so easy.
• Our return values from state transitions are now 100% consistent (true/false). Before, we were returning objects, arrays of objects, nil or true/false depending on who was writing it and when.
• We're now able to keep an audit trail of our state transitions simply by dropping in state_machine-audit_trail (see that Shopify post); before, it was too hard to hook it in everywhere, so we had nothing.
• We removed a bunch of code and improved our codebase – always worthy goals as far as I am concerned.
My gut feel is that most people who read that Shopify post agreed with it in spirit, but did nothing about it (that's kinda how it was with me). We seem to shy away from state machines due to a misunderstanding of their complexity and/or an inability to quantify the benefits. But there is less complexity than you would think and more benefits than you would expect, as long as you don't try to retrofit a state machine after the fact. So next time you have an object that even hints at having a "status" field, just chuck a state machine in there; you'll be glad you did. I guarantee it or your money back :). Reference: Why Developers Never Use State Machines from our JCG partner Alan Skorkin at the Skorks blog.
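The "states are strings, events are methods" model described above translates directly to other languages. Here is a minimal Java sketch of the same idea (a hypothetical Order example, not from the post), where every transition consistently returns true/false:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Minimal state machine: named states, events that trigger transitions,
// and transitions that uniformly return true (fired) or false (not allowed).
public class OrderStateMachine {
    private String state = "pending";
    // event name -> (from state -> to state)
    private final Map<String, Map<String, String>> transitions = new HashMap<>();

    public OrderStateMachine() {
        addTransition("confirm", "pending", "confirmed");
        addTransition("ship", "confirmed", "shipped");
        addTransition("cancel", "pending", "cancelled");
        addTransition("cancel", "confirmed", "cancelled");
    }

    private void addTransition(String event, String from, String to) {
        transitions.computeIfAbsent(event, e -> new HashMap<>()).put(from, to);
    }

    // Fire an event; returns false if the event is invalid in the current state.
    public boolean fire(String event) {
        String next = transitions
                .getOrDefault(event, Collections.emptyMap())
                .get(state);
        if (next == null) {
            return false;
        }
        state = next;
        return true;
    }

    public String state() {
        return state;
    }
}
```

Even with a handful of states, invalid transitions are rejected in one place and every caller gets the same true/false contract — the two benefits called out in the list above.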

JBox2D and JavaFX: Events and forces

In yesterday's samples you saw how you can create a simple world and display it with WorldView, and how to provide custom Renderers. Now we're going to add some user input. We'll create a control that behaves like a flipper in a pinball machine. To do that we'll create a Joint. In JBox2D, Joints are used to constrain bodies to the world or to each other. We'll create a static circular Body that will serve as the axis for our flipper, and bind a Box to it via a RevoluteJoint. To simplify the code, we'll first define a JointBuilder base class and a RevoluteJointBuilder:

public abstract class JointBuilder<K extends JointBuilder<K, T>, T extends JointDef> {

    protected World world;
    protected T jointDef;

    protected JointBuilder(World world, T jointDef) {
        this.world = world;
        this.jointDef = jointDef;
    }

    public K bodyA(Body a) {
        jointDef.bodyA = a;
        return (K) this;
    }

    public K bodyB(Body b) {
        jointDef.bodyB = b;
        return (K) this;
    }

    public K userData(Object userData) {
        jointDef.userData = userData;
        return (K) this;
    }

    public K type(JointType type) {
        jointDef.type = type;
        return (K) this;
    }

    public K collideConnected(boolean coco) {
        jointDef.collideConnected = coco;
        return (K) this;
    }

    public Joint build() {
        return world.createJoint(jointDef);
    }
}

And here's the RevoluteJointBuilder:

public class RevoluteJointBuilder extends JointBuilder<RevoluteJointBuilder, RevoluteJointDef> {

    public RevoluteJointBuilder(World world, Body a, Body b, Vec2 anchor) {
        super(world, new RevoluteJointDef());
        jointDef.initialize(a, b, anchor);
    }

    public RevoluteJointBuilder enableLimit(boolean enable) {
        jointDef.enableLimit = enable;
        return this;
    }

    public RevoluteJointBuilder enableMotor(boolean motor) {
        jointDef.enableMotor = motor;
        return this;
    }

    public RevoluteJointBuilder localAnchorA(Vec2 localAnchorA) {
        jointDef.localAnchorA = localAnchorA;
        return this;
    }

    public RevoluteJointBuilder localAnchorB(Vec2 localAnchorB) {
        jointDef.localAnchorB = localAnchorB;
        return this;
    }

    public RevoluteJointBuilder lowerAngle(float lowerAngle) {
        jointDef.lowerAngle = lowerAngle;
        return this;
    }

    public RevoluteJointBuilder maxMotorTorque(float maxMotorTorque) {
        jointDef.maxMotorTorque = maxMotorTorque;
        return this;
    }

    public RevoluteJointBuilder motorSpeed(float motorSpeed) {
        jointDef.motorSpeed = motorSpeed;
        return this;
    }

    public RevoluteJointBuilder referenceAngle(float referenceAngle) {
        jointDef.referenceAngle = referenceAngle;
        return this;
    }

    public RevoluteJointBuilder upperAngle(float upperAngle) {
        jointDef.upperAngle = upperAngle;
        return this;
    }
}

Now we can modify our HelloWorld example like this:

public class HelloWorld extends Application {

    public static void main(String[] args) {
        Application.launch(args);
    }

    @Override
    public void start(Stage primaryStage) {
        World world = new World(new Vec2(0, -2f), true);
        primaryStage.setTitle("Hello World!");
        NodeManager.addCircleProvider(new MyNodeProvider());

        new CircleBuilder(world).userData("ball").position(0.1f, 4)
                .type(BodyType.DYNAMIC).restitution(1).density(2)
                .radius(.15f).friction(.3f).build();
        final Body flipperBody = new BoxBuilder(world).position(0, 2)
                .type(BodyType.DYNAMIC).halfHeight(.02f).halfWidth(.2f)
                .density(2).friction(0).userData("flipper").build();
        Vec2 axis = flipperBody.getWorldCenter().add(new Vec2(.21f, 0));
        Body axisBody = new CircleBuilder(world).position(axis)
                .type(BodyType.STATIC).build();
        new RevoluteJointBuilder(world, flipperBody, axisBody, axis)
                .upperAngle(.6f).lowerAngle(-.6f).enableMotor(true)
                .enableLimit(true).maxMotorTorque(10f).motorSpeed(0f).build();

        Scene scene = new Scene(new WorldView(world, 200, 400, 50), 500, 600);

        // ground
        new BoxBuilder(world).position(0, -1f).halfHeight(1).halfWidth(5).build();
        primaryStage.setScene(scene);
        primaryStage.show();
    }
}

This will display our scene, and you'll see how the Joint prevents the dynamic Box from falling to the ground and how it constrains its movement. The next step is to allow the user to control it. For this we'll apply a force when the user presses a key.
Add this after the instantiation of the scene:

scene.setOnKeyPressed(new EventHandler<KeyEvent>() {
    @Override
    public void handle(KeyEvent ke) {
        if (ke.getCode() == KeyCode.LEFT) {
            flipperBody.applyTorque(-15f);
        }
    }
});

scene.setOnKeyReleased(new EventHandler<KeyEvent>() {
    @Override
    public void handle(KeyEvent ke) {
        if (ke.getCode() == KeyCode.LEFT) {
            flipperBody.applyTorque(15f);
        }
    }
});

That's it for now. In the next parts of this tutorial we'll do a bit more custom rendering and create some nice custom Nodes. Reference: Events and forces with JBox2D and JavaFX from our JCG partner Toni Epple at the Eppleton blog.

Using Groovy scriptlets inside a *.docx document

Introduction
One of my recent projects required automated generation of contracts for customers. A contract is a legal document of about 10 pages. One contract form can be applied to many customers, so the document is a template with customer info put in certain places. In this article I am going to show you how I solved this problem.

Requirements
This is the initial version of the formalized requirements: specified data must be placed in marked places of a complex DOC/DOCX file. The requirements were subsequently refined and expanded:
• Specified data must be placed in marked places of a complex DOCX file.
• Output markup must be scriptlet-like: ${}, <%%>, <%=%>.
• Output data may be not only strings but also hashes and objects. Field access must be an option.
• Output language must be brief and script-friendly: Groovy, JavaScript.
• A possibility to display a list of objects in a table, each cell displaying a field.

Background
It turned out that the existing products in the field (I'm talking about the Java world) do not fit the initial requirements. A brief overview of the products:

Jasper Reports
Jasper Reports uses *.jrxml files as templates. The template file in combination with input data (an SQL result set or a Map of params) is given to a processor which produces any of these formats: PDF, XML, HTML, CSV, XLS, RTF, TXT. Did not fit because:
• It's not WYSIWYG, even with the help of iReport – a visual tool for creating jrxml templates.
• The JasperReports API must be learned well to create and style a complex template.
• JR does not output a suitable format. PDF might be okay, but the ability to hand-edit the result is preferable.

Docx4j
Docx4j is a Java library for creating and manipulating Microsoft Open XML (Word docx, Powerpoint pptx, and Excel xlsx) files. Did not fit because:
• There is no case meeting my requirements in the docx4j documentation. A brief note about XMLUtils.unmarshallFromTemplate functionality is present, but it only does the simplest substitutions.
• Repeated output is done with prepared XML sources and XPath.

Apache POI
Apache POI is a Java tool for creating and manipulating parts of *.doc, *.ppt and *.xls documents. A major use of the Apache POI API is for text extraction applications such as web spiders, index builders, and content management systems. Did not fit because:
• It does not have any options that meet my requirements.

Word Content Control Toolkit
Word Content Control Toolkit is a stand-alone, lightweight tool that opens any Word Open XML document and lists all of the content controls inside it. After I developed my own solution with scriptlets, I heard of a solution based on a combination of this tool and XSLT transformations. It may work for somebody, but I did not bother digging, because it simply takes fewer steps to use my solution directly.

Solution of the problem
It was fun!
1. Document text content is stored as an Open XML file inside a zip archive. The traditional JDK 6 zipper does not support an explicit encoding parameter, so a broken docx file may be produced with it. I had to use AntBuilder, a Groovy wrapper for Ant, for zipping, which does have an encoding parameter.
2. Any text you enter in MS Word may be "arbitrarily" broken into parts wrapped with XML. So I had to solve the problem of cleaning up the extra markup generated in the template XML. I used regular expressions for this task; I did not try XSLT or anything else because I thought regular expressions would be faster.
3. I decided to use Groovy as the scripting language because of its simplicity, Java nature, and built-in template processor. I found an interesting issue related to the processor: it turned out that even in a small 10-sheet document one can easily run into a restriction on the length of a string between two scriptlets.
I had to substitute the text between each pair of scriptlets with a UUID string, run the Groovy template processor on the modified text, and finally switch those UUID placeholders back to the initial text fragments. After overcoming these difficulties, I tried out the project in real life. It turned out well! I created a project website and published it. Project address: snowindy.github.com/scriptlet4docx/ Code example HashMap<String, Object> params = new HashMap<String, Object>(); params.put("name","John"); params.put("surname","Smith");DocxTemplater docxTemplater = new DocxTemplater(new File("path_to_docx_template/template.docx")); docxTemplater.process(new File("path_to_result_docx/result.docx"), params);Scriptlet types explanation ${ data } Equivalent to out.print(data) <%= data %> Equivalent to out.print(data) <% any_code %> Evaluates the containing code. No output applied. May be used for split conditions: <% if (cond) { %> This text block will be printed in case of "cond == true" <% } else { %> This text block will be printed otherwise. <% } %>$[ @listVar.field ] This is a custom Scriptlet4docx scriptlet type designed to output a collection of objects to docx tables. It must be used inside a table cell. Say we have a list of person objects, each with two fields: 'name' and 'address'. We want to output them to a two-column table. Create a binding with key 'personList' referencing that collection. Create a two-column table inside the template docx document: two columns, one row. $[@personList.name] goes in the first column's cell; $[@personList.address] goes in the second. Voila, the whole collection will be printed to the table. Live template example You can check the usage of all the mentioned scriptlets in a demonstration template. Project future If I have actually developed a new approach to processing docx templates, it would be nice to popularize it. 
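The UUID workaround from step 3 can be sketched in plain Java as follows. This is only an illustration of the idea, not the actual scriptlet4docx code; the delimiter regex and the helper names are my own assumptions:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PlaceholderSwap {

    // Replace every text run between a closing "%>" and the next opening "<%"
    // with a random UUID key, remembering the original fragment.
    public static String protect(String template, Map<String, String> stash) {
        Matcher m = Pattern.compile("%>(.*?)<%", Pattern.DOTALL).matcher(template);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            String key = UUID.randomUUID().toString();
            stash.put(key, m.group(1));
            m.appendReplacement(sb, Matcher.quoteReplacement("%>" + key + "<%"));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    // After template processing, swap the UUID keys back to the original text.
    public static String restore(String processed, Map<String, String> stash) {
        for (Map.Entry<String, String> e : stash.entrySet()) {
            processed = processed.replace(e.getKey(), e.getValue());
        }
        return processed;
    }

    public static void main(String[] args) {
        Map<String, String> stash = new HashMap<String, String>();
        String tpl = "<% if (cond) { %> a very long literal chunk <% } %>";
        String safe = protect(tpl, stash); // long literals hidden behind short UUIDs
        System.out.println(restore(safe, stash).equals(tpl)); // round trip is lossless
    }
}
```

The real implementation also has to cope with scriptlet text that Word splits across XML runs, which is what the regex cleanup in step 2 addresses.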
Project TODOs: preprocessed template caching, scriptlet support in lists, streaming API. Reference: Using Groovy scriptlets inside a *.docx document from our W4G partner Eugene Polyhaev ...

Stupid Design Decisions (Part I)

Maybe you know the joke where a young software engineer goes into a bar, puts a green frog on top of the bar counter, and the frog says: "Kiss me, I'm an enchanted princess." The barkeeper is fascinated and recommends that the software engineer kiss the frog. But he just replies, "I have no time for a girlfriend, and a talking frog is cool!" We love cool things, difficult technical puzzles, and sometimes challenging bugs. But we should be careful with our motivation when we make technical decisions. I have seen, and will see, a lot of stupid decisions from engineers and/or managers. The following list of three decision anti-patterns is neither complete nor scientifically verifiable, but I think you may recognize some of them in your organization. 1) The Swarm-Foolishness-Design-Decision Often a group of humans is no more intelligent than each single team member. I know that this is a provocative statement, but it is a matter of fact that team dynamics can be a killer of intelligent decisions. If a group of people discusses a problem and everybody's input is heard, it is very likely that the team will accept the opinion of the person who is most sure that he or she knows the right solution. People who are convinced that their opinion is the correct solution are so dominant that even people who know better become uncertain and simply recede from their position. The only way to avoid this pitfall is to believe nothing. Try to find a proof, a sample, some measurement, program spike solutions, and/or find reliable case studies. Software engineering is a discipline that should be founded on knowledge, not on belief. 2) The Manager-Design-Decision Unfortunately, relatively often the people who are perfectly sure that they have the right solution are managers. A lot of organizations promote people who are self-confident and good at solving problems. Over time these managers lose their technical competence; they simply stop staying up to date. 
Some are still convinced that they know everything better than their team members. This can be a great problem in cases where the manager is fooled into believing that a manager is automatically a good designer. In the case that your manager permanently overrules the team and makes stupid design decisions, maybe it's a good idea to find a better manager. Or you could print this article, use a yellow text marker to highlight the right paragraph, and drop it on the desk of your boss. At least you will have some fun, and there is a little chance that he/she will improve. 3) The Big-Mac-Menu-Design-Decision A Big Mac is a product with a consistent standard of quality in space and time over the past 40 years: just small variations in nutritional values between countries [1] and almost no variation within a country. But this doesn't say anything about taste and/or healthiness. Large organizations tend to have a (big) enterprise architecture department which produces a kind of (big) standard architecture. Independent of the selected technology, a standard architecture usually has a lot of overhead (because it should fit every project in the company) and/or is outdated before it is published for the first time. It can be dangerous to use a standard architecture in inappropriate circumstances, and it is difficult to decide for alternative solutions. Please try to courageously raise your voice if the standard architecture doesn't fit your task. The enterprise architecture department should work for us and not vice versa. Reference: Stupid Design Decisions – 'I have no time for a girlfriend and a talking frog is cool!' from our JCG partner Markus Sprunck at the Software Engineering Candies blog....

Custom JSF validator for required fields

JSF components implementing the EditableValueHolder interface have two attributes, 'required' and 'requiredMessage': a flag indicating that the user is required to input / select a non-empty value, and a text for the validation message. We can use that, but it's not flexible enough: we can't parameterize the message directly in the view (Facelets or JSP), and we have to do something extra for proper message customization. What about a custom validator attached to any required field? We will write one. First we need to register such a validator in a tag library. <?xml version='1.0'?> <facelet-taglib version='2.0' ... > <namespace>http://ip.client/ip-jsftoolkit/validator</namespace> <tag> <tag-name>requiredFieldValidator</tag-name> <validator> <validator-id>ip.client.jsftoolkit.RequiredFieldValidator</validator-id> </validator> <attribute> <description>Resource bundle name for the required message</description> <name>bundle</name> <required>false</required> <type>java.lang.String</type> </attribute> <attribute> <description>Key of the required message in the resource bundle</description> <name>key</name> <required>false</required> <type>java.lang.String</type> </attribute> <attribute> <description>Label string for the required message</description> <name>label</name> <required>false</required> <type>java.lang.String</type> </attribute> </tag> </facelet-taglib>We defined three attributes in order to achieve high flexibility. A simple usage would be <h:outputLabel for='myInput' value='#{text['myinput']}'/> <h:inputText id='myInput' value='...'> <jtv:requiredFieldValidator label='#{text['myinput']}'/> </h:inputText>The validator class itself is not difficult. Depending on the 'key' parameter (the key of the required message) and the 'label' parameter (the text of the corresponding label), there are four cases for how the message gets acquired. /** * Validator for required fields. 
*/ @FacesValidator(value = RequiredFieldValidator.VALIDATOR_ID) public class RequiredFieldValidator implements Validator { /** validator id */ public static final String VALIDATOR_ID = "ip.client.jsftoolkit.RequiredFieldValidator";/** default bundle name */ public static final String DEFAULT_BUNDLE_NAME = "ip.client.jsftoolkit.validator.message";private String bundle; private String key; private String label;@Override public void validate(FacesContext facesContext, UIComponent component, Object value) throws ValidatorException { if (!UIInput.isEmpty(value)) { return; }String message; String bundleName;if (bundle == null) { bundleName = DEFAULT_BUNDLE_NAME; } else { bundleName = bundle; }if (key == null && label == null) { message = MessageUtils.getMessageText( MessageUtils.getResourceBundle(facesContext, bundleName), "jsftoolkit.validator.emptyMandatoryField.1"); } else if (key == null && label != null) { message = MessageUtils.getMessageText( MessageUtils.getResourceBundle(facesContext, bundleName), "jsftoolkit.validator.emptyMandatoryField.2", label); } else if (key != null && label == null) { message = MessageUtils.getMessageText( MessageUtils.getResourceBundle(facesContext, bundleName), key); } else { message = MessageUtils.getMessageText( MessageUtils.getResourceBundle(facesContext, bundleName), key, label); }throw new ValidatorException(new FacesMessage(FacesMessage.SEVERITY_WARN, message, StringUtils.EMPTY)); // getter / setter ... } }MessageUtils is a utility class to get the ResourceBundle and message text. We also need two texts in the resource bundle (property file) jsftoolkit.validator.emptyMandatoryField.1=Some required field is not filled in. 
jsftoolkit.validator.emptyMandatoryField.2=The required field '{0}' is not filled in. And the following context parameter in web.xml: <context-param> <param-name>javax.faces.VALIDATE_EMPTY_FIELDS</param-name> <param-value>true</param-value> </context-param>This solution is not ideal because we need to define the label text (like #{text['myinput']}) twice and to attach the validator to each field to be validated. A better, generic validator for multiple fields will be presented in the next post. Stay tuned! Reference: Custom JSF validator for required fields from our JCG partner Oleg Varaksin at the Thoughts on software development blog....
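Stripped of the JSF plumbing, the four-way message lookup in the validator reduces to the following stand-alone sketch. java.text.MessageFormat stands in for the resource-bundle lookup, and the class and method names are made up for illustration:

```java
import java.text.MessageFormat;

public class RequiredMessageResolver {

    static final String DEFAULT_PLAIN = "Some required field is not filled in.";
    static final String DEFAULT_LABELED = "The required field ''{0}'' is not filled in.";

    // Mirrors the validator's four cases: an explicit key wins over the
    // defaults, and a label, when present, is substituted into the pattern.
    public static String resolve(String keyedPattern, String label) {
        if (keyedPattern == null && label == null) return DEFAULT_PLAIN;
        if (keyedPattern == null) return MessageFormat.format(DEFAULT_LABELED, label);
        if (label == null) return keyedPattern;
        return MessageFormat.format(keyedPattern, label);
    }

    public static void main(String[] args) {
        // prints: The required field 'First name' is not filled in.
        System.out.println(resolve(null, "First name"));
    }
}
```

Note the doubled single quotes in the pattern: MessageFormat treats a single quote as an escape character, so a literal ' must be written as ''.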

Akka STM – Playing PingPong with STM Refs and Agents

PingPong is a classic example where two players (or threads) access a shared resource, the PingPong table, and pass the ball (a state variable) between each other. With any shared resource, unless we synchronize access, the threads can run into a potential deadlock situation. The PingPong algorithm is very simple: if my turn { update whose turn is next; ping/pong (log the hit); notify the other threads } else { wait for notification }. Let's take an example and see how this works! Here is our Player class, which implements Runnable and takes in a reference to the shared resource and the opponent's name. public class Player implements Runnable { PingPong myTable; // table where they play String myOpponent; public Player(String opponent, PingPong table) { myTable = table; myOpponent = opponent; } public void run() { while (myTable.hit(myOpponent)) ; } }Second, we see the PingPong table class, which has a synchronized method hit() where a check is made whether it is my turn or not. If it is my turn, log the ping and update the shared variable with the opponent's name. public class PingPong { // state variable identifying whose turn it is private String whoseTurn = null; public synchronized boolean hit(String opponent) { String x = Thread.currentThread().getName(); if ("DONE".equals(opponent)) { // main signals the end of the game whoseTurn = "DONE"; notifyAll(); return false; } if (whoseTurn == null) { // the first caller claims the serve whoseTurn = x; return true; } if (x.compareTo(whoseTurn) == 0) { System.out.println("PING! (" + x + ")"); whoseTurn = opponent; notifyAll(); } else { try { wait(2500); } catch (InterruptedException e) { } } return !"DONE".equals(whoseTurn); // players quit once the game is done } }Next, we start the game and get the players started! public class Game { public static void main(String args[]) { PingPong table = new PingPong(); Thread alice = new Thread(new Player("bob", table)); Thread bob = new Thread(new Player("alice", table)); alice.setName("alice"); bob.setName("bob"); alice.start(); // alice starts playing bob.start(); // bob starts playing try { Thread.sleep(5000); // wait 5 seconds } catch (InterruptedException e) { } table.hit("DONE"); // cause the players to quit their threads 
try { Thread.sleep(100); } catch (InterruptedException e) { } } } }That's all; we have our PingPong game running. In this case, we saw how the synchronized method hit() allows only one thread at a time to access the shared resource, whoseTurn. Akka STM provides two constructs: Refs and Agents. Refs (transactional references) provide coordinated synchronous access to multiple identities. Agents provide uncoordinated asynchronous access to a single identity. Refs In our case, since the shared state variable is a single identity, usage of Refs is overkill, but we will still go ahead and see their usage. public class PingPong { Ref.View<String> whoseTurn; // updates to Ref.View are synchronous public PingPong(Ref.View<String> player) { whoseTurn = player; } public boolean hit(final String opponent) { final String x = Thread.currentThread().getName(); if (x.compareTo(whoseTurn.get()) == 0) { System.out.println("PING! (" + x + ")"); whoseTurn.set(opponent); } else { try { Thread.sleep(2500); } catch (Exception e) { } // no monitor is held here, so sleep instead of wait } return true; // keep playing } }The key points here are the following: the synchronized keyword is missing; the state variable is defined as a Ref (Ref.View<String> whoseTurn; // updates to Ref.View are synchronous); and calls that update the Ref are coordinated and synchronous (whoseTurn.set(opponent);). So, when we use the Ref to hold the state, access to the Ref is automatically synchronized in a transaction. Agents Since agents provide uncoordinated asynchronous access, using agents for state manipulation means that we need to wait till all the updates have been applied to the agent. Agents provide non-blocking access for gets. 
public class PingPong { Agent<String> whoseTurn; public PingPong(Agent<String> player) { whoseTurn = player; } public boolean hit(final String opponent) { final String x = Thread.currentThread().getName(); // wait till all the messages are processed to get the correct value, as updates to Agents are async String result = whoseTurn.await(new Timeout(5, SECONDS)); if (x.compareTo(result) == 0) { System.out.println("PING! (" + x + ")"); whoseTurn.send(opponent); } else { try { Thread.sleep(2500); } catch (Exception e) { } // no monitor is held here, so sleep instead of wait } return true; // keep playing } }The key points here are the following: the synchronized keyword is missing; the state variable is defined as an Agent (Agent<String> whoseTurn; // updates to an Agent are asynchronous); we wait for updates to the agent, as updates to an agent are async (String result = whoseTurn.await(new Timeout(5, SECONDS));); and calls that update the Agent are uncoordinated and asynchronous (whoseTurn.send(opponent);). All the code referred to in these examples is available at https://github.com/write2munish/Akka-Essentials/tree/master/AkkaSTMExample/src/main/java/org/akka/essentials/stm/pingpong with Example 1 for normal thread-based synchronization, Example 2 for the usage of Refs for synchronization, and Example 3 for the usage of Agents for synchronization. Reference: Playing PingPong with STM – Refs and Agents from our JCG partner Munish K Gupta at the Akka Essentials blog....
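Outside Akka, the agent semantics the article relies on (queued asynchronous updates, non-blocking reads, an await that flushes pending updates) can be modeled with plain java.util.concurrent. This toy class is only meant to illustrate those semantics; it is not Akka's Agent API:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.UnaryOperator;

// Toy agent: updates are queued on a single-thread executor (uncoordinated
// and asynchronous), reads never block, await() drains the update queue.
public class ToyAgent<T> {

    private final AtomicReference<T> state;
    private final ExecutorService mailbox = Executors.newSingleThreadExecutor();

    public ToyAgent(T initial) { state = new AtomicReference<T>(initial); }

    // asynchronous update, in the spirit of whoseTurn.send(opponent)
    public void send(final UnaryOperator<T> update) {
        mailbox.submit(() -> state.updateAndGet(update));
    }

    // non-blocking read of the current (possibly stale) value
    public T get() { return state.get(); }

    // wait until every previously queued update has been applied,
    // in the spirit of whoseTurn.await(timeout)
    public T await() {
        try {
            mailbox.submit(() -> { }).get(); // marker task drains the FIFO queue
        } catch (InterruptedException | ExecutionException e) {
            throw new IllegalStateException(e);
        }
        return state.get();
    }

    public void shutdown() { mailbox.shutdown(); }

    public static void main(String[] args) {
        ToyAgent<String> whoseTurn = new ToyAgent<String>("alice");
        whoseTurn.send(turn -> "bob");         // returns immediately
        System.out.println(whoseTurn.await()); // prints: bob
        whoseTurn.shutdown();
    }
}
```

The single-thread executor is what makes updates "uncoordinated but ordered": sends never block the caller, yet they are applied one at a time in submission order.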

REST CXF for Spring JPA2 backend

In this demo, we will generate a REST/CXF application with a Spring/JPA2 backend. This demo presents the REST-CXF track of minuteproject. The model from demo 2 remains the same, and the enrichment stays the same, but the tracks change. What is added are two layers: a DAO layer with Spring integration on top of JPA2, and a REST-CXF layer with JAX-RS annotations. The JPA2 entities are annotated with JAXB annotations. All this is done to provide a CRUD interface on top of the model entities. Configuration Here is the configuration TRANXY-JPA2-Spring-REST-CXF.xml <!DOCTYPE root> <generator-config xmlns="http://minuteproject.sf.net/xsd/mp-config" xmlns:xs="http://www.w3.org/2001/XMLSchema-instance" xs:noNamespaceSchemaLocation="../config/mp-config.xsd"> <configuration> <conventions> <target-convention type="enable-updatable-code-feature" /> </conventions> <model name="tranxy" version="1.0" package-root="net.sf.mp.demo"> <data-model> <driver name="mysql" version="5.1.16" groupId="mysql" artifactId="mysql-connector-java"></driver> <dataSource> <driverClassName>org.gjt.mm.mysql.Driver</driverClassName> <url>jdbc:mysql://127.0.0.1:3306/tranxy</url> <username>root</username> <password>mysql</password> </dataSource> <primaryKeyPolicy oneGlobal="false" > <primaryKeyPolicyPattern name="autoincrementPattern"></primaryKeyPolicyPattern> </primaryKeyPolicy> </data-model> <business-model> <generation-condition> <condition type="exclude" startsWith="QUARTZ"></condition> </generation-condition> <business-package default="tranxy"> <condition type="package" startsWith="trans" result="translation"></condition> </business-package> <enrichment> <conventions> <!-- manipulate the structure and entities BEFORE manipulating the entities --> <column-naming-convention type="apply-strip-column-name-suffix" pattern-to-strip="ID" /> <reference-naming-convention type="apply-referenced-alias-when-no-ambiguity" is-to-plurialize="true" /> </conventions> <entity name="language_x_translator"> <field name="language_id" 
linkReferenceAlias="translating_language" /> <field name="user_id" linkReferenceAlias="translator" /> </entity> <entity name="LANGUAGE_X_SPEAKER"> <field name="LANGUAGE_ID" linkToTargetEntity="LANGUAGE" linkToTargetField="IDLANGUAGE" linkReferenceAlias="spoken_language" /> <field name="user_id" linkReferenceAlias="speaker" /> </entity> <entity name="APPLICATION"> <field name="TYPE"> <property tag="checkconstraint" alias="application_type"> <property name="OPENSOURCE"/> <property name="COPYRIGHT" /> </property> </field> </entity> </enrichment> </business-model> </model> <targets> <target refname="REST-CXF-BSLA" name="default" fileName="mp-template-config-REST-CXF-Spring.xml" outputdir-root="../../DEV/latvianjug/tranxy/rest" templatedir-root="../../template/framework/cxf"> </target><target refname="BackendOnBsla" name="default" fileName="mp-template-config-JPA2-bsla.xml" outputdir-root="../../DEV/latvianjug/tranxy/bsla" templatedir-root="../../template/framework/bsla"> <property name="add-cache-implementation" value="ehcache"></property> </target> <target refname="JPA2" fileName="mp-template-config-JPA2.xml" outputdir-root="../../DEV/latvianjug/tranxy/jpa" templatedir-root="../../template/framework/jpa"> <property name="add-querydsl" value="2.1.2"></property> <property name="add-jpa2-implementation" value="hibernate"></property> <property name="add-cache-implementation" value="ehcache"></property> <property name="add-domain-specific-method" value="true"></property> <property name="add-xmlbinding" value="true"></property> <property name="add-xml-format" value="lowercase-hyphen"></property> </target> <target refname="MavenMaster" name="maven" fileName="mp-template-config-maven.xml" outputdir-root="../../DEV/latvianjug/tranxy" templatedir-root="../../template/framework/maven"> </target><target refname="CACHE-LIB" fileName="mp-template-config-CACHE-LIB.xml" templatedir-root="../../template/framework/cache"> </target> <target refname="LIB" 
fileName="mp-template-config-bsla-LIB-features.xml" templatedir-root="../../template/framework/bsla"> </target><target refname="REST-LIB" fileName="mp-template-config-REST-LIB.xml" templatedir-root="../../template/framework/rest"> </target> <target refname="SPRING-LIB" fileName="mp-template-config-SPRING-LIB.xml" templatedir-root="../../template/framework/spring"> </target></targets> </configuration> </generator-config>Generation Set TRANXY-JPA2-Spring-REST-CXF.xml in /mywork/config. Run >model-generation.cmd TRANXY-JPA2-Spring-REST-CXF.xml. The output goes in /dev/latvianjug/tranxy. Resulting artefacts A Maven project structure with 3 modules: a JPA2 layer, a Spring DAO layer, and a CXF layer. The JPA2 layer has been visited in Demo 1 and Demo 2. Spring DAO layer It consists of transactional services, one for each entity, and a CRUD DAO layer on top of JPA2. This layer is called BSLA (Basic Spring Layer Architecture). Two interfaces and their implementations are generated for each entity. Example for the Translation entity DAO Interfaces /** * Copyright (c) minuteproject, minuteproject@gmail.com * All rights reserved. * * Licensed under the Apache License, Version 2.0 (the "License") * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* * More information on minuteproject: * twitter @minuteproject * wiki http://minuteproject.wikispaces.com * blog http://minuteproject.blogspot.net * */ /** * template reference : * - name : BslaDaoInterfaceUML * - file name : BslaDaoInterfaceUML.vm */ package net.sf.mp.demo.tranxy.dao.face.translation;import net.sf.mp.demo.tranxy.domain.translation.Translation; import java.util.List; import net.sf.minuteProject.architecture.bsla.bean.criteria.PaginationCriteria; import net.sf.minuteProject.architecture.bsla.dao.face.DataAccessObject;/** * * <p>Title: TranslationDao</p> * * <p>Description: Interface of a Data access object dealing with TranslationDao * persistence. It offers a set of methods which allow for saving, * deleting and searching translation objects</p> * */ public interface TranslationDao extends DataAccessObject {/** * Inserts a Translation entity * @param Translation translation */ public void insertTranslation(Translation translation) ; /** * Inserts a list of Translation entity * @param List<Translation> translations */ public void insertTranslations(List<Translation> translations) ; /** * Updates a Translation entity * @param Translation translation */ public Translation updateTranslation(Translation translation) ;/** * Updates a Translation entity with only the attributes set into Translation. * The primary keys are to be set for this method to operate. * This is a performance-friendly feature, which removes the ubiquitous full load and full update when an * update is to be done. * Remark: The primary keys cannot be updated by this method, nor can the attributes that must be set to null. 
* @param Translation translation */ public int updateNotNullOnlyTranslation(Translation translation) ; public int updateNotNullOnlyPrototypeTranslation(Translation translation, Translation prototypeCriteria); /** * Saves a Translation entity * @param Translation translation */ public void saveTranslation(Translation translation); /** * Deletes a Translation entity * @param Translation translation */ public void deleteTranslation(Translation translation) ; /** * Loads the Translation entity which is related to an instance of * Translation * @param Long id * @return Translation The Translation entity public Translation loadTranslation(Long id); */ /** * Loads the Translation entity which is related to an instance of * Translation * @param java.lang.Long Id * @return Translation The Translation entity */ public Translation loadTranslation(java.lang.Long id);/** * Loads a list of Translation entity * @param List<java.lang.Long> ids * @return List<Translation> The Translation entity */ public List<Translation> loadTranslationListByTranslation (List<Translation> translations); /** * Loads a list of Translation entity * @param List<java.lang.Long> ids * @return List<Translation> The Translation entity */ public List<Translation> loadTranslationListById(List<java.lang.Long> ids); /** * Loads the Translation entity which is related to an instance of * Translation and its dependent one to many objects * @param Long id * @return Translation The Translation entity */ public Translation loadFullFirstLevelTranslation(java.lang.Long id); /** * Loads the Translation entity which is related to an instance of * Translation * @param Translation translation * @return Translation The Translation entity */ public Translation loadFullFirstLevelTranslation(Translation translation); /** * Loads the Translation entity which is related to an instance of * Translation and its dependent objects one to many * @param Long id * @return Translation The Translation entity */ public Translation 
loadFullTranslation(Long id) ;/** * Searches a list of Translation entity based on a Translation containing Translation matching criteria * @param Translation translation * @return List<Translation> */ public List<Translation> searchPrototypeTranslation(Translation translation) ; /** * Searches a list of Translation entity based on a list of Translation containing Translation matching criteria * @param List<Translation> translations * @return List<Translation> */ public List<Translation> searchPrototypeTranslation(List<Translation> translations) ; /** * Searches a list of Translation entity * @param Translation translation * @return List */ public List<Translation> searchPrototypeTranslation(Translation translationPositive, Translation translationNegative) ; /** * Load a paginated list of Translation entity dependent of pagination criteria * @param PaginationCriteria paginationCriteria * @return List */ public List<Translation> loadPaginatedTranslation (Translation translation, PaginationCriteria paginationCriteria) ; }/** * Copyright (c) minuteproject, minuteproject@gmail.com * All rights reserved. * * Licensed under the Apache License, Version 2.0 (the "License") * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
* * More information on minuteproject: * twitter @minuteproject * wiki http://minuteproject.wikispaces.com * blog http://minuteproject.blogspot.net * */ /** * template reference : * - name : BslaDaoInterfaceExtendedUML * - file name : BslaDaoInterfaceKFUML.vm */ package net.sf.mp.demo.tranxy.dao.face.translation;import net.sf.mp.demo.tranxy.domain.translation.Translation; import java.util.List; import net.sf.minuteProject.architecture.filter.data.Criteria; import net.sf.minuteProject.architecture.bsla.dao.face.DataAccessObject;/** * * <p>Title: TranslationExtDao</p> * * <p>Description: Interface of a Data access object dealing with TranslationExtDao * persistence. It offers extended DAO functionalities</p> * */ public interface TranslationExtDao extends DataAccessObject { /** * Inserts a Translation entity with cascade of its children * @param Translation translation */ public void insertTranslationWithCascade(Translation translation) ; /** * Inserts a list of Translation entity with cascade of its children * @param List<Translation> translations */ public void insertTranslationsWithCascade(List<Translation> translations) ; /** * lookup Translation entity Translation, criteria and max result number */ public List<Translation> lookupTranslation(Translation translation, Criteria criteria, Integer numberOfResult); public Integer updateNotNullOnlyTranslation (Translation translation, Criteria criteria);/** * Affect the first translation retrieved corresponding to the translation criteria. * Blank criteria are mapped to null. * If no criteria is found, null is returned. */ public Translation affectTranslation (Translation translation); public Translation affectTranslationUseCache (Translation translation); /** * Assign the first translation retrieved corresponding to the translation criteria. * Blank criteria are mapped to null. * If no criteria is found, null is returned. * If there is no translation corresponding in the database. 
Then translation is inserted and returned with its primary key(s). */ public Translation assignTranslation (Translation translation);/** * Assign the first translation retrieved corresponding to the mask criteria. * Blank criteria are mapped to null. * If no criteria is found, null is returned. * If there is no translation corresponding in the database. * Then translation is inserted and returned with its primary key(s). * Mask servers usually to set unique keys or the semantic reference */ public Translation assignTranslation (Translation translation, Translation mask); public Translation assignTranslationUseCache (Translation translation); /** * return the first Translation entity found */ public Translation getFirstTranslation (Translation translation); /** * checks if the Translation entity exists */ public boolean existsTranslation (Translation translation); public boolean existsTranslationWhereConditionsAre (Translation translation);/** * partial load enables to specify the fields you want to load explicitly */ public List<Translation> partialLoadTranslation(Translation translation, Translation positiveTranslation, Translation negativeTranslation);/** * partial load with parent entities * variation (list, first, distinct decorator) * variation2 (with cache) */ public List<Translation> partialLoadWithParentTranslation(Translation translation, Translation positiveTranslation, Translation negativeTranslation);public List<Translation> partialLoadWithParentTranslationUseCache(Translation translation, Translation positiveTranslation, Translation negativeTranslation, Boolean useCache);public List<Translation> partialLoadWithParentTranslationUseCacheOnResult(Translation translation, Translation positiveTranslation, Translation negativeTranslation, Boolean useCache);/** * variation first */ public Translation partialLoadWithParentFirstTranslation(Translation translationWhat, Translation positiveTranslation, Translation negativeTranslation); public Translation 
partialLoadWithParentFirstTranslationUseCache(Translation translationWhat, Translation positiveTranslation, Translation negativeTranslation, Boolean useCache);public Translation partialLoadWithParentFirstTranslationUseCacheOnResult(Translation translationWhat, Translation positiveTranslation, Translation negativeTranslation, Boolean useCache);/** * variation distinct */ public List<Translation> getDistinctTranslation(Translation translationWhat, Translation positiveTranslation, Translation negativeTranslation);// public List partialLoadWithParentForBean(Object bean, Translation translation, Translation positiveTranslation, Translation negativeTranslation);/** * search on prototype with cache */ public List<Translation> searchPrototypeWithCacheTranslation (Translation translation); /** * Searches a list of distinct Translation entity based on a Translation mask and a list of Translation containing Translation matching criteria * @param Translation translation * @param List<Translation> translations * @return List<Translation> */ public List<Translation> searchDistinctPrototypeTranslation(Translation translationMask, List<Translation> translations) ;public List<Translation> countDistinct (Translation whatMask, Translation whereEqCriteria); public Long count (Translation whereEqCriteria); public List<Translation> loadGraph(Translation graphMaskWhat, List<Translation> whereMask); public List<Translation> loadGraphFromParentKey (Translation graphMaskWhat, List<Translation> parents); /** * generic to move after in superclass */ public List<Object[]> getSQLQueryResult(String query); }DAO implementations TranslationJPAImpl and TranslationJPAExtImpl (code not copied). In the future Generic DAO will be used for cross-entity redundant aspects. 
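The planned generic DAO for those cross-entity redundant aspects could look roughly like this. It is a hypothetical sketch with made-up names, backed by an in-memory map instead of a JPA EntityManager:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Hypothetical generic DAO collapsing the per-entity CRUD boilerplate
// (insertTranslation, loadTranslation, deleteTranslation, ...) into a
// single parameterized type.
public class GenericDao<T, ID> {

    private final Map<ID, T> store = new HashMap<ID, T>();
    private final Function<T, ID> idOf; // extracts the entity's primary key

    public GenericDao(Function<T, ID> idOf) { this.idOf = idOf; }

    public void insert(T entity) { store.put(idOf.apply(entity), entity); }
    public T load(ID id) { return store.get(id); }
    public void delete(T entity) { store.remove(idOf.apply(entity)); }
    public List<T> loadAll() { return new ArrayList<T>(store.values()); }
}
```

A TranslationDao would then shrink to something like class TranslationDao extends GenericDao<Translation, Long>, keeping only the Translation-specific search methods.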
Adaptation to Spring 3.x will be performed (i.e. no more JPASupport extension, but EntityManager injection). Meanwhile, the code above works fine with Spring 2.5+. Spring configurations spring-config-Tranxy-BE-main.xml <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE beans PUBLIC "-//SPRING//DTD BEAN//EN" "http://www.springframework.org/dtd/spring-beans.dtd"><beans><!-- Dao JPA --> <import resource="classpath:net/sf/mp/demo/tranxy/factory/spring/spring-config-JPA-Tranxy-dao.xml"/><!--MP-MANAGED-UPDATABLE-BEGINNING-DISABLE @JPAtranxyFactory-tranxy@--> <!-- hibernate config to put in an appart config file--> <bean id="JPAtranxyFactory" autowire="byName" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <!-- all connection information are retrieve from the persistence file--> <!-- <property name="dataSource" ref="..."/> <property name="persistenceUnitName" value="..."/> --> <property name="persistenceXmlLocation" value="classpath:META-INF/persistence.xml" /> </bean> <!--MP-MANAGED-UPDATABLE-ENDING--> <!-- Database --> <import resource="classpath:net/sf/mp/demo/tranxy/factory/spring/spring-config-Tranxy-database.xml"/></beans>spring-config-Tranxy-database.xml <?xml version="1.0" encoding="UTF-8"?><beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:jndi="http://www.springframework.org/schema/jee" xmlns:tx="http://www.springframework.org/schema/tx" xmlns:aop="http://www.springframework.org/schema/aop" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee-3.0.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.0.xsd http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.0.xsd"><bean id="placeHolderConfig" 
class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
  <property name="location">
    <value>classpath:net/sf/mp/demo/tranxy/factory/spring/spring-config-Tranxy.properties</value>
  </property>
</bean>

<bean id="tranxyTransactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
  <property name="entityManagerFactory" ref="JPAtranxyFactory"/>
</bean>

<!-- to get the entity manager -->
<tx:annotation-driven transaction-manager="tranxyTransactionManager"/>
</beans>

spring-config-Tranxy.properties

jdbc.tranxy.driverClassName=org.gjt.mm.mysql.Driver
jdbc.tranxy.url=jdbc:mysql://127.0.0.1:3306/tranxy
jdbc.tranxy.username=root
jdbc.tranxy.password=mysql
jdbc.tranxy.jndi=jdbc/tranxy
hibernate.dialect=org.hibernate.dialect.MySQLDialect

spring-config-JPA-Tranxy-dao.xml

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE beans PUBLIC "-//SPRING//DTD BEAN//EN" "http://www.springframework.org/dtd/spring-beans.dtd">
<beans>
  <!-- Import Dao definitions for business components -->
  <!-- tranxy -->
  <import resource="classpath:net/sf/mp/demo/tranxy/factory/spring/tranxy/dao-JPA-Tranxy.xml"/>
  <!-- translation -->
  <import resource="classpath:net/sf/mp/demo/tranxy/factory/spring/translation/dao-JPA-Translation.xml"/>
  <!-- Import Ext Dao definitions for business components -->
  <!-- tranxy extended dao -->
  <import resource="classpath:net/sf/mp/demo/tranxy/factory/spring/tranxy/dao-ext-JPA-Tranxy.xml"/>
  <!-- translation extended dao -->
  <import resource="classpath:net/sf/mp/demo/tranxy/factory/spring/translation/dao-ext-JPA-Translation.xml"/>
</beans>

dao-JPA-Translation.xml

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE beans PUBLIC "-//SPRING//DTD BEAN//EN" "http://www.springframework.org/dtd/spring-beans.dtd">
<beans>
  <bean id="translationDao" class="net.sf.mp.demo.tranxy.dao.impl.jpa.translation.TranslationJPAImpl" singleton="false">
    <property name="entityManagerFactory"><ref bean="JPAtranxyFactory"/></property>
  </bean>
  <bean id="translationKeyDao"
class="net.sf.mp.demo.tranxy.dao.impl.jpa.translation.TranslationKeyJPAImpl" singleton="false">
    <property name="entityManagerFactory"><ref bean="JPAtranxyFactory"/></property>
  </bean>
  <bean id="translationRequestDao" class="net.sf.mp.demo.tranxy.dao.impl.jpa.translation.TranslationRequestJPAImpl" singleton="false">
    <property name="entityManagerFactory"><ref bean="JPAtranxyFactory"/></property>
  </bean>
</beans>

It is the same for the dao-ext-JPA-Translation.xml, dao-ext-JPA-Tranxy.xml and dao-JPA-Tranxy.xml files.

But wait a minute… How can I unit test? You need two other artifacts before writing your own tests.

One is persistence.xml… Again? Yes, with an embedded connection pool, because the one shipped with the build of your JPA2 layer may refer to a JNDI DataSource (in case the property environment is set to remote). Since it is under /src/test/resources/META-INF, it overrides the one in the JPA2 package.

Two is an adapter that extends AbstractTransactionalJUnit4SpringContextTests; it is generated in /src/test/java:

package net.sf.mp.demo.tranxy.dao.face;

import javax.sql.DataSource;

import org.apache.commons.lang.StringUtils;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.jdbc.core.simple.SimpleJdbcTemplate;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.AbstractTransactionalJUnit4SpringContextTests;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.transaction.TransactionConfiguration;
import org.springframework.transaction.annotation.Transactional;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations={
    "classpath:net/sf/mp/demo/tranxy/factory/spring/spring-config-Tranxy-BE-main.xml"
})
@TransactionConfiguration(transactionManager = "tranxyTransactionManager")
@Transactional
public class
AdapterTranxyTestDao extends AbstractTransactionalJUnit4SpringContextTests {

  @Override
  @Autowired
  public void setDataSource(@Qualifier(value = "tranxyDataSource") DataSource dataSource) {
    this.simpleJdbcTemplate = new SimpleJdbcTemplate(dataSource);
  }
  ...

CXF layer

Each entity has a REST resource artifact with JAX-RS annotations to enable CRUD access. Example with Translation:

/**
 * Copyright (c) minuteproject, minuteproject@gmail.com
 * All rights reserved.
 *
 * Licensed under the Apache License, Version 2.0 (the "License")
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 *
 * More information on minuteproject:
 * twitter @minuteproject
 * wiki http://minuteproject.wikispaces.com
 * blog http://minuteproject.blogspot.net
 */
/**
 * template reference :
 * - name : CXFSpringEntityResource
 * - file name : CXFSpringEntityResource.vm
 */
package net.sf.mp.demo.tranxy.rest.translation;

import java.util.Date;
import java.util.List;
import java.util.ArrayList;
import java.io.*;
import java.sql.*;

import javax.servlet.http.*;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.FormParam;
import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.DELETE;
import javax.ws.rs.GET;
import javax.ws.rs.PUT;
import javax.ws.rs.Produces;
import
javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Request;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.UriInfo;
import javax.xml.bind.JAXBElement;

import net.sf.mp.demo.tranxy.dao.face.translation.TranslationDao;
import net.sf.mp.demo.tranxy.dao.face.translation.TranslationExtDao;
import net.sf.mp.demo.tranxy.domain.translation.Translation;

/**
 * <p>Title: TranslationResource</p>
 * <p>Description: remote interface for TranslationResource service</p>
 */
@Path ("/rest/xml/translations")
@Produces ({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
@Consumes ({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
@Service
@Transactional
public class TranslationResource {

  @Autowired
  @Qualifier("translationDao")
  TranslationDao translationDao;

  @Autowired
  @Qualifier("translationExtDao")
  TranslationExtDao translationExtDao;

  //MP-MANAGED-UPDATABLE-BEGINNING-DISABLE @FIND_ALL-translation@
  @GET
  @Produces ({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
  public List<Translation> findAll () {
    List<Translation> r = new ArrayList<Translation>();
    List<Translation> l = translationDao.searchPrototypeTranslation(new Translation());
    for (Translation translation : l) {
      r.add(translation.flat());
    }
    return r;
  }
  //MP-MANAGED-UPDATABLE-ENDING

  //MP-MANAGED-UPDATABLE-BEGINNING-DISABLE @FIND_BY_ID-translation@
  @GET
  @Path("{id}")
  @Produces ({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
  public Translation findById (@PathParam ("id") java.lang.Long id) {
    Translation _translation = new Translation ();
    _translation.setId(id);
    _translation = translationExtDao.getFirstTranslation(_translation);
    if (_translation!=null)
      return _translation.flat();
    return new Translation ();
  }
  //MP-MANAGED-UPDATABLE-ENDING

  @DELETE
  @Path("{id}")
  public void delete (@PathParam ("id") Long id) {
    Translation translation = new Translation ();
    translation.setId(id);
    translationDao.deleteTranslation(translation);
  }

  @POST
  @Produces
({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
  @Consumes(MediaType.APPLICATION_FORM_URLENCODED)
  public Translation create (
      @FormParam("id") Long id,
      @FormParam("translation") String translation,
      @FormParam("language") Integer language,
      @FormParam("key") Long key,
      @FormParam("isFinal") Short isFinal,
      @FormParam("dateFinalization") Date dateFinalization,
      @FormParam("translator") Long translator,
      @Context HttpServletResponse servletResponse ) throws IOException {
    Translation _translation = new Translation (id, translation, language, key, isFinal, dateFinalization, translator);
    return save(_translation);
  }

  @PUT
  @Consumes({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
  public Translation save(JAXBElement<Translation> jaxbTranslation) {
    Translation translation = jaxbTranslation.getValue();
    if (translation.getId()!=null)
      return translationDao.updateTranslation(translation);
    return save(translation);
  }

  public Translation save (Translation translation) {
    translationDao.saveTranslation(translation);
    return translation;
  }
}

And two files for the web application and Spring in /src/main/resources/webapp/WEB-INF.

web.xml

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">

  <display-name>tranxy CXF REST</display-name>
  <description>tranxy CXF REST access</description>

  <context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>/WEB-INF/application-context.xml</param-value>
  </context-param>

  <listener>
    <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
  </listener>

  <servlet>
    <servlet-name>CXFServlet</servlet-name>
    <servlet-class>org.apache.cxf.transport.servlet.CXFServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
  </servlet>

  <servlet-mapping>
    <servlet-name>CXFServlet</servlet-name>
    <url-pattern>/*</url-pattern>
  </servlet-mapping>
</web-app>

application-context.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:tx="http://www.springframework.org/schema/tx"
       xmlns:jaxrs="http://cxf.apache.org/jaxrs"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                           http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd
                           http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.0.xsd
                           http://cxf.apache.org/jaxrs http://cxf.apache.org/schemas/jaxrs.xsd">

  <import resource="classpath:META-INF/cxf/cxf.xml" />
  <import resource="classpath:META-INF/cxf/cxf-extension-jaxrs-binding.xml" />
  <import resource="classpath:META-INF/cxf/cxf-servlet.xml" />

  <context:component-scan base-package="net.sf.mp.demo.tranxy.rest"/>

  <import resource="classpath:net/sf/mp/demo/tranxy/factory/spring/spring-config-Tranxy-BE-main.xml"/>

  <jaxrs:server id="restContainer" address="/">
    <jaxrs:serviceBeans>
      <!-- tranxy -->
      <ref bean="applicationResource"/>
      <ref bean="languageResource"/>
      <ref bean="userResource"/>
      <!-- translation -->
      <ref bean="translationResource"/>
      <ref bean="translationKeyResource"/>
      <ref bean="translationRequestResource"/>
      <!-- statements -->
    </jaxrs:serviceBeans>
  </jaxrs:server>
</beans>

Package, deployment and test

Package

Before building the package, there is a dependency shipped with minuteproject to install: mp-bsla.x.y.jar, located in /target/mp-bsla/. Run the script maven-install.cmd/sh.

Build:

>mvn clean package

The result is tranxyRestCxfApp.war in /rest/target.

Deployment

Start Tomcat and drop tranxyRestCxfApp.war in /webapps. There is an embedded connection pool, so no configuration is needed on Tomcat.
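Once deployed, the create method of TranslationResource accepts a standard application/x-www-form-urlencoded POST body. As a small, framework-free sketch of building such a body on the client side — the parameter names follow the resource above, while the HTTP call itself is left out:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.LinkedHashMap;
import java.util.Map;

class FormBody {
    // Build an application/x-www-form-urlencoded body from name/value pairs,
    // suitable for POSTing to /rest/xml/translations.
    static String encode(Map<String, String> params) {
        try {
            StringBuilder sb = new StringBuilder();
            for (Map.Entry<String, String> e : params.entrySet()) {
                if (sb.length() > 0) sb.append('&');
                sb.append(URLEncoder.encode(e.getKey(), "UTF-8"))
                  .append('=')
                  .append(URLEncoder.encode(e.getValue(), "UTF-8"));
            }
            return sb.toString();
        } catch (UnsupportedEncodingException e) {
            throw new RuntimeException(e); // UTF-8 is always available
        }
    }
}
```

Sending this body with, for example, java.net.HttpURLConnection against http://localhost:8080/tranxyRestCxfApp/rest/xml/translations would exercise the create endpoint; the JAXB variant of save is exercised by PUTting an XML or JSON Translation payload instead.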
Test

USE `tranxy`;

DELETE FROM application_x_key;
DELETE FROM translation;
DELETE FROM language_x_translator;
DELETE FROM language_x_speaker;
DELETE FROM request_key;
DELETE FROM application;
DELETE FROM translation_key;
DELETE FROM user;
DELETE FROM language;

INSERT INTO application (idapplication, name, description, type) VALUES (-1,'Porphyry', 'OS application holding environment app', 'OPENSOURCE');
INSERT INTO application (idapplication, name, description, type) VALUES (-2,'Minuteproject', 'Minuteproject app', 'OPENSOURCE');

INSERT INTO user (iduser, first_name, last_name, email) VALUES (-1,'test', 'lastName', 'test@test.me');
INSERT INTO user (iduser, first_name, last_name, email) VALUES (-2,'test2', 'lastName2', 'test2@test.me');

INSERT INTO language (idlanguage, code, description, locale) VALUES (-1, 'FR', 'France', 'fr');
INSERT INTO language (idlanguage, code, description, locale) VALUES (-2, 'ES', 'Spanish', 'es');
INSERT INTO language (idlanguage, code, description, locale) VALUES (-3, 'EN', 'English', 'en');

INSERT INTO language_x_translator (language_id, user_id) VALUES (-1, -1);
INSERT INTO language_x_translator (language_id, user_id) VALUES (-2, -1);
INSERT INTO language_x_speaker (language_id, user_id) VALUES (-1, -1);
INSERT INTO language_x_speaker (language_id, user_id) VALUES (-2, -1);
INSERT INTO language_x_translator (language_id, user_id) VALUES (-1, -2);
INSERT INTO language_x_translator (language_id, user_id) VALUES (-2, -2);
INSERT INTO language_x_translator (language_id, user_id) VALUES (-3, -2);

INSERT INTO translation_key (id, key_name, description) VALUES (-1, 'msg.user.name', 'user name');
INSERT INTO translation (id, translation, language_id, key_id, is_final, date_finalization, translator_id) VALUES (-1, 'nom', -1, -1, 1, '2012-04-04', -1);
INSERT INTO translation (id, translation, language_id, key_id, is_final, date_finalization, translator_id) VALUES (-2, 'apellido', -1, -2, 1, CURDATE(), -1);

Now enter
http://localhost:8080/tranxyRestCxfApp/rest/xml/languages to get all the languages. This is the result.

Now enter http://localhost:8080/tranxyRestCxfApp/rest/xml/users/-1 to get the first user. This is the result.

Conclusion

This article showed how to quickly get a CRUD REST interface on top of your DB model. Of course, you may not need CRUD for all entities, and you may need more coarse-grained functions to manipulate your model. The next article will show how Statement Driven Development brings us closer to the use case.

Reference: RigaJUG – demo – REST CXF from our JCG partner Florian Adler at the minuteproject blog.
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact