

Java EE7 and Maven project for newbies – part 7

Resuming from the previous parts: Part #1, Part #2, Part #3, Part #4, Part #5, Part #6. In the previous post (part 6) we discovered how we can unit test our JPA2 domain model, using Arquillian and WildFly 8.1. In that post we made a simple configuration decision: we used the internal H2 database that is bundled with WildFly 8.1 and the already configured datasource (called ExampleDS). But what about a real DBMS? In this post we are going to extend the previous work a bit, use the same principles, and:

- test against a PostgreSQL instance running on our localhost
- use some of the really nice features the ShrinkWrap API of Arquillian offers.

Pre-requisites You need to install a PostgreSQL RDBMS locally; my example is based on a server running on localhost, and the database name is papodb. Adding some more dependencies Eventually we will need to add some more dependencies to our sample-parent (pom). Some of them are related to Arquillian, specifically the ShrinkWrap Resolvers features (more on this later). So we need to add the following to the parent pom.xml: <shrinkwrap.bom-version>2.1.1</shrinkwrap.bom-version> <!-- jdbc drivers --> <postgresql.version>9.1-901-1.jdbc4</postgresql.version> ...
<!-- shrinkwrap BOM--> <dependency> <groupId>org.jboss.shrinkwrap.resolver</groupId> <artifactId>shrinkwrap-resolver-bom</artifactId> <version>${shrinkwrap.bom-version}</version> <type>pom</type> <scope>import</scope> </dependency> <!-- shrinkwrap dependency chain--> <dependency> <groupId>org.jboss.shrinkwrap.resolver</groupId> <artifactId>shrinkwrap-resolver-depchain</artifactId> <version>${shrinkwrap.bom-version}</version> <type>pom</type> </dependency> <!-- arquillian itself--> <dependency> <groupId>org.jboss.arquillian</groupId> <artifactId>arquillian-bom</artifactId> <version>${arquillian-version}</version> <scope>import</scope> <type>pom</type> </dependency> <!-- the JDBC driver for postgresql --> <dependency> <groupId>postgresql</groupId> <artifactId>postgresql</artifactId> <version>${postgresql.version}</version> </dependency> A note on the above change: in order to avoid any potential conflicts between dependencies, make sure to define the ShrinkWrap BOM on top of the Arquillian BOM. Now, in the sample-services pom.xml, the project that hosts our simple tests, we need to reference some of these dependencies: <dependency> <groupId>org.jboss.shrinkwrap.resolver</groupId> <artifactId>shrinkwrap-resolver-depchain</artifactId> <scope>test</scope> <type>pom</type> </dependency> <dependency> <groupId>postgresql</groupId> <artifactId>postgresql</artifactId> </dependency> Restructuring our test code In the previous example our tests were simple; we only used a single test configuration. That resulted in a single test-persistence.xml file and no web.xml file, since we were packaging our test application as a jar. Now we will upgrade our testing archive to a war. War packaging in Java EE 7 has become a first-class citizen when it comes to bundling and deploying an enterprise application.
The main difference from the previous example is that we would like to keep both setups: testing against the internal H2 on WildFly, and the new setup testing against a real RDBMS server. So we need to maintain two sets of configuration files and, making use of the Maven profiles feature, package them accordingly depending on our mode. If you are new to Maven, make sure to read up on the concept of profiles. Adding separate configurations per profile So our test resources (watch out, these are under src/test/resources) are now as illustrated below. There are differences in both cases: the test-persistence.xml for H2 points to the ExampleDS datasource, whereas the PostgreSQL one points to a new datasource that we have defined in web.xml! Please have a look at the actual code, from the git link below. This is how we define a datasource in web.xml. Notes on the above:

- note the standard naming of the JNDI name, java:jboss/datasources/datasourceName
- the application server, once it reads the contents of the web.xml file, will automatically deploy and configure a new datasource.

This is our persistence.xml. Notes on the above:

- Make sure the two JNDI entries are the same, both in the datasource definition and in persistence.xml.
- Of course the Hibernate dialect used for PostgreSQL is different.
- The highlighted line is a special setting that is required for WildFly 8.1 in case you want to deploy the datasource, the JDBC driver and the code in one go. It hints the application server to initialize and configure the datasource first and then initialize the EntityManager. In case you have already deployed/configured the datasource, this setting is not needed.

Define the profiles in our pom In the sample-services pom.xml we add the following section. This is our profile definition.
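The web.xml datasource definition mentioned above appeared as an image in the original post. The following is only a sketch of what such a definition looks like, using the standard Java EE <data-source> element in web.xml: the JNDI name (java:jboss/datasources/testpostgre) and database name (papodb) come from the text, while the datasource class, host, port and credentials are assumptions for illustration:

```xml
<!-- Sketch of a web.xml datasource definition; values partly assumed -->
<data-source>
    <name>java:jboss/datasources/testpostgre</name>
    <class-name>org.postgresql.ds.PGSimpleDataSource</class-name>
    <server-name>localhost</server-name>
    <port-number>5432</port-number>
    <database-name>papodb</database-name>
    <user>postgres</user>
    <password>postgres</password>
</data-source>
```

Adapt the user, password and datasource class to your local PostgreSQL installation.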
<profiles>
  <profile>
    <id>h2</id>
    <build>
      <testResources>
        <testResource>
          <directory>src/test/resources-h2</directory>
          <includes>
            <include>**/*</include>
          </includes>
        </testResource>
      </testResources>
    </build>
  </profile>
  <profile>
    <id>postgre</id>
    <build>
      <testResources>
        <testResource>
          <directory>src/test/resources-postgre</directory>
          <includes>
            <include>**/*</include>
          </includes>
        </testResource>
      </testResources>
    </build>
  </profile>
</profiles>
Depending on the profile activated, we instruct Maven to include and work with the xml files under a specific subfolder. So if we apply the following command: mvn clean test -Ph2 then Maven will include the persistence.xml and web.xml under the resources-h2 folder and our tests will make use of the internal H2 DB. If we issue instead: mvn clean test -Ppostgre then our test web archive will be packaged with a datasource definition specific to our local PostgreSQL server. Writing a simple test Eventually our new JUnit test is not very different from the previous one. Here is a screenshot indicating some key points. Some notes on the code above: The JUnit test and basic annotations are the same as in the previous post. The init() method is again the same; we just create and persist a new SimpleUser entity. The first major difference is the use of the ShrinkWrap API, which makes use of the test dependencies in our pom so we can locate the JDBC driver as a jar. Once located, ShrinkWrap makes sure to package it along with the rest of the resources and code in our test.war. Packaging only the JDBC driver, though, is NOT enough; in order for this to work, we need a datasource to be present (configured) in the server. We would like this to be automatic, meaning we don't want to preconfigure anything on our test WildFly server. We make use of the feature to define a datasource in web.xml (open it up in the code). The application server, once it scans the web.xml, will pick up the entry and configure a datasource under the java:jboss/datasources/testpostgre name.
So we have bundled the driver and the datasource definition, and we have a persistence.xml pointing to the correct datasource; we are ready to test. Our test method is similar to the previous one. We have modified the resources for the H2 profile a bit so that we package the same war structure every time. That means that if we run the test using the -Ph2 profile, the web.xml included is empty, because we don't actually need to define a datasource there, since the datasource is already deployed by WildFly. The persistence.xml, though, is different, because in one case the dialect defined is specific to H2 and in the other it is specific to PostgreSQL. You can follow the same principle and add a new resource subfolder, configure a datasource for another RDBMS, e.g. MySQL, and add the appropriate code to fetch the driver and package it along. You can get the code for this post on this bitbucket repo-tag. Resources: ShrinkWrap resolver API page (lots of nice examples for this powerful API), Defining Datasources for WildFly 8.1. Reference: Java EE7 and Maven project for newbies – part 7 from our JCG partner Paris Apostolopoulos at the Papo's log blog....

Behavior-Driven RESTful APIs

In the RESTBucks example, the authors present a useful state diagram that describes the actions a client can perform against the service. Where does such an application state diagram come from? Well, it's derived from the requirements, of course. Since I like to specify requirements using examples, let's see how we can derive an application state diagram from BDD-style requirements. Example: RESTBucks state diagram Here are the three scenarios for the Order a Drink story:

Scenario: Order a drink
  Given the RESTBucks service
  When I create an order for a large, semi milk latte for takeaway
  Then the order is created
  When I pay the order using credit card xxx1234
  Then I receive a receipt
  And the order is paid
  When I wait until the order is ready
  And I take the order
  Then the order is completed

Scenario: Change an order
  Given the RESTBucks service
  When I create an order for a large, semi milk latte for takeaway
  Then the order is created
  And the size is large
  When I change the order to a small size
  Then the order is created
  And the size is small

Scenario: Cancel an order
  Given the RESTBucks service
  When I create an order for a large, semi milk latte for takeaway
  Then the order is created
  When I cancel the order
  Then the order is canceled

Let's look at this in more detail, starting with the happy path scenario. Given the RESTBucks service When I create an order for a large, semi milk latte for takeaway The first line tells me there is a REST service, at some given billboard URL. The second line tells me I can use the POST method on that URI to create an Order resource with the given properties. Then the order is created This tells me the POST returns 201 with the location of the created Order resource. When I pay the order using credit card xxx1234 This tells me there is a pay action (link relation). Then I receive a receipt This tells me the response of the pay action contains the representation of a Receipt resource.
And the order is paid This tells me there is a link from the Receipt resource back to the Order resource. It also tells me the Order is now in paid status.    When I wait until the order is ready This tells me that I can refresh the Order using GET until some other process changes its state to ready.    And I take the order This tells me there is a take action (link relation).    Then the order is completed This tells me that the Order is now in completed state.    Analyzing the other two scenarios in similar fashion gives us a state diagram that is very similar to the original in the RESTBucks example.    The only difference is that this diagram here contains an additional action to navigate from the Receipt to the Order. This navigation is also described in the book, but not shown in the diagram in the book. Using BDD techniques for developing RESTful APIs Using BDD scenarios it’s quite easy to discover the application state diagram. This shouldn’t come as a surprise, since the Given/When/Then syntax of BDD scenarios is just another way of describing states and state transitions. From the application state diagram it’s only a small step to the complete resource model. When the resource model is implemented, you can re-use the BDD scenarios to automatically verify that the implementation matches the requirements. So all in all, BDD techniques can help us a lot when developing RESTful APIs.Reference: Behavior-Driven RESTful APIs from our JCG partner Remon Sinnema at the Secure Software Development blog....
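The states and transitions derived from the scenarios above can be sketched in code. Here is a minimal Java model of the order lifecycle (the enum, class and method names are my own for illustration, not part of RESTBucks): created orders may be paid, changed, or canceled; paid orders become ready; ready orders can be taken, completing them.

```java
import java.util.EnumSet;
import java.util.Set;

// States of an order, derived from the Given/When/Then scenarios.
enum OrderState { CREATED, PAID, READY, COMPLETED, CANCELED }

class Order {
    private OrderState state = OrderState.CREATED;

    // Allowed transitions per state: these are the "actions" a client may take.
    private static Set<OrderState> nextStates(OrderState from) {
        switch (from) {
            // a created order can be paid, canceled, or changed (stays created)
            case CREATED: return EnumSet.of(OrderState.PAID, OrderState.CANCELED, OrderState.CREATED);
            case PAID:    return EnumSet.of(OrderState.READY);
            case READY:   return EnumSet.of(OrderState.COMPLETED);
            default:      return EnumSet.noneOf(OrderState.class); // terminal states
        }
    }

    void transition(OrderState to) {
        if (!nextStates(state).contains(to)) {
            throw new IllegalStateException(state + " -> " + to + " is not allowed");
        }
        state = to;
    }

    OrderState state() { return state; }
}
```

Writing the model this way makes it easy to verify the BDD scenarios directly against the state machine, since each When step maps to one transition call.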

Hibernate and UUID identifiers

Introduction In my previous post I talked about UUID surrogate keys and the use cases when they are more appropriate than the more common auto-incrementing identifiers. A UUID database type There are several ways to represent a 128-bit UUID, and whenever in doubt I like to resort to Stack Exchange for expert advice. Because table identifiers are usually indexed, the more compact the database type, the less space the index will require. From the most efficient to the least, here are our options:

- Some databases (PostgreSQL, SQL Server) offer a dedicated UUID storage type
- Otherwise we can store the bits as a byte array (e.g. RAW(16) in Oracle or the standard BINARY(16) type)
- Alternatively we can use two bigint (64-bit) columns, but a composite identifier is less efficient than a single-column one
- We can store the hex value in a CHAR(36) column (e.g. 32 hex characters and 4 dashes), but this will take the most space, hence it's the least efficient alternative

Hibernate offers many identifier strategies to choose from, and for UUID identifiers we have three options:

- the assigned generator, accompanied by application-logic UUID generation
- the hexadecimal "uuid" string generator
- the more flexible "uuid2" generator, allowing us to use a java.util.UUID, a 16-byte array or a hexadecimal String value

The assigned generator The assigned generator allows the application logic to control the entity identifier generation process. By simply omitting the identifier generator definition, Hibernate will consider the assigned identifier. This example uses a BINARY(16) column type, since the target database is HSQLDB.
@Entity(name = "assignedIdentifier") public static class AssignedIdentifier {@Id @Column(columnDefinition = "BINARY(16)") private UUID uuid;public AssignedIdentifier() { }public AssignedIdentifier(UUID uuid) { this.uuid = uuid; } } Persisting an Entity: session.persist(new AssignedIdentifier(UUID.randomUUID())); session.flush(); Generates exactly one INSERT statement: Query:{[insert into assignedIdentifier (uuid) values (?)][[B@76b0f8c3]} Let’s see what happens when issuing a merge instead: session.merge(new AssignedIdentifier(UUID.randomUUID())); session.flush(); We get both a SELECT and an INSERT this time: Query:{[select assignedid0_.uuid as uuid1_0_0_ from assignedIdentifier assignedid0_ where assignedid0_.uuid=?][[B@23e9436c]} Query:{[insert into assignedIdentifier (uuid) values (?)][[B@2b37d486]} The persist method takes a transient entity and attaches it to the current Hibernate session. If there is an already attached entity or if the current entity is detached we’ll get an exception. The merge operation will copy the current object state into the existing persisted entity (if any). This operation works for both transient and detached entities, but for transient entities persist is much more efficient than the merge operation. For assigned identifiers, a merge will always require a select, since Hibernate cannot know if there is already a persisted entity having the same identifier. For other identifier generators Hibernate looks for a null identifier to figure out if the entity is in the transient state. 
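Since the mapping above stores the UUID in a BINARY(16) column, it may help to see what that representation is in plain Java, independent of Hibernate: a sketch of packing the UUID's two 64-bit halves into the 16 bytes such a column holds, and rebuilding the UUID from them (the helper class name is my own):

```java
import java.nio.ByteBuffer;
import java.util.UUID;

final class UuidBytes {

    // Pack the two 64-bit halves of a UUID into a 16-byte array,
    // the layout a BINARY(16)/RAW(16) column would store.
    static byte[] toBytes(UUID uuid) {
        ByteBuffer buffer = ByteBuffer.allocate(16);
        buffer.putLong(uuid.getMostSignificantBits());
        buffer.putLong(uuid.getLeastSignificantBits());
        return buffer.array();
    }

    // Rebuild the UUID from the stored 16 bytes.
    static UUID fromBytes(byte[] bytes) {
        ByteBuffer buffer = ByteBuffer.wrap(bytes);
        return new UUID(buffer.getLong(), buffer.getLong());
    }
}
```

This round trip is lossless, which is why the byte-array representation is both compact and exact.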
That’s why the Spring Data SimpleJpaRepository#save(S entity) method is not the best choice for entities using an assigned identifier: @Transactional public <S extends T> S save(S entity) { if (entityInformation.isNew(entity)) { em.persist(entity); return entity; } else { return em.merge(entity); } } For assigned identifiers, this method will always pick merge instead of persist, hence you will get both a SELECT and an INSERT for every newly inserted entity. The UUID generators This time we won’t assign the identifier ourselves but have Hibernate generate it on our behalf. When a null identifier is encountered, Hibernate assumes a transient entity, for which it generates a new identifier value. This time, the merge operation won’t require a select query prior to inserting a transient entity. The UUIDHexGenerator The UUID hex generator is the oldest UUID identifier generator and it’s registered under the “uuid” type. It can generate a 32-character hexadecimal UUID string value (it can also use a separator) following the pattern 8{sep}8{sep}4{sep}8{sep}4. This generator is not IETF RFC 4122 compliant; the RFC uses the 8-4-4-4-12 digit representation. @Entity(name = "uuidIdentifier") public static class UUIDIdentifier {@GeneratedValue(generator = "uuid") @GenericGenerator(name = "uuid", strategy = "uuid") @Column(columnDefinition = "CHAR(32)") @Id private String uuidHex; } Persisting or merging a transient entity: session.persist(new UUIDIdentifier()); session.flush(); session.merge(new UUIDIdentifier()); session.flush(); Generates one INSERT statement per operation: Query:{[insert into uuidIdentifier (uuidHex) values (?)][2c929c6646f02fda0146f02fdbfa0000]} Query:{[insert into uuidIdentifier (uuidHex) values (?)][2c929c6646f02fda0146f02fdbfc0001]} You can check out the string parameter value sent to the SQL INSERT queries. The UUIDGenerator The newer UUID generator is IETF RFC 4122 compliant (variant 2) and it offers pluggable generation strategies.
It’s registered under the “uuid2” type and it offers a broader type range to choose from: a java.util.UUID, a 16-byte array, or a hexadecimal String value. @Entity(name = "uuid2Identifier") public static class UUID2Identifier {@GeneratedValue(generator = "uuid2") @GenericGenerator(name = "uuid2", strategy = "uuid2") @Column(columnDefinition = "BINARY(16)") @Id private UUID uuid; } Persisting or merging a transient entity: session.persist(new UUID2Identifier()); session.flush(); session.merge(new UUID2Identifier()); session.flush(); Generates one INSERT statement per operation: Query:{[insert into uuid2Identifier (uuid) values (?)][[B@68240bb]} Query:{[insert into uuid2Identifier (uuid) values (?)][[B@577c3bfa]} These SQL INSERT queries use a byte array, as we configured in the @Id column definition. Code available on GitHub. Reference: Hibernate and UUID identifiers from our JCG partner Vlad Mihalcea at the Vlad Mihalcea’s Blog blog....
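As a footnote to the RFC 4122 discussion above, the canonical form and variant are easy to check with plain Java: java.util.UUID.toString() always produces the 8-4-4-4-12 representation, and randomUUID() yields variant 2 (the IETF variant), version 4 values. A small sketch (the class name is my own):

```java
import java.util.UUID;

class UuidFormatDemo {

    // Canonical RFC 4122 textual form: 8-4-4-4-12 hexadecimal digits.
    static final String RFC_4122_PATTERN =
        "\\p{XDigit}{8}-\\p{XDigit}{4}-\\p{XDigit}{4}-\\p{XDigit}{4}-\\p{XDigit}{12}";

    static boolean isCanonical(String value) {
        return value.matches(RFC_4122_PATTERN);
    }
}
```

Note how this differs from the 8-8-4-8-4 pattern produced by the legacy "uuid" hex generator described earlier.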

Graduate School: ”Do… Or Do Not. There Is No Try” – Yoda

I recently completed my Master of Science in Computer Science. There were both good and bad experiences about graduate school while working full time, and I wanted to share them to help those who are considering taking that leap. Before I do, I wanted to provide a little history on why I chose to pursue a graduate degree in Computer Science. My undergraduate and first graduate background is in Biomedical Engineering. As my emphasis was in signal processing, I was exposed to a curriculum that focused on logic and coding. I enjoyed these classes the most. Therefore, I chose to get a more formal education in Computer Science when I moved to Kansas City in 2010. Typically, I would start off a blog post with the good news first, but I don’t want to end with the bad and scare off folks who are on the fence about deciding whether or not to further their education. My intention is to provide some honest opinion and feedback that may save you from the bad aspects of a graduate program, rather than turning you off from taking a very beneficial and self-rewarding step. The Not-So-Good My bad experiences are more complaints than anything else. Not all graduate students are full-time students and, in my experience, many professors don’t understand this. I remember countless “busy work” assignments that never helped me gain knowledge of the subject covered in class. For example, students were assigned to find and read published papers and in turn write mandatory one-page reports. This was a weekly activity throughout the semester. These assignments would eat up my free time at home, and, to this day, I can’t quantify the value of any of the critiques that I wrote. I mentioned free time in that last sentence. I will be the first to say that graduate school is a commitment and it requires sacrifices. I remember countless nights and weekends when I would rather have been doing anything else other than homework.
As a tip to those considering this move, I would recommend researching the course structure and teaching methodology to make sure the course curriculum is synced up with your goals. Needless to say, having a supportive work environment and spouse is a must. I was fortunate to have both, so thank you Keyhole, and a shout out to my husband. I was under the assumption that creativity and innovation are welcome in a higher education setting, but found that wasn’t always the case. For example, I once solved a problem on an exam by applying an algorithm that was different from the way the professor used to solve the same problem. Points were deducted, and when I sat down with the professor to understand why my approach was wrong, I was told “that is not the way I would solve the problem.” I could be wrong, but as best I could tell, I was punished for solving the problem in a creative, efficient way. The professor didn’t appreciate it, as it was a different way of getting to the answer. I swallowed my pride. I told myself to keep looking for ways to be creative and focus on learning rather than worrying about my grades. The Good Now for the good experiences about graduate school. Grad school helped me obtain the basic theoretical Computer Science background that I was lacking, as I came from a Biomedical Engineering program. Furthermore, I really enjoyed the small-scale projects that helped me understand the lifecycle of a project holistically. Things like IT project management, business and system requirements, design, coding, and testing were all part of the projects. The small-scale projects helped me understand why and how things are structured in a real work setting. At the same time, some topics, like architecture, cannot be fully understood in small projects; the importance of architecture and good coding practice can only really be appreciated in a large project.
I experienced this first-hand, as I was able to apply these qualities in my full-time role as a Keyhole consultant. On the flip side, I was able to take things I was learning on work projects and apply them to school projects. I would say there was a good balance in applying the skills I learned between work and school projects, which furthered my learning. Final Thoughts In conclusion, I found graduate school very beneficial for me. As a suggestion for those considering this leap: research the curriculum and make sure it is in line with your learning objectives and expectations. I would also highly recommend having real-world experience before starting graduate school, as I found that work experience helps you succeed in your graduate studies. Graduate school is a big commitment and you need to be mentally prepared for the time commitment that it requires. Keep learning; innovate; never become complacent, as a mind is a terrible thing to waste. Reference: Graduate School: ”Do… Or Do Not. There Is No Try” – Yoda from our JCG partner Jinal Patel at the Keyhole Software blog....

Software Defined Everything

The other day taxis in London were on strike because Uber was setting up shop in London. Do you know a lot of people that still send paper letters? Book holiday flights via a travel agent? Buy books in book stores? Rent DVD movies? Five smart programmers can bring down a whole multi-billion industry and change people’s habits. It has long been known that any company that changes people’s habits becomes a multi-billion company. Cereals for breakfast, brown-coloured sweet water, throw-away shaving equipment, online bookstore, online search & ads, etc. You probably figured out the name of the brand already. Software Defined Everything is Accelerating The Cloud, crowd funding, open source, open hardware, 3D printing, Big Data, machine learning, Internet of Things, mobile, wearables, nanotechnology, social networks, etc. all seem like individual technology innovations. However, things are changing. Your Fitbit will send your vital signs via your mobile to the cloud, where deep belief networks analyse them and find out that you are stressed. Your smart hub detects you are approaching your garage, and your Arduino controller, linked to your IP camera encased in a 3D-printed housing, detects that you brought a visitor. A LinkedIn and Facebook image scan finds that your visitor is your boss’s boss. Your Fitbit and Google Calendar have given away over the last months that whenever you have a meeting with your boss’s boss, you get stressed. Your boss’s boss’s music preferences are guessed based on public information available on social networks. Your smart watch gets a push notification with the personal profile data that could be gathered about your boss’s boss: he has two boys and a girl, got recently divorced, the girl recently won a chess award, a tagged Facebook picture shows him in a golf tournament three weeks ago, an Amazon book review indicates that he likes Shakespeare but only the early work, etc. All of a sudden your house shows pictures of that one time you played golf.
Music plays according to what 96.5% of Shakespeare lovers like, from a crowd-funded Bluetooth in-house speaker system… It might be a bit far-fetched, but what used to be disjoint technologies and innovations are fast coming together. Those companies that can both understand the latest cutting-edge innovations and apply them to improve their customers’ lives or solve business problems will have a big competitive edge. Software is fast defining more and more industries. Media, logistics, telecom, banking, retail, industrial, even agriculture will see major changes due to software (and hardware) innovations. What should you do? If you are technology savvy? You should look for customers that want faster horses and draw a picture of a car. Make a slide deck. Get feedback and adjust. Build a prototype. Get feedback and adjust. Create a minimum viable product. Get feedback and adjust… Change the world. If you have a business problem and money but are not technology savvy? Organise a competition in which you ask people to solve your problem and give prizes for the best solution. You will be amazed by what can come out of these. If you work in a traditional industry and think software is not going to redefine what you do? Call your investment manager and ask if you have enough money in the bank to retire in case you got fired next year and couldn’t find a job any more. If the answer is no, then start reading the top of the blog post again…Reference: Software Defined Everything from our JCG partner Maarten Ectors at the Telruptive blog....

Agile VS Real Life

The Agile Manifesto tells us: “We have come to value Individuals and Interactions over Processes and Tools.” Reality tells us otherwise. Want to do unit testing? Pick up a test framework and you’re good to go. Want your organization to be agile? Scrum is very simple, and SAFe is scaled simplicity. We know there are no magic bullets, yet we’re still attracted to pre-wrapped solutions. Why? Good question. We’re not stupid, most of us anyway. Yet we find it very easy to make up a story about how easy it’s going to be. Here are a couple of theories.

We’re concentrating on short-term gains. Whether it’s the start-up pushing for a lucrative exit by beating the market, or the enterprise looking at the next investor call, companies are pushing their people to maximize value in the short term. With that in mind, people look for a “proven” tool or process that minimizes long-term investments. In fact, systems punish people if they do otherwise.

We don’t understand complexity. Think about how many systems we’re part of, how they impact each other, and then consider the things we haven’t thought about. That’s overwhelming. Our wee brain just got out of fight-or-flight mode; you want it to do full planning and execution with all those question marks? People are hard. Better get back to dry land where tools and processes are actually working.

We’re biased in so many ways. One of our biases is called anchoring. Simply put, if we first hear of something, we compare everything to it. It becomes our anchor. Now, when you’re researching a new area, do you start with the whole methodology? Nope. We’re looking for examples similar to our experiences. What comes out first when we search? The simple stuff. Tools and processes. Once we start there, there’s no way back.

We don’t live well with uncertainty. Short term is fine, because we have the illusion of control over it. Because of complexity, long-term is so out of our reach we give up, and try to concentrate on short-term wins.
We don’t like to read the small print. Small print hurts the future-perfect view. We avoid the context issues; we tell ourselves that the fine print applies to a minority of cases, which obviously we don’t belong to. Give us the short-short version, and we’ll take it from there.

We like to be part of the group. Groups are comfy. Belonging to one removes anxiety. Many companies choose Scrum because it works for them, so why won’t it work for me? The only people who publish big methodology papers are from academia, and that’s one group we don’t want to be part of, heaven forbid.

That’s why we like processes and tools. Fighting that is not only hard, but may carry a penalty. So what’s the solution? Looking for simplicity again? So soon? Well, the good news is that it is possible with discipline. If we have enough breathing room, if we don’t get pushback from the rest of our company, if we acknowledge that we need to invest in learning, and if we understand that processes and tools are just the beginning – then there’s hope for us yet. Lots of ifs. But if you don’t want to bother, just go with this magical framework. Reference: Agile VS Real Life from our JCG partner Gil Zilberfeld at the Geek Out of Water blog....

Goodbye Sense – Welcome Alternatives?

I only recently noticed that Sense, the Chrome plugin for Elasticsearch, has been pulled from the app store by its creator. There are quite strong opinions in this thread, and I would like to have Sense as a Chrome plugin as well. But I am also totally fine with Elasticsearch as a company trying to monetize some of its products, so that is maybe something we just have to accept. What is interesting is that it isn’t even possible to fork the project and keep developing it, as there is no explicit license in the repo. I guess there is a lesson buried somewhere in here. In this post I would like to look at some of the alternatives for interacting with Elasticsearch. Though the good thing about Sense is that it is independent of the Elasticsearch installation, we are looking at plugins here. It might be possible to use some of them without installing them in Elasticsearch, but I didn’t really try. The plugins generally do more things, but I am looking at the REST capabilities only. Marvel Marvel is the commercial plugin by Elasticsearch (free for development purposes). Though it does lots of additional things, it contains the new version of Sense. Marvel will track lots of the state of and interaction with Elasticsearch in a separate index, so be aware that it might store quite some data. Also, of course, you need to respect the license; when using it on a production system you need to pay. The main Marvel dashboard, which is Kibana, is available at http://localhost:9200/_plugin/marvel. Sense can be accessed directly using http://localhost:9200/_plugin/marvel/sense/index.html. The Sense version of Marvel behaves exactly like the one you are used to from the Chrome plugin. It has highlighting, autocompletion (even for new features), the history and the formatting. elasticsearch-head elasticsearch-head seems to be one of the oldest plugins available for Elasticsearch and it is recommended a lot.
The main dashboard is available at http://localhost:9200/_plugin/head/, which contains the cluster overview. There is an interface for building queries on the Structured Query tab. It lets you execute queries by selecting values from dropdown boxes, and it can even detect the fields that are available for the index and type. Results are displayed in a table. Unfortunately the values that can be selected are rather outdated. Instead of the match query it still contains the text query, which has been deprecated since Elasticsearch 0.19.9 and is not available anymore in newer versions of Elasticsearch. Another interface, on the Any Request tab, lets you execute custom requests. The text box that accepts the body has no highlighting and it is not possible to use tabs, but errors will be displayed, the response is formatted, links are set, and you have the option to use a table or the JSON format for responses. The history lets you execute older queries. There are other options like the Result Transformer that sound interesting, but I have never tried those. elasticsearch-kopf elasticsearch-kopf is a clone of elasticsearch-head that also provides an interface to send arbitrary requests to Elasticsearch. You can enter queries and have them executed for you. There is a request history, you have highlighting and you can format the request document, but unfortunately the interface is missing autocompletion. If you’d like to learn more about elasticsearch-kopf, I have recently published a tour through its features. Inquisitor Inquisitor is a tool to help you understand Elasticsearch queries. Besides other options it allows you to execute search queries. Index and type can be chosen from the ones available in the cluster. There is no formatting in the query field (you can’t even use tabs for indentation), but errors in your query are displayed in the panel on top of the results while typing. The response is displayed in a table, and matching fields are automatically highlighted.
Because of the limited possibilities when entering text, the plugin seems to be more useful when it comes to the analyzing part or for pasting existing queries.

Elastic-Hammer

Andrew Cholakian, the author of Exploring Elasticsearch, has published another query tool, Elastic-Hammer. It can either be installed locally or used as an online version directly. It is a quite useful query tool that will display syntactic errors in your query and format images and links in a pretty response. It even offers autocompletion, though not as elaborate as the one Sense and Marvel are providing: it will display any allowed term, no matter the context. So you can't really see which terms are currently allowed, only that the term is allowed at all. Nevertheless this can be useful. Searches can also be saved in local storage and executed again.

Conclusion

Currently none of the free and open source plugins seems to provide an interface that is as good as the one contained in Sense and Marvel. As Marvel is free for development you can still use it, but you need to install it in the instances again. Sense was more convenient and easier to start with, but I guess one can get along with Marvel the same way. Finally, I wouldn't be surprised if someone from the very active Elasticsearch community comes up with another tool that can take the place of Sense again.

Reference: Goodbye Sense – Welcome Alternatives? from our JCG partner Florian Hopf at the Dev Time blog....
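Worth noting: none of these tools is strictly required, because everything they do ultimately goes through Elasticsearch's REST API. As a minimal illustration (not tied to any of the plugins above; the class name, helper names, and the node on localhost:9200 are assumptions of this sketch), a plain Java client can build and send the same _search requests that Sense would:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class PlainRestQuery {

    // Builds the _search endpoint for an index; host, port and index are assumptions.
    public static String searchUrl(String host, int port, String index) {
        return "http://" + host + ":" + port + "/" + index + "/_search";
    }

    // POSTs a JSON query body to the node and returns the raw JSON response.
    public static String post(String url, String body) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        // read the whole response body as one string
        try (Scanner s = new Scanner(conn.getInputStream(), "UTF-8")) {
            return s.useDelimiter("\\A").hasNext() ? s.next() : "";
        }
    }

    public static void main(String[] args) throws Exception {
        String url = searchUrl("localhost", 9200, "_all");
        System.out.println(url); // prints http://localhost:9200/_all/_search
        // Against a live node you would run, e.g.:
        // System.out.println(post(url, "{\"query\":{\"match_all\":{}}}"));
    }
}
```

main only prints the assembled URL so the sketch runs without a live node; against a running instance you would pass that URL to post() together with a query body.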

Java SE 8 new features tour: Functional programming with Lambda Expression

This article of the “Java SE 8 new features tour” series will dive deep into understanding Lambda expressions. I will show you a few different uses of Lambda expressions. They all have in common the implementation of functional interfaces. I will explain how the compiler infers information from code, such as specific types of variables, and what is really happening in the background. In the previous article, “Java SE 8 new features tour: The Big change, in Java Development world”, I talked about what we are going to explore during this series. I started with an introduction to the main Java SE 8 features, followed by the installation process of JDK 8 on both Microsoft Windows and Apple Mac OS X platforms, with important advice and notes to take care of. Finally, we went through the development of a console application powered by a Lambda expression to make sure that we had installed Java SE 8 properly. Source code is hosted on my GitHub account: Clone from HERE.

What is a Lambda expression?

Perhaps the best-known new feature of Java SE 8 is called Project Lambda, an effort to bring Java into the world of functional programming. In computer science terminology, a lambda is an anonymous function, that is, a function without a name. In Java, all functions are members of classes, and are referred to as methods. To create a method, you need to define the class of which it's a member. A lambda expression in Java SE 8 lets you define a class and a single method with very concise syntax, implementing an interface that has a single abstract method. Let's figure out the idea. Lambda expressions let developers simplify and shorten their code, making it more readable and maintainable, and eliminating more verbose class declarations. Let's take a look at a few code snippets.

Implementing an interface: Prior to Java SE 8, if you wanted to create a thread, you'd first define a class that implements the Runnable interface.
This is an interface that has a single abstract method named run that accepts no arguments. You might define the class in its own code file, a file named MyRunnable.java. And you might name the class MyRunnable, as I've done here. And then you'd implement the single abstract method.

public class MyRunnable implements Runnable {

    @Override
    public void run() {
        System.out.println("I am running");
    }

    public static void main(String[] args) {
        MyRunnable r1 = new MyRunnable();
        new Thread(r1).start();
    }
}

In this example, my implementation outputs a literal string to the console. You would then take that object and pass it to an instance of the Thread class. I'm instantiating my runnable as an object named r1, passing it to the Thread's constructor and calling the Thread's start method. My code will now run in its own thread and its own memory space.

Implementing an inner class: You could improve on this code a bit. Instead of declaring your class in a separate file, you might declare it as a single-use class, known as an inner class, local to the method in which it's used.

public static void main(String[] args) {
    Runnable r1 = new Runnable() {
        @Override
        public void run() {
            System.out.println("I am running");
        }
    };
    new Thread(r1).start();
}

So now, I'm once again creating an object named r1, but I'm implementing the interface directly where it's declared and, once again, implementing its single abstract method. Then I'm passing the object to the Thread's constructor.

Implementing an anonymous class: And you can make it even more concise by declaring the class as an anonymous class, so named because it's never given a name. I'm instantiating the Runnable interface and immediately passing it to the Thread constructor. I'm still implementing the run method and I'm still calling the Thread's start method.
public static void main(String[] args) {
    new Thread(new Runnable() {
        @Override
        public void run() {
            System.out.println("I am running");
        }
    }).start();
}

Using a lambda expression: In Java SE 8 you can refactor this code to significantly reduce it and make it a lot more readable. The lambda version might look like this.

public static void main(String[] args) {
    Runnable r1 = () -> System.out.println("I am running");
    new Thread(r1).start();
}

I'm declaring an object with a type of Runnable, but now I'm using a single line of code to declare the single abstract method implementation, and then once again I'm passing the object to the Thread's constructor. You are still implementing the Runnable interface and calling its run method, but you're doing it with a lot less code. In addition, it could be improved as the following:

public static void main(String[] args) {
    new Thread(() -> System.out.println("I am running")).start();
}

Here is an important quote from an early specs document about Project Lambda, by Brian Goetz: "Lambda expressions can only appear in places where they will be assigned to a variable whose type is a functional interface." Let's break this down to understand what's happening.

What are functional interfaces? A functional interface is an interface that has only a single custom abstract method, that is, one that is not inherited from the Object class. Java has many of these interfaces, such as Runnable, Comparable, Callable, TimerTask and many others. Prior to Java 8, they were known as Single Abstract Method or SAM interfaces. In Java 8 we now call them functional interfaces.

Lambda expression syntax: This lambda expression is returning an implementation of the Runnable interface; it has two parts separated by a new bit of syntax called the arrow token or the lambda operator. The first part of the lambda expression, before the arrow token, is the signature of the method you're implementing.
In this example, it’s a no arguments method so it’s represented just by parentheses. But if I’m implementing a method that accepts arguments, I would simply give the arguments names. I don’t have to declare their types. Because the interface has only a single abstract method, the data types are already known. And one of the goals of a lambda expression is to eliminate unnecessary syntax. The second part of the expression, after the arrow token, is the implementation of the single method’s body. If it’s just a single line of code, as with this example, you don’t need anything else. To implement a method body with multiple statements, wrap them in braces. Runnable r = ( ) -> { System.out.println("Hello!"); System.out.println("Lambda!"); }; Lambda Goals: Lambda Expressions can reduce the amount of code you need to write and the number of custom classes you have to create and maintain. If you’re implementing an interface for one-time use, it doesn’t always make sense to create yet another code file or yet another named class. A Lambda Expression can define an anonymous implementation for one time use and significantly streamline your code. Defining and instantiating a functional interface To get started learning about Lambda expressions, I’ll create a brand new functional interface. An interface with a single abstract method, and then I’ll implement that interface with the Lambda expression. You can use my source code project “JavaSE8-Features” hosted on github to navigate the project code.Method without any argument, Lambda implementation In my source code, I’ll actually put the interface into its own sub-package ending with lambda.interfaces. And I’ll name the interface, HelloInterface.In order to implement an interface with a lambda expression, it must have a single abstract method. I will declare a public method that returns void, and I’ll name it doGreeting. 
It won’t accept any arguments.That is all you need to do to make an interface that’s usable with Lambda expressions. If you want, you can use a new annotation, that’s added to Java SE 8, named Functional Interface. /** * * @author mohamed_taman */ @FunctionalInterface public interface HelloInterface { void doGreeting(); } Now I am ready to create a new class UseHelloInterface under lambda.impl package, which will instantiate my functional interface (HelloInterface) as the following: /** * @author mohamed_taman */ public class UseHelloInterface { public static void main(String[] args) { HelloInterface hello = ()-> out.println("Hello from Lambda expression"); hello.doGreeting(); } } Run the file and check the result, it should run and output the following. ------------------------------------------------------------------------------------ --- exec-maven-plugin:1.2.1:exec (default-cli) @ Java8Features --- Hello from Lambda expression ------------------------------------------------------------------------------------ So that’s what the code can look like when you’re working with a single abstract method that doesn’t accept any arguments. Let’s take a look at what it looks like with arguments.Method with any argument, Lambda implementation Under lambda.interfaces. I’ll create a new interface and name it CalculatorInterface. Then I will declare a public method that returns void, and I will name it doCalculate, which will receive two integer arguments value1 and value2. 
/**
 * @author mohamed_taman
 */
@FunctionalInterface
public interface CalculatorInterface {
    public void doCalculate(int value1, int value2);
}

Now I am ready to create a new class UseCalculatorInterface under the lambda.impl package, which will instantiate my functional interface (CalculatorInterface) as the following:

public static void main(String[] args) {
    CalculatorInterface calc = (v1, v2) -> {
        int result = v1 * v2;
        out.println("The calculation result is: " + result);
    };
    calc.doCalculate(10, 5);
}

Note the doCalculate() arguments: they were named value1 and value2 in the interface, but you can name them anything here. I'll name them v1 and v2. I don't need to put int before the argument names; that information is already known, because the compiler can infer it from the functional interface method signature. Run the file and check the result; it should run and output the following.

------------------------------------------------------------------------------------
--- exec-maven-plugin:1.2.1:exec (default-cli) @ Java8Features ---
The calculation result is: 50
------------------------------------------------------------------------------------
BUILD SUCCESS

Always bear in mind the following rule: the interface can have only one abstract method. Then that interface and its single abstract method can be implemented with a lambda expression.

Using built-in functional interfaces with lambdas: I've previously described how to use a lambda expression to implement an interface that you've created yourself. Now, I'll show lambda expressions with built-in interfaces, interfaces that are a part of the Java runtime. I'll use two examples. I'm working in a package called lambda.builtin that's a part of the exercise files, and I'll start with this class, UseThreading. In this class, I'm implementing the Runnable interface.
This interface’s a part of the multithreaded architecture of Java.My focus here is on how you code, not in how it operates. I’m going to show how to use lambda expressions to replace these inner classes. I’ll comment out the code that’s declaring the two objects. Then I’ll re-declare them and do the implementation with lambdas. So let’s start. public static void main(String[] args) { //Old version // Runnable thrd1 = new Runnable(){ // @Override // public void run() { // out.println("Hello Thread 1."); // } //}; /* ***************************************** * Using lambda expression inner classes * ***************************************** */ Runnable thrd1 = () -> out.println("Hello Thread 1."); new Thread(thrd1).start(); // Old Version /* new Thread(new Runnable() { @Override public void run() { out.println("Hello Thread 2."); } }).start(); */ /* ****************************************** * Using lambda expression anonymous class * ****************************************** */ new Thread(() -> out.println("Hello Thread 2.")).start(); } Let’s look at another example. I will use a Comparator. The Comparator is another functional interface in Java, which has a single abstract method. This method is the compare method.Open the file UseComparator class, and check the commented bit of code, which is the actual code before refactoring it to lambda expression. 
public static void main(String[] args) {
    List<String> values = new ArrayList<>();
    values.add("AAA");
    values.add("bbb");
    values.add("CCC");
    values.add("ddd");
    values.add("EEE");

    // Case sensitive sort operation
    sort(values);
    out.println("Simple sort:");
    print(values);

    // Case insensitive sort operation with anonymous class
    /*
    Collections.sort(values, new Comparator<String>() {
        @Override
        public int compare(String o1, String o2) {
            return o1.compareToIgnoreCase(o2);
        }
    });
    */

    // Case insensitive sort operation with Lambda
    sort(values, (o1, o2) -> o1.compareToIgnoreCase(o2));
    out.println("Sort with Comparator");
    print(values);
}

As before, it doesn't provide you any performance benefit. The underlying functionality is exactly the same. Whether you declare your own classes, use inner or anonymous inner classes, or lambda expressions is completely up to you. In the next article of this series, we will explore and code how to traverse collections using lambda expressions, filter collections with Predicate interfaces, traverse collections with method references, implement default methods in interfaces, and finally implement static methods in interfaces.

Resources:
The Java Tutorials, Lambda Expressions
JSR 310: Date and Time API
JSR 337: Java SE 8 Release Contents
OpenJDK website
Java Platform, Standard Edition 8, API Specification

Reference: Java SE 8 new features tour: Functional programming with Lambda Expression from our JCG partner Mohamed Taman at the Improve your life Through Science and Art blog....
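As a small teaser for the method references mentioned above: the case-insensitive comparator lambda from the last example simply forwards to an existing method, so in Java SE 8 it can also be written as a method reference. A minimal, self-contained sketch (the class name is made up for the example):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class UseMethodReference {
    public static void main(String[] args) {
        List<String> values = new ArrayList<>(Arrays.asList("bbb", "AAA", "ddd", "CCC"));

        // The lambda (o1, o2) -> o1.compareToIgnoreCase(o2) merely forwards to an
        // existing instance method, so a method reference says the same thing.
        Collections.sort(values, String::compareToIgnoreCase);

        System.out.println(values); // prints [AAA, bbb, CCC, ddd]
    }
}
```

The compiler matches the reference against Comparator's single abstract method exactly as it does for the lambda, so the two forms are interchangeable here.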

Getting an Infinite List of Primes in Java

A common problem is to determine the prime factorization of a number. The brute force approach is trial division (Wikipedia, Khan Academy) but that requires a lot of wasted effort if multiple numbers must be factored. One widely used solution is the Sieve of Eratosthenes (Wikipedia, Math World). It is easy to modify the Sieve of Eratosthenes to contain the largest prime factor of each composite number. This makes it extremely cheap to subsequently compute the prime factorization of numbers. If we only care about primality we can either use a bitmap with the Sieve of Eratosthenes, or use the Sieve of Atkin. (Sidenote: for clarity I'm leaving out the common optimizations that follow from the facts that a prime number is always “1 mod 2, n > 2” and “1 or 5 mod 6, n > 5”. This can substantially reduce the amount of memory required for a sieve.)

public enum SieveOfEratosthenes {
    SIEVE;

    private int[] sieve;

    private SieveOfEratosthenes() {
        // initialize with first million primes - 15485865
        // initialize with first 10k primes - 104729
        sieve = initialize(104729);
    }

    /**
     * Initialize the sieve.
     */
    private int[] initialize(int sieveSize) {
        long sqrt = Math.round(Math.ceil(Math.sqrt(sieveSize)));
        int actualSieveSize = (int) (sqrt * sqrt);

        // data is initialized to zero
        int[] sieve = new int[actualSieveSize];

        for (int x = 2; x < sqrt; x++) {
            if (sieve[x] == 0) {
                for (int y = 2 * x; y < actualSieveSize; y += x) {
                    sieve[y] = x;
                }
            }
        }

        return sieve;
    }

    /**
     * Is this a prime number?
     *
     * @FIXME handle n >= sieve.length!
     *
     * @param n
     * @return true if prime
     * @throws IllegalArgumentException
     *             if negative number
     */
    public boolean isPrime(int n) {
        if (n < 0) {
            throw new IllegalArgumentException("value must be non-negative");
        }

        boolean isPrime = sieve[n] == 0;

        return isPrime;
    }

    /**
     * Factorize a number
     *
     * @FIXME handle n >= sieve.length!
     *
     * @param n
     * @return map of prime divisors (key) and exponent (value)
     * @throws IllegalArgumentException
     *             if negative number
     */
    private Map<Integer, Integer> factorize(int n) {
        if (n < 0) {
            throw new IllegalArgumentException("value must be non-negative");
        }

        final Map<Integer, Integer> factors = new TreeMap<Integer, Integer>();

        for (int factor = sieve[n]; factor > 0; factor = sieve[n]) {
            if (factors.containsKey(factor)) {
                factors.put(factor, 1 + factors.get(factor));
            } else {
                factors.put(factor, 1);
            }

            n /= factor;
        }

        // must add final term
        if (factors.containsKey(n)) {
            factors.put(n, 1 + factors.get(n));
        } else {
            factors.put(n, 1);
        }

        return factors;
    }

    /**
     * Convert a factorization to a human-friendly string. The format is a
     * comma-delimited list where each element is either a prime number p (as
     * "p"), or the nth power of a prime number as "p^n".
     *
     * @param factors
     *            factorization
     * @return string representation of factorization.
     * @throws IllegalArgumentException
     *             if negative number
     */
    public String toString(Map<Integer, Integer> factors) {
        StringBuilder sb = new StringBuilder(20);

        for (Map.Entry<Integer, Integer> entry : factors.entrySet()) {
            sb.append(", ");

            if (entry.getValue() == 1) {
                sb.append(String.valueOf(entry.getKey()));
            } else {
                sb.append(String.valueOf(entry.getKey()));
                sb.append("^");
                sb.append(String.valueOf(entry.getValue()));
            }
        }

        return sb.substring(2);
    }
}

This code has a major weakness: it will fail if the requested number is out of range. There is an easy fix: we can dynamically resize the sieve as required. We use a Lock to ensure multithreaded calls don't see the sieve in an intermediate state. We need to be careful to avoid getting into a deadlock between the read and write locks.

private final ReadWriteLock lock = new ReentrantReadWriteLock();

/**
 * Initialize the sieve. This method is called when it is necessary to grow
 * the sieve.
 */
private void reinitialize(int n) {
    try {
        lock.writeLock().lock();
        // allocate 50% more than required to minimize thrashing.
        sieve = initialize((3 * n) / 2);
    } finally {
        lock.writeLock().unlock();
    }
}

/**
 * Is this a prime number?
 *
 * @param n
 * @return true if prime
 * @throws IllegalArgumentException
 *             if negative number
 */
public boolean isPrime(int n) {
    if (n < 0) {
        throw new IllegalArgumentException("value must be non-negative");
    }

    if (n >= sieve.length) {
        reinitialize(n);
    }

    boolean isPrime = false;
    try {
        lock.readLock().lock();
        isPrime = sieve[n] == 0;
    } finally {
        lock.readLock().unlock();
    }

    return isPrime;
}

/**
 * Factorize a number
 *
 * @param n
 * @return map of prime divisors (key) and exponent (value)
 * @throws IllegalArgumentException
 *             if negative number
 */
private Map<Integer, Integer> factorize(int n) {
    if (n < 0) {
        throw new IllegalArgumentException("value must be non-negative");
    }

    final Map<Integer, Integer> factors = new TreeMap<Integer, Integer>();

    try {
        if (n >= sieve.length) {
            reinitialize(n);
        }

        lock.readLock().lock();
        for (int factor = sieve[n]; factor > 0; factor = sieve[n]) {
            if (factors.containsKey(factor)) {
                factors.put(factor, 1 + factors.get(factor));
            } else {
                factors.put(factor, 1);
            }

            n /= factor;
        }
    } finally {
        lock.readLock().unlock();
    }

    // must add final term
    if (factors.containsKey(n)) {
        factors.put(n, 1 + factors.get(n));
    } else {
        factors.put(n, 1);
    }

    return factors;
}

Iterable<Integer> and foreach loops

In the real world it's often easier to use a foreach loop (or an explicit Iterator) than to probe a table item by item. Fortunately it's easy to create an iterator that's built on top of our self-growing sieve.

/**
 * @see java.util.List#get(int)
 *
 * We can use a cache of the first few (1000? 10,000?) primes
 * for improved performance.
 *
 * @param n
 * @return nth prime (starting with 2)
 * @throws IllegalArgumentException
 *             if negative number
 */
public Integer get(int n) {
    if (n < 0) {
        throw new IllegalArgumentException("value must be non-negative");
    }

    Iterator<Integer> iter = iterator();
    for (int i = 0; i < n; i++) {
        iter.next();
    }

    return iter.next();
}

/**
 * @see java.util.List#indexOf(java.lang.Object)
 */
public int indexOf(Integer n) {
    if (!isPrime(n)) {
        return -1;
    }

    int index = 0;
    for (int i : sieve) {
        if (i == n) {
            return index;
        }
        index++;
    }
    return -1;
}

/**
 * @see java.lang.Iterable#iterator()
 */
public Iterator<Integer> iterator() {
    return new EratosthenesListIterator();
}

public ListIterator<Integer> listIterator() {
    return new EratosthenesListIterator();
}

/**
 * List iterator.
 *
 * @author Bear Giles <bgiles@coyotesong.com>
 */
static class EratosthenesListIterator extends AbstractListIterator<Integer> {
    int offset = 2;

    /**
     * @see com.invariantproperties.projecteuler.AbstractListIterator#getNext()
     */
    @Override
    protected Integer getNext() {
        // we'll always find a value since we dynamically resize the sieve.
        while (true) {
            offset++;
            if (SIEVE.isPrime(offset)) {
                return offset;
            }
        }
    }

    /**
     * @see com.invariantproperties.projecteuler.AbstractListIterator#getPrevious()
     */
    @Override
    protected Integer getPrevious() {
        while (offset > 0) {
            offset--;
            if (SIEVE.isPrime(offset)) {
                return offset;
            }
        }

        // we only get here if something went horribly wrong
        throw new NoSuchElementException();
    }
}
}

IMPORTANT: The code

for (int prime : SieveOfEratosthenes.SIEVE) { ... }

is essentially an infinite loop. It will only stop once the JVM exhausts the heap space when allocating a new sieve. In practice this means that the maximum prime we can maintain in our sieve is around 1 G; that requires 4 GB with 4-byte ints. If we only care about primality and use a common optimization, that 4 GB can hold information on 64 G values. For simplicity we can call this 9- to 10-digit numbers (base 10).

What if we put our sieve on a disk?
There is no reason why the sieve has to remain in memory. Our iterator can quietly load values from disk instead of an in-memory cache. A 4 TB disk, probably accessed in raw mode, would seem to bump the size of our sieve to 14- to 15-digit numbers (base 10). In fact it will be a bit less because we'll have to double the size of our primitive types from int to long, and then probably to an even larger format.

More! More! More!

We can dramatically increase the effective size of our sieve by noting that we only have to compute sqrt(n) to initialize a sieve of n values. We can flip that and say that a fully populated sieve of n values can be used to populate another sieve of n² values. In this case we'll want to only populate a band, not the full n² sieve. Our in-memory sieve can now cover values up to roughly 40-digit numbers (base 10), and the disk-based sieve jumps to as much as 60-digit numbers (base 10), minus the space required for the larger values. There is no reason why this approach can't be taken even further – use a small sieve to bootstrap a larger transient sieve and use it, in turn, to populate an even larger sieve.

But how long will this take?

Aye, there's the rub. The cost to initialize a sieve of n values is O(n²). You can use various tweaks to reduce the constants, but at the end of the day you're visiting every node once (O(n)), and then visiting some rolling value proportional to n beyond each of those points. For what it's worth, this is a problem where keeping the CPU's cache architecture in mind could make a big difference. In practical terms any recent system should be able to create a sieve containing the first million primes within a few seconds. Bump the sieve to the first billion primes and the time has probably leapt to a week, maybe a month if limited JVM heap space forces us to use the disk heavily. My gut instinct is that it will take a server farm months to years to populate a TB disk.

Why bother?
For most of us the main takeaway is a demonstration of how to start a collection with a small seed, say a sieve with n = 1000, and transparently grow it as required. This is easy with prime numbers but it isn't a huge stretch to imagine the same approach being used with, oh, RSS feeds. We're used to thinking of Iterators as some boring aspect of Collections, but in fact they give us a lot of flexibility when used as part of an Iterable. There is also a practical reason for a large prime sieve: factoring large numbers. There are several good algorithms for factoring large numbers but they're expensive – even “small” numbers may take months or years on a server farm. That's why the first step is always doing trial division with “small” primes – something that may take a day by itself.

Source Code

The good news is that I have published the source code for this… and the bad news is that it's part of ongoing doodling when I'm doing Project Euler problems. (There are no solutions here – it's entirely explorations of ideas inspired by the problems.) So the code is a little rough and should not be used to decide whether or not to bring me in for an interview (unless you're impressed): http://github.com/beargiles/projecteuler.

Reference: Getting an Infinite List of Primes in Java from our JCG partner Bear Giles at the Invariant Properties blog....
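The "small seed that grows transparently" takeaway can be demonstrated without the full sieve machinery. The sketch below is not the article's code: it is a deliberately simple, unbounded Iterable of primes that uses trial division against the primes found so far, just to show the Iterator-backed "infinite list" idea.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// A minimal, unbounded Iterable of primes. Each call to next() grows the
// internal list of known primes on demand. Illustrative only; this is far
// slower than the sieve discussed in the article.
public class Primes implements Iterable<Integer> {

    @Override
    public Iterator<Integer> iterator() {
        return new Iterator<Integer>() {
            private final List<Integer> found = new ArrayList<>();
            private int candidate = 1;

            @Override
            public boolean hasNext() {
                return true; // conceptually infinite, like the sieve-backed iterator
            }

            @Override
            public Integer next() {
                while (true) {
                    candidate++;
                    boolean prime = true;
                    // trial division by known primes up to sqrt(candidate)
                    for (int p : found) {
                        if ((long) p * p > candidate) {
                            break;
                        }
                        if (candidate % p == 0) {
                            prime = false;
                            break;
                        }
                    }
                    if (prime) {
                        found.add(candidate);
                        return candidate;
                    }
                }
            }
        };
    }

    public static void main(String[] args) {
        int count = 0;
        // first ten primes: 2 3 5 7 11 13 17 19 23 29
        for (int p : new Primes()) {
            System.out.print(p + " ");
            if (++count == 10) {
                break; // the loop would otherwise run forever
            }
        }
    }
}
```

Exactly as with the article's SIEVE, the foreach loop over this Iterable never terminates on its own; the caller must break out.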

Parsing an Excel File into JavaBeans using jXLS

This post shows how you can use jXLS to parse an Excel file into a list of JavaBeans. Here is a generic utility method I wrote to do that:

/**
 * Parses an excel file into a list of beans.
 *
 * @param <T> the type of the bean
 * @param xlsFile the excel data file to parse
 * @param jxlsConfigFile the jxls config file describing how to map rows to beans
 * @return the list of beans or an empty list if there are none
 * @throws Exception if there is a problem parsing the file
 */
public static <T> List<T> parseExcelFileToBeans(final File xlsFile, final File jxlsConfigFile) throws Exception {
    final XLSReader xlsReader = ReaderBuilder.buildFromXML(jxlsConfigFile);
    final List<T> result = new ArrayList<>();
    final Map<String, Object> beans = new HashMap<>();
    beans.put("result", result);
    try (InputStream inputStream = new BufferedInputStream(new FileInputStream(xlsFile))) {
        xlsReader.read(inputStream, beans);
    }
    return result;
}

Example: Consider the following Excel file containing person information:

FirstName | LastName | Age
Joe       | Bloggs   | 25
John      | Doe      | 30

Create the following Person bean to bind each Excel row to:

package model;

public class Person {

    private String firstName;
    private String lastName;
    private int age;

    public Person() {
    }

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    public int getAge() {
        return age;
    }

    public void setAge(int age) {
        this.age = age;
    }
}

Create a jXLS configuration file which tells jXLS how to process your Excel file and map rows to Person objects:

<workbook>
  <worksheet name="Sheet1">
    <section startRow="0" endRow="0" />
    <loop startRow="1" endRow="1" items="result" var="person" varType="model.Person">
      <section startRow="1" endRow="1">
        <mapping row="1" col="0">person.firstName</mapping>
        <mapping row="1" col="1">person.lastName</mapping>
        <mapping row="1"
col="2">person.age</mapping>
      </section>
      <loopbreakcondition>
        <rowcheck offset="0">
          <cellcheck offset="0" />
        </rowcheck>
      </loopbreakcondition>
    </loop>
  </worksheet>
</workbook>

Now you can parse the Excel file into a list of Person objects with this one-liner:

List<Person> persons = Utils.parseExcelFileToBeans(new File("/path/to/personData.xls"), new File("/path/to/personConfig.xml"));

Related posts: Parsing a CSV file into JavaBeans using OpenCSV

Reference: Parsing an Excel File into JavaBeans using jXLS from our JCG partner Fahd Shariff at the fahd.blog blog....
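For readers curious how a mapping expression like person.firstName turns into a setter call: jXLS resolves the property name to the JavaBean setter reflectively. The toy populator below illustrates that mechanism on a trimmed-down Person bean; it is an assumption-laden sketch of the idea (class and method names are invented here), not jXLS's actual implementation.

```java
import java.lang.reflect.Method;

// Toy illustration of JavaBean property mapping: given a property name such as
// "firstName" and a raw cell value, find and invoke the matching setter.
// NOT jXLS's real code; just the general mechanism it relies on.
public class BeanPopulator {

    // Trimmed-down version of the article's Person bean.
    public static class Person {
        private String firstName;
        private int age;

        public void setFirstName(String firstName) { this.firstName = firstName; }
        public String getFirstName() { return firstName; }
        public void setAge(int age) { this.age = age; }
        public int getAge() { return age; }
    }

    // Sets bean.<property> by locating the setter and converting the raw
    // cell string to the setter's parameter type where needed.
    public static void setProperty(Object bean, String property, String rawValue) throws Exception {
        String setter = "set" + Character.toUpperCase(property.charAt(0)) + property.substring(1);
        for (Method m : bean.getClass().getMethods()) {
            if (m.getName().equals(setter) && m.getParameterTypes().length == 1) {
                Class<?> type = m.getParameterTypes()[0];
                Object value = (type == int.class) ? Integer.parseInt(rawValue) : rawValue;
                m.invoke(bean, value);
                return;
            }
        }
        throw new NoSuchMethodException(setter);
    }

    public static void main(String[] args) throws Exception {
        Person p = new Person();
        setProperty(p, "firstName", "Joe"); // like <mapping ...>person.firstName</mapping>
        setProperty(p, "age", "25");        // cell text converted to int for setAge(int)
        System.out.println(p.getFirstName() + " " + p.getAge()); // prints Joe 25
    }
}
```

This also explains why the bean needs a public no-argument constructor and conventional getter/setter names: the varType class is instantiated and populated entirely by name.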
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy