


JavaFX Tip 9: Do Not Mix Swing / JavaFX

The JavaFX team has tried very hard to convince us that migrating from Swing to JavaFX is easy because of the option to embed Swing content in a JavaFX UI and vice versa. I must admit that I never tried it myself, but based on the feedback I am getting from my customers I can only recommend not mixing Swing and JavaFX. At the time of this writing there were over 200 unresolved issues (120+ bugs) related to Swing integration registered with the JavaFX issue management system.

Issue Types

The following is a list of issues that you might encounter if you still decide to go with it:

Appearance – there will always be a noticeable difference between the parts that were done in Swing and those that were done in JavaFX. Fields will show different font quality, different borders, different focus highlighting, etc.
Flickering – you might encounter flickering in your UI.
Behaviour – controls will behave differently. The user will be able to scroll JavaFX controls with a gesture but not the Swing controls. The columns of a JavaFX TableView control will autosize when you double-click the line between two column headers; the Swing JTable does not.
Threading – you are constantly dealing with issues related to the use of two different UI threads (the Swing EDT and the JavaFX application thread). You will run into freezing UIs and inconsistent state issues (see the sketch at the end of this post).
Window Management – controlling which window will be on top of which other windows, and which window is blocking input (modality) for other windows, becomes difficult or impossible. Popup windows might no longer hide themselves automatically.
Focus Handling – the wrong window might get the focus. Focus traversal between Swing controls and JavaFX controls might not work.
Context Menus – you might not be able to close the menu by clicking somewhere else in the UI, or you might end up with two context menus open at the same time (one controlled by JavaFX, one controlled by Swing).
Cursor – setting different cursors on different controls / components will not work as expected.
Drag and Drop – whether within the SwingNode itself or between Swing and JavaFX, exceptions are heading your way.
Performance – the performance / rendering speed of JavaFX controls mixed with Swing components will degrade.

Conclusion

What does this mean now? Well, it means that in the end you will not save time if you follow the Swing/JavaFX mixing strategy – at least not if quality is important to you. If your focus is only on making features available, then maybe; but if you want to ship a commercial-grade / professional application, then no. If you have already decided to migrate to JavaFX, then do the Full Monty and redo your entire application in JavaFX; it is worth the wait.

Reference: JavaFX Tip 9: Do Not Mix Swing / JavaFX from our JCG partner Dirk Lemmermann at the Pixel Perfect blog.
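To make the threading point above concrete, here is a minimal sketch (not from the original article) of the double thread hopping that a mixed UI forces on you: Swing components may only be touched on the Event Dispatch Thread, JavaFX nodes only on the JavaFX Application Thread, so every update that spans both worlds needs two separate dispatches. The class and field names are illustrative only.

import javafx.application.Platform;
import javax.swing.JLabel;
import javax.swing.SwingUtilities;

public class MixedUiUpdater {

    private final JLabel swingLabel;                    // may only be touched on the Swing EDT
    private final javafx.scene.control.Label fxLabel;   // may only be touched on the FX Application Thread

    public MixedUiUpdater(JLabel swingLabel, javafx.scene.control.Label fxLabel) {
        this.swingLabel = swingLabel;
        this.fxLabel = fxLabel;
    }

    // Called from some background worker thread.
    public void onBackgroundResult(String result) {
        SwingUtilities.invokeLater(() -> swingLabel.setText(result));  // hop onto the Swing EDT
        Platform.runLater(() -> fxLabel.setText(result));              // hop onto the JavaFX thread
    }
}

Keeping these two dispatch paths consistent (and avoiding deadlocks when one UI thread waits on the other) is exactly where the freezing UIs and inconsistent state mentioned above tend to come from.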

Trust instead of Threats

According to Dr. Gary McGraw's ground-breaking work on software security, up to half of security mistakes are made in design rather than in coding. So it's critical to prevent – or at least try to find and fix – security problems in design. For the last 10 years we've been told that we are supposed to do this through threat modeling, aka architectural risk analysis – a structured review of the design or architecture of a system from a threat perspective to identify security weaknesses and come up with ways to resolve them. But outside of a few organizations like Microsoft, threat modeling isn't being done at all, or at best only on an inconsistent basis.

Cigital's work on the Build Security In Maturity Model (BSIMM), which looks in detail at application security programs in different organizations, has found that threat modeling doesn't scale. Threat modeling is still too heavyweight, too expensive, too waterfally, and requires special knowledge and skills. The SANS Institute's latest survey on application security practices and tools asked organizations to rank the application security tools and practices they used the most and found most effective; threat modeling was second last. And at the 2014 RSA Conference, Jim Routh at Aetna, who has implemented large-scale secure development programs in 4 different major organizations, admitted that he has not yet succeeded in injecting threat modeling into design anywhere "because designers don't understand how to make the necessary tradeoff decisions".

Most developers don't know what threat modeling is, or how to do it, never mind practice it on a regular basis. With the push to accelerate software delivery, from Agile to One-Piece Continuous Flow and Continuous Deployment to production in Devops, the opportunities to inject threat modeling into software development are disappearing. What else can we do to include security in application design? If threat modeling isn't working, what else can we try?

There are much better ways to deal with security than threat modelling… like not being a tool.
– JeffCurless, comment on a blog post about threat modeling

Security people think in terms of threats and risks – at least the good ones do. They are good at exploring negative scenarios and what-ifs, discovering and assessing risks. Developers don't think this way. For most of them, walking through possibilities, things that will probably never happen, is a waste of time. They have problems that need to be solved, requirements to understand, features to deliver. They think like engineers, and sometimes they can think like customers, but not like hackers or attackers.

In his new book on Threat Modeling, Adam Shostack says that telling developers to "think like an attacker" is like telling someone to think like a professional chef. Most people know something about cooking, but cooking at home and being a professional chef are very different things. The only way to know what it's like to be a chef and to think like a chef is to work for some time as a chef. Talking to a chef or reading a book about being a chef or sitting in meetings with a chef won't cut it.

Developers aren't good at thinking like attackers, but they constantly make assertions in design, including important assertions about dependencies and trust. This is where security should be injected into design.

Trust instead of Threats

Threats don't seem real when you are designing a system, and they are hard to quantify, even if you are an expert.
But trust assertions and dependencies are real and clear and concrete. Easy to see, easy to understand, easy to verify. You can read the code, or write some tests, or add a run-time check. Reviewing a design this way starts off the same as a threat modeling exercise, but it is much simpler and less expensive. Look at the design at a system or subsystem level. Draw trust boundaries between systems or subsystems or layers in the architecture, to see what's inside and what's outside of your code, your network, your datacenter.

Trust boundaries are like software firewalls in the system. Data inside a trust boundary is assumed to be valid, commands inside the trust boundary are assumed to have been authorized, users are assumed to be authenticated. Make sure that these assumptions are valid. And make sure to review dependencies on outside code. A lot of security vulnerabilities occur at the boundaries with other systems, or with outside libraries, because of misunderstandings or assumptions in contracts.

(Diagram: OWASP Application Threat Modeling)

Then, instead of walking through STRIDE or CAPEC or attack trees or some other way of enumerating threats and risks, ask some simple questions about trust:

Are the trust boundaries actually where you think they are, or think they should be?
Can you trust the system or subsystem or service on the other side of the boundary? How can you be sure? Do you know how it works, what controls and limits it enforces? Have you reviewed the code? Is there a well-defined API contract or protocol? Do you have tests that validate the interface semantics and syntax?
What data is being passed to your code? Can you trust this data – has it been validated and safely encoded, or do you need to take care of this in your code? Could the data have been tampered with or altered by someone else or some other system along the way?
Can you trust the code on the other side to protect the integrity and confidentiality of data that you pass to it? How can you be sure? Should you enforce this through a hash or an HMAC or a digital signature or by encrypting the data (see the sketch below)?
Can you trust the user's identity? Have they been properly authenticated? Is the session protected?
What happens if an exception or error occurs, or if a remote call hangs or times out – could you lose data or data integrity, or leak data? Does the code fail open or fail closed?
Are you relying on protections in the run-time infrastructure or application framework or language to enforce any of your assertions? Are you sure that you are using these functions correctly?

These are all simple, easy-to-answer questions about fundamental security controls: authentication, access control, auditing, encryption and hashing, and especially input data validation and input trust, which Michael Howard at Microsoft has found to be the cause of half of all security bugs.

Secure Design that can actually be done

Looking at dependencies and trust will find – and prevent – important problems in application design. Developers don't need to learn security jargon, try to come up with attacker personas or build catalogs of known attacks and risk weighting matrices, or figure out how to use threat modeling tools or know what a cyber kill chain is or understand the relative advantages of asset-centric threat modeling over attacker-centric modeling or software-centric modeling. They don't need to build separate models or hold separate formal review meetings. Just look at the existing design, and ask some questions about trust and dependencies.
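As one concrete illustration of the integrity question above, a run-time check at a trust boundary can be as small as verifying an HMAC over data received from the other side. This is a minimal sketch, not from the original article; the class, method and parameter names are illustrative, and key management is assumed to happen elsewhere.

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.security.MessageDigest;

public class BoundaryCheck {

    // Returns true only if the payload that crossed the trust boundary carries
    // a valid HMAC-SHA256 tag computed with the shared key.
    public static boolean hasValidHmac(byte[] payload, byte[] receivedTag, byte[] sharedKey)
            throws java.security.GeneralSecurityException {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(sharedKey, "HmacSHA256"));
        byte[] expectedTag = mac.doFinal(payload);
        // Constant-time comparison, so the check does not leak timing information.
        return MessageDigest.isEqual(expectedTag, receivedTag);
    }
}

A check like this makes the trust assumption explicit and testable: if the other side cannot produce a valid tag, the data is rejected at the boundary instead of being silently trusted.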
This kind of review can be done by developers and architects in-phase as they are working out the design or changes to the design – when it is easiest and cheapest to fix mistakes and oversights. And like threat modeling, questioning trust doesn't need to be done all of the time. It's important when you are in the early stages of defining the architecture or when making a major design change, especially a change that makes the application's attack surface much bigger (like introducing a new API or transitioning part of the system to the Cloud) – any time that you are doing a "first of", including working on a part of the system for the first time. The rest of the time, the risks of getting trust assumptions wrong should be much lower.

Just focusing on trust won't be enough if you are building a proprietary secure protocol. And it won't be enough for high-risk security features – although you should be trying to leverage the security capabilities of your application framework or a special-purpose security library to do this anyway. There are still cases where threat modeling should be done – and code reviews and pen testing too. But for most application design, making sure that you aren't misplacing trust should be enough to catch important security problems before it is too late.

Reference: Trust instead of Threats from our JCG partner Jim Bird at the Building Real Software blog.

Test Attribute #3 – Speed

This is the 3rd post on test attributes that were described in the now more famous "How to test your tests" post. There's a story I like to tell about my first TDD experience. You'll have to hear it now (some of you for the n-th time).

It was many moons ago, when I had just completed reading Kent Beck's excellent "Test Driven Development By Example". And I thought: this would end all my misery. I was working on a communication component at the time, and I thought, why not use this new TDD thing? I had already committed one foul ahead of writing a single line of test code, because I knew that I was going to use MSMQ for the component. So I decided on the design instead of letting the tests drive it. My level of understanding of TDD at the time is not relevant for this story. MSMQ, however, is. For those who don't know, MSMQ is Microsoft's queuing service that runs on all kinds of Windows machines – an infrastructure for asynchronous messaging that seems perfect for the job. It is, however, a bit slow.

So for my first test, I wrote a test that sends a message to the queue and waits to receive it back. Something like this:

[TestMethod]
public void ReceiveSentMessage()
{
    MyQueue myqueue = new MyQueue();
    myqueue.SendMessage(new Message("Hi"));
    Message receivedMessage = myqueue.Receive();
    Assert.AreEqual("Hi", receivedMessage.Body);
}

Since we're talking about speed, here's the thing: this single test ran for around 3 seconds. What happens if I had a hundred more like it?

The Death Spiral Of Slow Tests

I was so happy I had a passing test, I didn't notice that it took a few seconds to run. Most people starting out with unit testing don't notice that. They keep accumulating slow tests in their suite, until one day they reach a tipping point. Let's take, for example, a suite that takes 15 minutes to run. And let's say I'm a very patient person. I know, just work with me.

Up to this point I had no problem running the full suite every hour. Then, at that 15 minute point, I decide that running the suite every hour cripples my productivity. So I decide that I'll run the tests twice a day. One run will be over lunch, and the 2nd will start as I go out of the office. That way I won't need to wait on my time; the results will be there when I get back to work. That leaves me more time to write code (and hopefully some tests).

So I write more code, and when I get back from lunch, there are a few red tests. Since I don't know exactly what's wrong (I can't tell exactly which parts of the big chunks of code I added productively are the ones to blame), I'll spend an hour debugging the failing tests. And repeat that tomorrow morning, and the next lunch break. Until I realize that I now spend 2 hours a day working on fixing tests. That's 25% of my time working for my tests, instead of them working for me. Which is where I stop writing more tests, because I see the cost, and no value from them. And then I stop running them, because, what's the point?

I call it "The Death Spiral Of Doom", and many developers who start doing testing fall down it. Many never climb up again. If we reverse the process, we'll see quite the opposite. If my suite runs in a matter of seconds, or faster, I run it more often. When a test breaks, I know what caused the problem, because I know it was caused by a change I did in the last few minutes. Fixing it may not even require debugging, because it's still fresh in my mind. Development becomes smoother and quicker.

Quick Feedback Is Mandatory

Tests should run quickly.
We're talking hundreds and thousands of tests in a matter of seconds. If they don't run that quickly, we'll need to do something about them. Quick feedback is not only an important agile property; it is essential for increasing velocity. If we don't work at it, the entire safety net of our tests can come crashing down. So what can we do?

Analyze. The length of tests is part of every test report, so it's not even subjective. Look at those tests, and see which are the ones that take longer to run.
Organize. Split the tests into slow running and quick running. Leave the slow running tests to a later automated build cycle, so you'll be able to run the quick ones without penalty.
Mock. Mocking is a great way to speed up tests. If a dependency (like my MSMQ service) is slow, mock it (a Java sketch follows below).
Groom. Not all the tests should be part of our automated build forever. If there's a part of code that you never touch, but it has a 5 minute test suite around it, stop running it. Or run those tests on the nightly cycle.
Upgrade. You'll be surprised how much quicker better hardware runs your tests. The cost may be marginal compared to the value of quick feedback.

The key thing is ongoing maintenance of the test suite. Keep analyzing your suite, and you'll see where you can optimize, without taking bigger risks. The result is a quick safety net you can trust.

Reference: Test Attribute #3 – Speed from our JCG partner Gil Zilberfeld at the Geek Out of Water blog.
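Relating to the "Mock" advice above, here is a minimal Java sketch of the same idea using Mockito (the article's own example is C#; the MessageQueue and MessageProcessor types here are hypothetical stand-ins for the slow MSMQ-backed dependency and the code that uses it):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class MessageProcessorTest {

    // Hypothetical collaborator that wraps the slow queue infrastructure.
    interface MessageQueue {
        String receive();
    }

    // Hypothetical code under test that depends on the queue.
    static class MessageProcessor {
        private final MessageQueue queue;
        MessageProcessor(MessageQueue queue) { this.queue = queue; }
        String processNext() { return queue.receive().toUpperCase(); }
    }

    @Test
    public void processesTheNextMessageFromTheQueue() {
        // The real queue needs seconds per round trip; the mock answers instantly.
        MessageQueue queue = mock(MessageQueue.class);
        when(queue.receive()).thenReturn("hi");

        assertEquals("HI", new MessageProcessor(queue).processNext());
    }
}

The production code still talks to the real queue; only the test swaps it out, which keeps the feedback loop in the milliseconds range.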

From JPA to Hibernate’s legacy and enhanced identifier generators

JPA identifier generators

JPA defines the following identifier strategies:

AUTO – the persistence provider picks the most appropriate identifier strategy supported by the underlying database.
IDENTITY – identifiers are assigned by a database IDENTITY column.
SEQUENCE – the persistence provider uses a database sequence for generating identifiers.
TABLE – the persistence provider uses a separate database table to emulate a sequence object.

In my previous post I explained the pros and cons of all these surrogate identifier strategies.

Identifier optimizers

While there's not much application-side IDENTITY generator optimization (other than configuring database identity preallocation), the sequence identifiers offer much more flexibility in this regard. One of the most common optimization strategies is based on the hi/lo allocation algorithm. For this Hibernate offers:

SequenceHiLoGenerator – uses a database sequence to generate the hi value, while the low value is incremented according to the hi/lo algorithm.
TableHiLoGenerator – a database table is used for generating the hi values. This generator is deprecated in favour of the MultipleHiLoPerTableGenerator, the enhanced TableGenerator or the SequenceStyleGenerator.
MultipleHiLoPerTableGenerator – a hi/lo table generator capable of using a single database table even for multiple identifier sequences.
SequenceStyleGenerator – an enhanced version of the previous sequence generator. It uses a sequence if the underlying database supports them; if the current database doesn't support sequences, it switches to using a table for generating sequence values. While the previous generators had a predefined optimization algorithm, the enhanced generators can be configured with an optimizer strategy:
  none – no optimizing strategy is applied, so every identifier is fetched from the database.
  hi/lo – uses the original hi/lo algorithm. This strategy makes it difficult for other systems to share the same identifier sequence, requiring other systems to implement the same identifier generation logic.
  pooled – uses a hi/lo optimization strategy, but instead of saving the current hi value it stores the current range upper boundary (or lower boundary – hibernate.id.optimizer.pooled.prefer_lo).
  Pooled is the default optimizer strategy.
TableGenerator – like MultipleHiLoPerTableGenerator it may use one single table for multiple identifier generators, while offering configurable optimizer strategies. Pooled is the default optimizer strategy.

JPA to Hibernate identifier mapping

With such an abundance of generators on offer, we cannot help asking which of them are used as the default JPA generators. While the JPA specification doesn't imply any particular optimization, Hibernate will prefer an optimized generator over one that always hits the database for every new identifier.

The JPA SequenceGenerator

We'll define one entity configured with the SEQUENCE JPA identifier generator. A unit test is going to persist five such entities.
@Entity(name = "sequenceIdentifier")
public static class SequenceIdentifier {

    @Id
    @GeneratedValue(generator = "sequence", strategy = GenerationType.SEQUENCE)
    @SequenceGenerator(name = "sequence", allocationSize = 10)
    private Long id;
}

@Test
public void testSequenceIdentifierGenerator() {
    LOGGER.debug("testSequenceIdentifierGenerator");
    doInTransaction(new TransactionCallable<Void>() {
        @Override
        public Void execute(Session session) {
            for (int i = 0; i < 5; i++) {
                session.persist(new SequenceIdentifier());
            }
            session.flush();
            return null;
        }
    });
}

Running this test gives us the following output:

Query:{[call next value for hibernate_sequence][]}
Generated identifier: 10, using strategy: org.hibernate.id.SequenceHiLoGenerator
Generated identifier: 11, using strategy: org.hibernate.id.SequenceHiLoGenerator
Generated identifier: 12, using strategy: org.hibernate.id.SequenceHiLoGenerator
Generated identifier: 13, using strategy: org.hibernate.id.SequenceHiLoGenerator
Generated identifier: 14, using strategy: org.hibernate.id.SequenceHiLoGenerator
Query:{[insert into sequenceIdentifier (id) values (?)][10]}
Query:{[insert into sequenceIdentifier (id) values (?)][11]}
Query:{[insert into sequenceIdentifier (id) values (?)][12]}
Query:{[insert into sequenceIdentifier (id) values (?)][13]}
Query:{[insert into sequenceIdentifier (id) values (?)][14]}

Hibernate chooses to use the legacy SequenceHiLoGenerator for backward compatibility with all those applications that were developed prior to releasing the enhanced generators. Migrating a legacy application to the new generators is not an easy process, so the enhanced generators are a better alternative for new applications instead. Hibernate prefers using the "seqhilo" generator by default, which is not an intuitive choice, since many might expect the raw "sequence" generator (always calling the database sequence for every new identifier value).

To enable the enhanced generators we need to set the following Hibernate property:

properties.put("hibernate.id.new_generator_mappings", "true");

This gives us the following output:

Query:{[call next value for hibernate_sequence][]}
Query:{[call next value for hibernate_sequence][]}
Generated identifier: 1, using strategy: org.hibernate.id.enhanced.SequenceStyleGenerator
Generated identifier: 2, using strategy: org.hibernate.id.enhanced.SequenceStyleGenerator
Generated identifier: 3, using strategy: org.hibernate.id.enhanced.SequenceStyleGenerator
Generated identifier: 4, using strategy: org.hibernate.id.enhanced.SequenceStyleGenerator
Generated identifier: 5, using strategy: org.hibernate.id.enhanced.SequenceStyleGenerator
Query:{[insert into sequenceIdentifier (id) values (?)][1]}
Query:{[insert into sequenceIdentifier (id) values (?)][2]}
Query:{[insert into sequenceIdentifier (id) values (?)][3]}
Query:{[insert into sequenceIdentifier (id) values (?)][4]}
Query:{[insert into sequenceIdentifier (id) values (?)][5]}

The new SequenceStyleGenerator generates different identifier values than the legacy SequenceHiLoGenerator. The generated identifiers differ between the old and the new generators because the new generators' default optimizer strategy is "pooled", while the old generators can only use the "hi/lo" strategy.
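To make the hi/lo arithmetic concrete, here is a small sketch (not from the original post) of how a hi/lo optimizer turns a single database round trip into a whole block of in-memory identifiers; the pooled optimizer works similarly, except that the value stored in the database is treated as the boundary of the allocated range rather than as the hi part.

// Simplified hi/lo allocation with an increment size of 10: one sequence call
// (the "hi" value) yields ten identifiers computed entirely in memory.
public class HiLoSketch {

    private final int incrementSize = 10;
    private long hi = -1; // last value fetched from the database sequence
    private int lo = 0;   // position inside the current block

    public synchronized long nextId(java.util.function.LongSupplier sequenceCall) {
        if (hi < 0 || lo >= incrementSize) {
            hi = sequenceCall.getAsLong(); // the only database round trip
            lo = 0;
        }
        return hi * incrementSize + lo++;
    }
}

With a sequence that returns 1, calling nextId five times produces 10, 11, 12, 13 and 14 – exactly the identifiers shown in the legacy output above.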
The JPA TableGenerator

@Entity(name = "tableIdentifier")
public static class TableSequenceIdentifier {

    @Id
    @GeneratedValue(generator = "table", strategy = GenerationType.TABLE)
    @TableGenerator(name = "table", allocationSize = 10)
    private Long id;
}

Running the following test:

@Test
public void testTableSequenceIdentifierGenerator() {
    LOGGER.debug("testTableSequenceIdentifierGenerator");
    doInTransaction(new TransactionCallable<Void>() {
        @Override
        public Void execute(Session session) {
            for (int i = 0; i < 5; i++) {
                session.persist(new TableSequenceIdentifier());
            }
            session.flush();
            return null;
        }
    });
}

generates the following SQL statement output:

Query:{[select sequence_next_hi_value from hibernate_sequences where sequence_name = 'tableIdentifier' for update][]}
Query:{[insert into hibernate_sequences(sequence_name, sequence_next_hi_value) values('tableIdentifier', ?)][0]}
Query:{[update hibernate_sequences set sequence_next_hi_value = ? where sequence_next_hi_value = ? and sequence_name = 'tableIdentifier'][1,0]}
Generated identifier: 1, using strategy: org.hibernate.id.MultipleHiLoPerTableGenerator
Generated identifier: 2, using strategy: org.hibernate.id.MultipleHiLoPerTableGenerator
Generated identifier: 3, using strategy: org.hibernate.id.MultipleHiLoPerTableGenerator
Generated identifier: 4, using strategy: org.hibernate.id.MultipleHiLoPerTableGenerator
Generated identifier: 5, using strategy: org.hibernate.id.MultipleHiLoPerTableGenerator
Query:{[insert into tableIdentifier (id) values (?)][1]}
Query:{[insert into tableIdentifier (id) values (?)][2]}
Query:{[insert into tableIdentifier (id) values (?)][3]}
Query:{[insert into tableIdentifier (id) values (?)][4]}
Query:{[insert into tableIdentifier (id) values (?)][5]}

As with the previous SEQUENCE example, Hibernate uses the MultipleHiLoPerTableGenerator to maintain backward compatibility. Switching to the enhanced id generators:

properties.put("hibernate.id.new_generator_mappings", "true");

gives us the following output:

Query:{[select tbl.next_val from hibernate_sequences tbl where tbl.sequence_name=? for update][tableIdentifier]}
Query:{[insert into hibernate_sequences (sequence_name, next_val) values (?,?)][tableIdentifier,1]}
Query:{[update hibernate_sequences set next_val=? where next_val=? and sequence_name=?][11,1,tableIdentifier]}
Query:{[select tbl.next_val from hibernate_sequences tbl where tbl.sequence_name=? for update][tableIdentifier]}
Query:{[update hibernate_sequences set next_val=? where next_val=? and sequence_name=?][21,11,tableIdentifier]}
Generated identifier: 1, using strategy: org.hibernate.id.enhanced.TableGenerator
Generated identifier: 2, using strategy: org.hibernate.id.enhanced.TableGenerator
Generated identifier: 3, using strategy: org.hibernate.id.enhanced.TableGenerator
Generated identifier: 4, using strategy: org.hibernate.id.enhanced.TableGenerator
Generated identifier: 5, using strategy: org.hibernate.id.enhanced.TableGenerator
Query:{[insert into tableIdentifier (id) values (?)][1]}
Query:{[insert into tableIdentifier (id) values (?)][2]}
Query:{[insert into tableIdentifier (id) values (?)][3]}
Query:{[insert into tableIdentifier (id) values (?)][4]}
Query:{[insert into tableIdentifier (id) values (?)][5]}

You can see that the new enhanced TableGenerator was used this time.
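If you prefer to opt into the enhanced generators per entity rather than through the global property, Hibernate's @GenericGenerator annotation lets you name the enhanced strategy and its optimizer explicitly. This is a hedged sketch, not taken from the original post, and it relies on Hibernate's documented configuration parameter names (sequence_name, increment_size, optimizer); verify them against the Hibernate version you are using. The same approach works for org.hibernate.id.enhanced.TableGenerator.

@Entity(name = "pooledSequenceIdentifier")
public static class PooledSequenceIdentifier {

    @Id
    @GeneratedValue(generator = "pooledSequence")
    @org.hibernate.annotations.GenericGenerator(
        name = "pooledSequence",
        strategy = "org.hibernate.id.enhanced.SequenceStyleGenerator",
        parameters = {
            // Assumed parameter names; check them against your Hibernate version.
            @org.hibernate.annotations.Parameter(name = "sequence_name", value = "pooled_sequence"),
            @org.hibernate.annotations.Parameter(name = "increment_size", value = "10"),
            @org.hibernate.annotations.Parameter(name = "optimizer", value = "pooled")
        }
    )
    private Long id;
}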
For more about these optimization strategies you can read the original release note. Code available on GitHub.

Reference: From JPA to Hibernate's legacy and enhanced identifier generators from our JCG partner Vlad Mihalcea at the Vlad Mihalcea's Blog blog.

How to Use Projection in MongoDB?

In MongoDB, projection means selecting only the necessary data rather than the whole data of a document. If a document has 5 fields and you need to show only 3, then select only those 3 fields from it. MongoDB provides a few projection operators that help us reach that goal. Let us discuss those operators in detail below.

$

At first we are going to talk about the positional $ operator. This operator limits the contents of the array field included in the query results to contain the first matching element.

The array field must appear in the query document.
Only one positional $ operator may appear in the projection document.
Only one array field may appear in the query document at a time.

Now let us see a basic example of this operator:

db.collection.find( { <array>: <value> ... }, { "<array>.$": 1 } )

In the above example, as you can see, we have used find() on a collection. The projection holds only one value for the array: it specifies that in a single query only one value can be retrieved from the array, depending on its position.

Array Field Limitation: since only one array field can appear in the query document, if the array contains documents and we want to specify criteria on multiple fields of those documents, what can we use?

$elemMatch

In MongoDB the $elemMatch projection operator is used to limit the contents of an array field included in the query results to contain only the first element matching the specified condition.

The elements of the array are documents.
If multiple elements match the $elemMatch condition, the operator returns the first matching element in the array.
The $elemMatch projection operator is similar to the positional $ projection operator.

To describe an example of this operator we need a database where the collection contains document-type array elements. In my opinion a student mark-sheet is appropriate for this. Let us see the query:

db.grades.find( { records: { $elemMatch: { student: "stud1", grade: { $gt: 85 } } } } );

This example returns all documents in the grades collection where the records array contains at least one element with both student equal to stud1 and grade greater than 85. But what happens if there are two parameters stating two different grades for the same student? Like this:

db.grades.find( { records: { $elemMatch: { student: "stud1", grade: { $gt: 85 } } , : { student: "stud1", grade: { $gt: 90 } } } } );

Because in $elemMatch only one parameter has to match, the above query will still produce output. However, in the next case:

db.grades.find( { records: { $elemMatch: { student: "stud1", grade: { $gt: 85 } } , : { student: "stud2", grade: { $gt: 90 } } } } );

it would not match, because no embedded document meets the specified criteria.

The differences in the projection operators:

The positional ($) projection operator:
limits the contents of an array field that is included in the query results to contain the first element that matches the query document;
requires that the matching array field is included in the query criteria;
can only be used if a single array field appears in the query criteria;
can only be used once in a projection.

The $elemMatch projection operator:
limits the contents of an array field that is included in the query results to contain only the first array element that matches the $elemMatch condition;
does not require the matching array to be in the query criteria;
can be used to match multiple conditions for array elements that are embedded documents.

$slice

The $slice operator controls the number of items of an array that a query returns. For information on limiting the size of an array during an update with $push, see the $slice modifier instead. Let us see a basic query:

db.collection.find( { field: value }, { array: { $slice: count } } );

This operation selects documents identified by a field named field that holds value, and returns the number of elements specified by the value of count from the array stored in the array field. If count has a value greater than the number of elements in the array, the query returns all elements of the array. $slice accepts arguments in a number of formats, including negative values and arrays.

Having seen the basic form, let us see how we can retrieve a set of comments from an array:

db.posts.find( {}, { comments: { $slice: 5 } } )

Here, $slice selects the first five items in the array in the comments field.

db.posts.find( {}, { comments: { $slice: -5 } } )

This operation returns the last five items in the array. So, we have the first and the last five comments. But what happens if we need a certain number of comments from the middle of the array? We have to modify the above code a little bit, as follows:

db.collection.find( { field: value }, { array: { $slice: [skip, limit] } } );

In the above code we can see a pair of parameters, which tells MongoDB how to select data from the middle of the array. The first parameter, skip, says how many positions in the array to skip before the count starts; the second, limit, says where the count stops. Let us modify the above example for better understanding:

db.posts.find( {}, { comments: { $slice: [5, 10] } } )

In the above example, instead of showing the first 5 comments, we have skipped them. After skipping, the limit count starts from the fifth position (because array positioning starts from 0).

$meta

The $meta projection operator returns, for each matching document, the metadata (e.g. "textScore") associated with the query. The $meta expression can be a part of the projection document as well as of a sort() expression. A $meta expression has the following syntax:

{ <projectedFieldName>: { $meta: <metaDataKeyword> } }

The textScore keyword returns the score associated with the corresponding $text query for each matching document. The text score signifies how well the document matched the stemmed term or terms. When used in a sort(), the default order is descending. As a basic example we can see the following:

db.collection.find( <query>, { score: { $meta: "textScore" } } )

The $meta expression can also be part of a sort() expression. We will go into detail later.

db.collection.find( <query>, { score: { $meta: "textScore" } } ).sort( { score: { $meta: "textScore" } } )

Summary: just as finding information is a very important aspect of databases, the same can be said about projection in MongoDB. In this article we have only scratched the surface of projection; there are many other ways to use find() to project a certain result in MongoDB.

Reference: How to Use Projection in MongoDB? from our JCG partner Piyas De at the Phlox Blog blog.
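For Java developers, the same projections are available through the MongoDB Java driver's Projections and Filters helper classes. Here is a minimal, hedged sketch assuming a posts collection like the one above; the connection string, database name and field names are illustrative only.

import static com.mongodb.client.model.Filters.eq;
import static com.mongodb.client.model.Projections.fields;
import static com.mongodb.client.model.Projections.include;
import static com.mongodb.client.model.Projections.slice;

import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class ProjectionExample {

    public static void main(String[] args) {
        MongoCollection<Document> posts = MongoClients.create("mongodb://localhost:27017")
                .getDatabase("blog")
                .getCollection("posts");

        // Roughly equivalent to: db.posts.find({ author: "piyas" }, { title: 1, comments: { $slice: 5 } })
        for (Document post : posts.find(eq("author", "piyas"))
                                  .projection(fields(include("title"), slice("comments", 5)))) {
            System.out.println(post.toJson());
        }
    }
}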

The Java Origins of Angular JS: Angular vs JSF vs GWT

A superheroic Javascript framework needs a good origin story. Let's try to patch it together, while going over the use of Angular JS in the enterprise Java world and the Angular take on MVC. This post will go over the following topics, and end with an example:

The Java Origins of Angular JS
Angular vs JSF
Angular vs GWT
Angular vs jQuery
The Angular take on MVC (or MVW)
The M in MVC – Scopes
The V in MVC – Directives
The C in MVC – Controllers

The Origins of Angular JS

Angular is becoming a framework of choice for developing web applications in enterprise settings, where traditionally the backend is built in Java and the frontend is built in a Java/XML based framework such as JSF or GWT. As Java developers often living in the Spring/Hibernate world, we might wonder how a dependency-injection, dirty-checking based MVC framework ever managed to jump from the server and into our browsers, and find that to be an interesting coincidence.

The Story Behind Angular

It turns out that the similarities are likely not a coincidence, because at its roots Angular was built by Java developers at Google who felt that they were not being productive building frontend applications in Java, specifically GWT. These are some important quotes from the Angular developers about the origins of Angular, recently on the Javascript Jabber Podcast (transcript link here):

we were building something in GWT and was getting really frustrated just how unproductive I was being.

we could build the application (Google Feedback) much faster than we could build it in GWT.

So this means Angular was effectively created by full-time Java GWT developers, as a response to how they felt that Java frameworks limited their frontend development productivity.

Is JSF or GWT still the way to go?

Although with two very different approaches, one of the main goals of both JSF and GWT is to abstract at least part of the web away, by allowing web development to be done in the Java/XML world. But it seems that in this day and age of HTML5 browsers, frameworks like JSF/GWT are much more complex than the underlying platform that they are trying to abstract away in the first place. Although they can be made to work fine, the question is: at what cost? Often the underlying browser technologies leak through to the developer, who ends up having to know HTML, CSS and Javascript anyway in order to be able to implement many real-world requirements. This leaves the developer wondering why the browser technologies can't be used directly without so many constraints and intermediate layers of abstraction, because in the end there is really no escape from them. Browser technologies are actually simpler, more widespread and far better documented than any Java framework could ever be.

Historical context of JSF and GWT

It's important to realize how JSF/GWT came to be in the first place: they were created to be used in scenarios where an enterprise backend already existed, built in Java/XML, and a need existed to reuse that same team of enterprise developers to build the frontend as well. From a project management point of view, on a first look and still today this makes a lot of sense. Also from a historical point of view, JSF/GWT were created in a context where the browser was a much quirkier platform than it is today, and with a lot fewer developer tools available. So the goal of the framework was to abstract at least some of the browser technologies away, enabling it to be used by a wider developer base.
Angular vs JSF

JSF came more or less at the same time as Ajax exploded onto the web development scene a decade ago. The initial version of JSF was not designed with Ajax in mind, but was instead meant as a full page request/response model. In this model, a DOM-like tree of components representing the user interface exists in memory, but this tree exists only on the server side. The server View then gets converted back and forth to HTML, CSS and Javascript, treating the browser mostly as a rendering platform with no state and limited control over what is going on. Pages are generated by converting the server View representation to HTML, CSS and Javascript via a set of special classes called Renderers, before sending the page to the user.

How does JSF work?

The user will then interact with the page and send back an action, typically via an HTTP POST, and then a server-side lifecycle is triggered via the JSF Controller, which restores the view tree, applies the new values to the view and validates them, updates the domain model, invokes the business logic and renders back a new view. The framework was then evolved in JSF 2 for native Ajax support and stateless web development, but the main approach of generating the HTML for the browser from a server-side model remained.

How does Angular compare to JSF?

The main design difference is that in Angular the Model, the View and the Controller were moved from the server and into the browser itself. In Angular, the browser technologies are not seen as something to be avoided or hidden, but as something to be used to the full extent of their capabilities, to build something that is much more similar to a Swing fat client than to a web page. Angular does not mandate this, but the server typically has very little to no state and serves mostly JSON via REST services.

How important is Javascript in JSF?

The take of JSF towards Javascript seems to be that the language is something that JSF library developers need to know, but usually not the application developers. The most widespread JSF library, Primefaces, internally contains thousands of lines of Javascript code for its jQuery based frontend widgets, but Primefaces based projects often have very little to no Javascript in the application code base itself. Still, in order to do custom component development in Primefaces, it's important to know Javascript and jQuery, but usually only a small part of the application team needs to know them.

Angular vs GWT

A second-generation take on Java web development in the browser came with the arrival of GWT. In the GWT take, the Model, View and Controller are also moved to the browser, just like in Angular. The main difference is the way that Javascript is handled: GWT provides a Java to Javascript compiler that treats Javascript as a client-side bytecode execution engine. In this model the development is done entirely in Java, and with a build process the code gets compiled down to Javascript and executed in the browser.

The GWT take on HTML and CSS

In GWT, HTML and CSS are not meant to be completely hidden from the developer, although XML namespaces are provided to lay out at least some of the page's major blocks. When getting to the level of forms, an HtmlPanel is provided to allow pages to be built in HTML and CSS directly. This is, by the way, also possible in JSF, although in the case of both frameworks developers typically try to avoid HTML and CSS as much as possible, by using the XML namespaces to their maximum possible extent.
Why the Javascript transpilation approach?

GWT is not so different from Angular in a certain way: it's MVC in the browser, with Javascript being a transpilation target rather than the application development language. The main goal of that transpilation is again reusing the same developer team that builds the backend, and abstracting away browser quirks.

Does the GWT object-oriented approach help?

The GWT programming model means that the web page is viewed from an object-oriented point of view: the page is seen in the program as a network of interconnected objects instead of a document. The notions of document and elements are hidden away by the framework, but it turns out that this extra level of indirection, although familiar, ends up not being that helpful and often gets in the way of the developer more than anything else.

Is the extra layer of abstraction needed?

The fact is that the notions of page and elements are already simple and powerful enough that they don't need an extra layer of abstraction around them. With the object-oriented abstraction of the page, the developer often ends up having to debug his way through a myriad of classes for simple things like finding where to add or remove a simple CSS class or wrap an element in a div. Super Dev Mode helps, but it feels like the whole GWT hierarchy of objects, the Java to Javascript compiler and the ecosystem of debug modes and browser and IDE plugins are all together far more complex than what they are trying to hide away in the first place: the web.

Angular vs jQuery

Meanwhile and in parallel in the Javascript world, a new approach came along for tackling browser differences: the idea that a Javascript library can be created that provides a common API that works well in all browsers. The library would detect the browser at runtime and internally adapt the code used, so that the same results occur in all browsers. Such a library would be much simpler to use as it did not require browser-quirk knowledge, and could appeal to a wider development base. The most successful of those libraries is jQuery, which is mostly a page manipulation library but is not meant to be an MVC framework.

jQuery in the Java World

Still, jQuery is the client-side basis of the most popular JSF framework: Primefaces. The main difference between Angular and jQuery is that in jQuery there is no notion of Model or Controller; the document is instead manipulated directly. A lot of code like this is written if using jQuery (example from the Primefaces Javascript autocomplete widget):

this.itemtip = $('<div id="' + this.id + '_itemtip" class="ui-autocomplete-itemtip ui-state-highlight ui-widget ui-corner-all ui-shadow"></div>')
    .appendTo(document.body);

As we can see, the Primefaces developers themselves need to know HTML, CSS and Javascript, although many of the application developers use the provided XML tags that wrap the frontend widgets, and treat them as a black box. This type of code is reminiscent of the code written in the early days of Java web development, when the Servlet API came along but there weren't yet any JSPs:

out.println("<div>" + message + "</div>");

What Angular allows is to decouple the Model from the View, and loosely glue the two together with a Controller.

The Angular JS take on MVC (or MVW)

Angular positions itself as an MVW framework – Model, View, Whatever. This means that it acknowledges the clear separation of a Model, which can be a View-specific model and not necessarily a domain model.
In Angular the Model is just a POJO – a Plain Old Javascript Object. Angular also acknowledges the existence of a View that is bound declaratively to the Model. The view is just HTML with a special expression language for Model and user-interaction binding, and a reusable component-building mechanism known as Directives. It also acknowledges the need for something to glue the Model and the View together, but it does not name this element – hence the "Whatever". In MVC this element is the Controller, in MVP it's the Presenter, etc.

Minimal Angular Example

Let's go over the three elements of MVC and see what they correspond to in Angular by using a minimal interactive multiplication example (here it is working in a jsFiddle). As you can see, the result is updated immediately once the two factors change. Doing this in something like JSF or GWT would be a far larger amount of work.

What would this look like in JSF and GWT?

In JSF, for example in Primefaces, this would mean having to write a small jQuery plugin or routine to add the interactive multiplication feature, create a facelet template, declare a facelet tag and add it to the tag library, etc. In GWT this would mean bootstrapping a sample app, creating a UI binder template, adding listeners to the two fields or setting up the editor framework, etc.

Enhanced Developer Productivity

We can see what the Angular JS developers meant by enhanced productivity, as the complete Angular version is the following, written in a few minutes:

<div ng-app="Calculator" ng-controller="CalculatorCtrl">
    <input type="text" ng-model="model.left"> *
    <input type="text" ng-model="model.right"> =
    <span>{{multiply()}}</span>
</div>

angular.module('Calculator', [])
    .controller('CalculatorCtrl', function($scope) {
        $scope.model = {
            left: 10,
            right: 10
        };
        $scope.multiply = function() {
            return $scope.model.left * $scope.model.right;
        }
    });

So let's go over the MVC setup of this sample code, starting with the M.

The M in MVC – Angular Scopes

The Model in Angular is just a simple Javascript object. This is the model object, being injected into the scope:

$scope.model = {
    left: 10,
    right: 10
};

Injecting the model into the scope makes it dirty-checked, so that any changes in the model are reflected immediately back to the view. In the case of the example above, editing the factor input boxes triggers dirty checking, which triggers the recalculation of the multiplication, which gets instantly reflected in the result.

The V in MVC – Enhanced HTML

The view in Angular is just HTML annotated with a special expression language, such as the {{multiply()}} expression. The HTML is really acting in this case as a client-side template that could be split into reusable HTML components called Directives.

The C in MVC – Angular Controllers

The CalculatorCtrl is the controller of the example application. It initializes the model before the view gets rendered, and acts as the glue between the view and the model by defining the multiply function. The controller typically defines observers on the model that trigger event-driven code.

Conclusions

It seems that polyglot development in both Java and Javascript is a viable option for the future of enterprise development, and that Angular is a major part of that view on how to build enterprise apps. The simplicity and speed of development that it brings is attractive to frontend Java developers, who to one degree or another already need to deal with HTML, CSS and often Javascript anyway.
So an attractive option seems to be that a portion of enterprise application code will start being written in Javascript using Angular instead of Java, but only the next few years will tell.

An alternative way of using Angular

Another possibility is that Angular is used by frameworks such as JSF as an internal implementation mechanism. See for example this post from the lead of the Primefaces project:

I have plans to add built-in js mvc framework support, probably it will be angular.

So it's possible that Angular will be used as an implementation mechanism for technologies that follow the approach of keeping the application developer experience Java and XML based as much as possible. One thing seems sure: Angular, either as an application MVC framework or as an internal detail of a Java/XML based framework, is slowly but surely making its way into the enterprise Java world.

Related Links:

A great online resource for Angular: the egghead.io Angular lessons, a series of minimal 5-minute video lectures by John Lindquist (@johnlindquist).

Reference: The Java Origins of Angular JS: Angular vs JSF vs GWT from our JCG partner Aleksey Novik at the The JHades Blog blog.

Configuring chef Part-2

Let's recap what we have done in the last blog:

Set up the workstation and chef-repo.
Registered on Chef to use hosted Chef as the chef-server.
Bootstrapped a node to be managed by the chef-server.
Downloaded the "apache" cookbook into our chef-repo.
Uploaded the "apache" cookbook to the chef-server.
Added recipe[apache] to the run-list of the node.
Ran chef-client on the client to apply the cookbook.

Now let's continue, and try to understand some more concepts around Chef and see them in action.

Node Object

The beauty of Chef is that it gives an object-oriented approach to the entire configuration management. The Node Object, as the name suggests, is an object of the class Node (http://rubydoc.info/gems/chef/Chef/Node). The node object consists of the run-list and node attributes, which is a JSON file that is stored on the Chef server. The chef-client gets a copy of the node object from the Chef server and maintains the state of the node.

Attributes: an attribute is a specific detail about a node, such as an IP address, a host name, a list of loaded kernel modules, etc.

Data-bags: data bags are JSON files used to store data that is needed across all nodes and is not tied to a particular cookbook. They can be accessed inside cookbooks and attribute files using search. Examples: user profiles, groups, users, etc. They are used by roles and environments, a form of persistence available across all the nodes.

Now, let's explore the node object and see the attributes and data bags. We will also see how we can modify and set them. First let's see which nodes are registered with the chef-server:

Anirudhs-MacBook-Pro:chef-repo anirudh$ knife node list
aws-linux-node
aws-node-ubuntu
awsnode

Now let's see the details of the node awsnode:

Anirudhs-MacBook-Pro:chef-repo anirudh$ knife node show awsnode
Node Name:   awsnode
Environment: _default
FQDN:        ip-172-31-36-73.us-west-2.compute.internal
IP:          172.31.36.73
Run List:    recipe[apache]
Roles:
Recipes:     apache, apache::default
Platform:    redhat 7.0
Tags:

Finding specific attributes: you can find the fqdn of the AWS node.

Anirudhs-MacBook-Pro:chef-repo anirudh$ knife node show awsnode -a fqdn
awsnode:
  fqdn: ip-172-31-36-73.us-west-2.compute.internal

Search: search is one of the best features of Chef. The Chef Server uses Solr for searching the node objects, so we can provide Solr-style queries to search the JSON node object attributes and data-bags. Let's see how we can search all the nodes and show their fqdn (fully qualified domain name):

Anirudhs-MacBook-Pro:chef-repo anirudh$ knife search node "*:*" -a fqdn
2 items found

node1:
  fqdn: centos63.example.com

awsnode:
  fqdn: ip-172-31-36-73.us-west-2.compute.internal

Changing the defaults using attributes

Let's try to change some defaults in our apache cookbook using attributes. In the /chef-repo/cookbooks/apache/attributes folder we can find the file default.rb (create it if it does not exist). Add the following:

default["apache"]["indexfile"]="index1.html"

Now go to the folder cookbooks/apache/files/default and create a file index1.html:

<html>
<body>
<h1> Dude!! This is index1.html, it has been changed by chef!</h1>
</body>
</html>

The last thing we need to do to get this working is change the recipe and tell it to pick the default index file from the node attribute 'indexfile' which we have just set.
So, open the file cookbooks/apache/recipes/default.rb and append this:

cookbook_file "/var/www/index.html" do
  source node["apache"]["indexfile"]
  mode "0644"
end

Now upload the cookbook to the chef server using the command:

Anirudhs-MacBook-Pro:chef-repo anirudh$ knife cookbook upload apache

And then go to the node, and run the chef-client:

opscode@awsnode:~$ sudo chef-client

Now, hit the external IP of the node in the browser, and we can see the change. So, we have just used an attribute to change the default index page of the Apache server.

An important thing to note here is the precedence of setting attributes. Defaults in a recipe take precedence over the attribute files, and a Role takes precedence over recipes. The order of precedence is as follows:

Ohai > Role > Environment > Recipe > Attribute

Roles:

A Role tells us what a particular node is acting as, the type of the node: is it a "web server", a "database", etc. The use of this feature is that we can associate a run_list with it. So, instead of providing recipes as the run_list of the node, we will associate the run_list with a role and then apply this role to the node.

Creating a role:

knife role create webserver

Check that the role has been created:

Anirudhs-MacBook-Pro:chef-repo anirudh$ knife role show webserver
chef_type:           role
default_attributes:
  apache:
    sites:
      admin:
        port: 8000
description:         Web Server
env_run_lists:
json_class:          Chef::Role
name:                webserver
override_attributes:
run_list:            recipe[apache]

This role we just created has the apache recipe in its run_list. Assign this role to the node "awsnode":

Anirudhs-MacBook-Pro:chef-repo anirudh$ knife node run_list add awsnode 'role[webserver]'
awsnode:
  run_list:
    recipe[apache]
    role[webserver]

Upload this role to the chef-server:

Anirudhs-MacBook-Pro:chef-repo anirudh$ knife role from file webserver.rb

Now run chef-client on the node.

Environment:

Environment means a QA, dev or production environment. We can assign a node to an environment, and then apply some environment-specific attributes. It is a mere tagging of nodes; environment attributes DO NOT supersede role attributes.

In the coming blogs we will see how we can define dev, QA and production environments, apply different roles to nodes, configure attributes and data-bags, and make a complete eco-system.

Reference: Configuring chef Part-2 from our JCG partner Anirudh Bhatnagar at the anirudh bhatnagar blog.

User Stories are Rarely Appropriate

All tools are useful when used appropriately, and User Stories are no different. User stories are fantastic when used in small teams on small projects where the team is co-located and has easy access to customers. User stories can quickly fall apart under any of the following situations:

the team or project is not small
the team is not in a single location
customers are hard to access
the project end date must be relatively fixed

User stories were introduced as a core part of Extreme Programming (XP). Extreme Programming assumes you have small co-located teams; relax (or abandon) any of these constraints and you will probably end up with a process out of control. XP, and hence user stories, works in high-intensity environments where there are strong feedback loops inside the team and with customers:

Individuals and Interactions over Processes and Tools
Customer Collaboration over Contract Negotiation

User stories need intense intra-team / customer communication to succeed

User stories are a light-weight methodology that facilitates intense interactions between customers and developers and puts the emphasis on the creation of code, not documentation. Their simplicity makes it easy for customers to help write them, but they must be complemented with timely interactions so that issues can be clarified.

Large teams dilute interactions between developers; infrequent communication leads to a lack of team synchronization. Most organizations break larger teams into smaller groups where communication is primarily via email or managers; this kills communication and interaction.

Larger projects have non-trivial architectures. Building a non-trivial architecture by only looking at the end-user requirements is impossible. This is like only having all the leaves of a tree and thinking you can quickly determine where the branches and the trunk must be.

User stories don't work with teams where intense interaction is not possible. Teams distributed over multiple locations or time zones do not allow intense interaction. You are delusional if you think regular conference calls constitute intense interaction; most stand-up calls done via conference degrade into design or defect sessions.

When the emphasis is on the writing of code, it is critical that customers can be accessed in a timely fashion. If your customers are only indirectly accessible through product managers or account representatives every few days, then you will end up with tremendous latency. Live weekly demos with customers are necessary to flush out misunderstandings quickly and keep you on the same page.

User stories are virtually impossible to estimate. Often, we use user stories because there is a high degree of requirements uncertainty, either because the requirements are unknown or because it is difficult to get consistent requirements from customers. Since user stories are difficult to estimate, especially since you don't know all the requirements, project end dates are impossible to predict with accuracy.

To summarize, intense interactions between customers and developers are critical for user stories to be effective because this does several things:

it keeps all the customers and developers on the same page
it flushes out misunderstandings as quickly as possible

All of the issues listed initially dilute the intensity of communication either between the team members or between the developers and customers. Each issue that increases the latency of communication will increase misunderstandings and increase the time it takes to find and remove defects.
So if you have any of the following:

• large or distributed teams
• a project with non-trivial architecture
• difficult access to customers, i.e. high latency
• high requirements uncertainty but a fixed project end date

then user stories are probably not your best choice of requirements methodology. At best you may be able to complement your user stories with storyboards; at worst you may need some light-weight form of use case.

Light-weight use case tutorial:
• (1 of 4) A use case is a dialog
• (2 of 4) Use case diagrams (UML)
• (3 of 4) Adding screens and reports
• (4 of 4) Adding minimal execution context

Other requirements articles:
• Shift Happens (long)
• Don’t manage enhancements in the Bug Tracker
• When BA means B∪ll$#!t Artist

Reference: User Stories are Rarely Appropriate from our JCG partner Dalip Mahal at the Accelerated Development blog....

The Knapsack problem

I found the Knapsack problem tricky and interesting at the same time. I am sure if you are visiting this page, you already know the problem statement but just for the sake of completion:

Problem: Given a knapsack of a maximum capacity of W and N items, each with its own value and weight, throw items into the knapsack such that the final contents have the maximum value. Yikes !!!

Link to the problem page in wiki

Here’s the general way the problem is explained – Consider a thief who gets into a home to rob and carries a knapsack. There are a fixed number of items in the home – each with its own weight and value – jewellery, with low weight and high value, vs. tables, with less value but much heavier. To add fuel to the fire, the thief has an old knapsack which has limited capacity. Obviously, he can’t split the table into half or the jewellery into 3/4ths. He either takes it or leaves it.

Example:

Knapsack max weight: W = 10 (units)
Total items: N = 4
Values of items: val[] = {10, 40, 30, 50}
Weights of items: wt[] = {5, 4, 6, 3}

A cursory look at the example data tells us that the max value we could accommodate within the limit of max weight 10 is 50 + 40 = 90, with a weight of 7.

Approach: The way this is optimally solved is using dynamic programming – solving smaller knapsack problems and then expanding them for the bigger problem. Let’s build an Item x Weight array called V (the value array):

V[N][W] = 4 rows * 10 columns

Each of the values in this matrix represents a smaller knapsack problem.

Base case 1: Let’s take the case of column 0. It just means that the knapsack has 0 capacity. What can you hold in it? Nothing. So, let’s fill it all with 0s.

Base case 2: Let’s take the case of row 0. It just means that there are no items in the house. What do you hold in your knapsack if there are no items? Nothing again !!! All zeroes.

Solution:

Now, let’s start filling in the array row-wise. What do row 1 and column 1 mean? That given the first item (row), can you accommodate it in the knapsack with capacity 1 (column)? Nope. The weight of the first item is 5. So, let’s fill in 0. In fact, we wouldn’t be able to fill in anything until we reach column 5 (weight 5). Once we reach column 5 (which represents weight 5) on the first row, it means that we could accommodate item 1. Let’s fill in 10 there (remember, this is a value array).

Moving on, for weight 6 (column 6), can we accommodate anything else with the remaining weight of 1 (weight – weight of this item => 6 – 5)? Hey, remember, we are on the first item. So, it is kind of intuitive that the rest of the row will just have the same value too, since we are unable to add in any other item for the extra weight that we have got.

So, the next interesting thing happens when we reach column 4 in the row for Item 2. The current running weight is 4. We should check the following cases:

• Can we accommodate Item 2? Yes, we can. Item 2’s weight is 4.
• Is the value for the current weight higher without Item 2? Check the previous row for the same weight. Nope, the previous row* has 0 in it, since we were not able to accommodate Item 1 in weight 4.
• Can we accommodate two items within the same weight so that we could maximize the value? Nope. The remaining weight after deducting Item 2’s weight is 0.

* Why the previous row? Simply because the previous row at weight 4 is itself a smaller knapsack solution which gives the max value that could be accumulated for that weight until that point (traversing through the items).
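Putting those three checks together, the rule for filling a single cell can be stated compactly. This is only a restatement of the logic just described, using the same array names (V, val, wt) that appear in the full implementation further down; the helper name cell is purely illustrative:

// Value of one cell V[item][weight]: either skip the current item,
// or take it and add the best value achievable with the remaining weight.
static int cell(int[][] V, int[] val, int[] wt, int item, int weight) {
    if (wt[item - 1] > weight) {
        return V[item - 1][weight];                               // item does not fit
    }
    return Math.max(
            V[item - 1][weight],                                  // without the current item
            val[item - 1] + V[item - 1][weight - wt[item - 1]]);  // with the current item
}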
Exemplifying:

• The value of the current item = 40
• The weight of the current item = 4
• The weight that is left over = 4 – 4 = 0
• Check the row above (the item above in the case of Item 1, or the cumulative max value in the case of the rest of the rows). For the remaining weight 0, are we able to accommodate Item 1? Simply put, is there any value at all in the row above for the given weight?

The calculation goes like so:

Take the max value for the same weight without this item: previous row, same weight = 0
=> V[item-1][weight]

Take the value of the current item + the value that we could accommodate with the remaining weight: value of the current item + value in the previous row at the remaining weight 0 (total weight until now (4) - weight of the current item (4))
=> val[item-1] + V[item-1][weight-wt[item-1]]

The max of the two is 40 (0 vs 40).

The next and most important event happens at column 9 and row 2, meaning we have a weight of 9 and two items. Looking at the example data, we could accommodate the first two items. Here, we consider a few things:

1. The value of the current item = 40
2. The weight of the current item = 4
3. The weight that is left over = 9 - 4 = 5
4. Check the row above. At the remaining weight 5, are we able to accommodate Item 1?

So, the calculation is:

Take the max value for the same weight without this item: previous row, same weight = 10

Take the value of the current item + the value that we could accumulate with the remaining weight: value of the current item (40) + value in the previous row at weight 5 (total weight until now (9) - weight of the current item (4)) = 40 + 10 = 50

10 vs 50 => 50.

At the end of solving all these smaller problems, we just need to return the value at V[N][W] – Item 4 at weight 10.

Complexity

Analyzing the complexity of the solution is pretty straight-forward. We just have a loop over W within a loop over N => O(NW).

Implementation: Here comes the obligatory implementation code in Java:

class Knapsack {

    public static void main(String[] args) throws Exception {
        int val[] = {10, 40, 30, 50};
        int wt[] = {5, 4, 6, 3};
        int W = 10;

        System.out.println(knapsack(val, wt, W));
    }

    public static int knapsack(int val[], int wt[], int W) {

        //Get the total number of items.
        //Could be wt.length or val.length. Doesn't matter
        int N = wt.length;

        //Create a matrix.
        //Items are in rows and weights are in columns, +1 on each side
        int[][] V = new int[N + 1][W + 1];

        //What if there are no items at home -
        //set all columns at row 0 to be 0
        for (int col = 0; col <= W; col++) {
            V[0][col] = 0;
        }

        //What if the knapsack's capacity is 0 -
        //fill the first column with 0
        for (int row = 0; row <= N; row++) {
            V[row][0] = 0;
        }

        for (int item = 1; item <= N; item++) {

            //Let's fill the values row by row
            for (int weight = 1; weight <= W; weight++) {

                //Is the current item's weight less
                //than or equal to the running weight
                if (wt[item - 1] <= weight) {

                    //Given a weight, check if the value of the current
                    //item + the value that we could afford
                    //with the remaining weight is greater than the value
                    //without the current item itself
                    V[item][weight] = Math.max(
                            val[item - 1] + V[item - 1][weight - wt[item - 1]],
                            V[item - 1][weight]);
                } else {
                    //If the current item's weight is more than the
                    //running weight, just carry forward the value
                    //without the current item
                    V[item][weight] = V[item - 1][weight];
                }
            }
        }

        //Printing the matrix
        for (int[] rows : V) {
            for (int col : rows) {
                System.out.format("%5d", col);
            }
            System.out.println();
        }

        return V[N][W];
    }
}

Reference: The Knapsack problem from our JCG partner Arun Manivannan at the Rerun.me blog....
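As a follow-up sketch: since each row of V only depends on the row directly above it, the same O(NW) recurrence can also be run with a single one-dimensional array of size W + 1, iterating the weights from high to low so that each item is counted at most once. The class name KnapsackOneDim and the array name best are purely illustrative; the example data is the same as above.

class KnapsackOneDim {

    public static void main(String[] args) {
        int val[] = {10, 40, 30, 50};
        int wt[] = {5, 4, 6, 3};
        System.out.println(knapsack(val, wt, 10)); // expected output: 90
    }

    // best[w] holds the max value achievable with capacity w
    // using the items considered so far.
    public static int knapsack(int val[], int wt[], int W) {
        int[] best = new int[W + 1];
        for (int item = 0; item < wt.length; item++) {
            // Iterate weights downwards so that best[w - wt[item]] still refers
            // to the state before this item was considered (0/1 behaviour).
            for (int w = W; w >= wt[item]; w--) {
                best[w] = Math.max(best[w], val[item] + best[w - wt[item]]);
            }
        }
        return best[W];
    }
}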

An Introduction to Generics in Java – Part 6

This is a continuation of an introductory discussion on Generics, previous parts of which can be found here. In the last article we were discussing recursive bounds on type parameters. We saw how a recursive bound helped us to reuse the vehicle comparison logic. At the end of that article, I suggested that a possible type mixing may occur when we are not careful enough. Today we will see an example of this. The mixing can occur if someone mistakenly creates a subclass of Vehicle in the following way:

/**
 * Definition of Vehicle
 */
public abstract class Vehicle<E extends Vehicle<E>> implements Comparable<E> {
    // other methods and properties

    public int compareTo(E vehicle) {
        // method implementation
    }
}

/**
 * Definition of Bus
 */
public class Bus extends Vehicle<Bus> {}

/**
 * BiCycle, new subtype of Vehicle
 */
public class BiCycle extends Vehicle<Bus> {}

/**
 * Now this class’s compareTo method will take a Bus type
 * as its argument. As a result, you will not be able to compare
 * a BiCycle with another BiCycle, but with a Bus.
 */
cycle.compareTo(anotherCycle); // This will generate a compile time error
cycle.compareTo(bus);          // but you will be able to do this without any error

This type of mix-up does not occur with enums because the JVM takes care of subclassing and creating instances for enum types, but if we use this style in our own code then we have to be careful.

Let’s talk about another interesting application of recursive bounds. Consider the following class:

public class MyClass {
    private String attrib1;
    private String attrib2;
    private String attrib3;
    private String attrib4;
    private String attrib5;

    public MyClass() {}

    public String getAttrib1() { return attrib1; }
    public void setAttrib1(String attrib1) { this.attrib1 = attrib1; }

    public String getAttrib2() { return attrib2; }
    public void setAttrib2(String attrib2) { this.attrib2 = attrib2; }

    public String getAttrib3() { return attrib3; }
    public void setAttrib3(String attrib3) { this.attrib3 = attrib3; }

    public String getAttrib4() { return attrib4; }
    public void setAttrib4(String attrib4) { this.attrib4 = attrib4; }

    public String getAttrib5() { return attrib5; }
    public void setAttrib5(String attrib5) { this.attrib5 = attrib5; }
}

If we want to create an instance of this class, then we can do this:

MyClass mc = new MyClass();
mc.setAttrib1("Attribute 1");
mc.setAttrib2("Attribute 2");

The above code creates an instance of the class and initializes the properties. If we could use method chaining here, then we could have written:

MyClass mc = new MyClass().setAttrib1("Attribute 1")
        .setAttrib2("Attribute 2");

which obviously looks much better than the first version.
However, to enable this type of method chaining, we need to modify MyClass in the following way:

public class MyClass {
    private String attrib1;
    private String attrib2;
    private String attrib3;
    private String attrib4;
    private String attrib5;

    public MyClass() {}

    public String getAttrib1() { return attrib1; }
    public MyClass setAttrib1(String attrib1) {
        this.attrib1 = attrib1;
        return this;
    }

    public String getAttrib2() { return attrib2; }
    public MyClass setAttrib2(String attrib2) {
        this.attrib2 = attrib2;
        return this;
    }

    public String getAttrib3() { return attrib3; }
    public MyClass setAttrib3(String attrib3) {
        this.attrib3 = attrib3;
        return this;
    }

    public String getAttrib4() { return attrib4; }
    public MyClass setAttrib4(String attrib4) {
        this.attrib4 = attrib4;
        return this;
    }

    public String getAttrib5() { return attrib5; }
    public MyClass setAttrib5(String attrib5) {
        this.attrib5 = attrib5;
        return this;
    }
}

and then we will be able to use method chaining for instances of this class. However, if we want to use method chaining where inheritance is involved, things kind of get messy:

public abstract class Parent {
    private String attrib1;
    private String attrib2;
    private String attrib3;
    private String attrib4;
    private String attrib5;

    public Parent() {}

    public String getAttrib1() { return attrib1; }
    public Parent setAttrib1(String attrib1) {
        this.attrib1 = attrib1;
        return this;
    }

    public String getAttrib2() { return attrib2; }
    public Parent setAttrib2(String attrib2) {
        this.attrib2 = attrib2;
        return this;
    }

    public String getAttrib3() { return attrib3; }
    public Parent setAttrib3(String attrib3) {
        this.attrib3 = attrib3;
        return this;
    }

    public String getAttrib4() { return attrib4; }
    public Parent setAttrib4(String attrib4) {
        this.attrib4 = attrib4;
        return this;
    }

    public String getAttrib5() { return attrib5; }
    public Parent setAttrib5(String attrib5) {
        this.attrib5 = attrib5;
        return this;
    }
}

public class Child extends Parent {
    private String attrib6;
    private String attrib7;

    public Child() {}

    public String getAttrib6() { return attrib6; }
    public Child setAttrib6(String attrib6) {
        this.attrib6 = attrib6;
        return this;
    }

    public String getAttrib7() { return attrib7; }
    public Child setAttrib7(String attrib7) {
        this.attrib7 = attrib7;
        return this;
    }
}

/**
 * Now try using method chaining for instances of Child
 * in the following way, and you will get compile time errors.
 */
Child c = new Child().setAttrib1("Attribute 1").setAttrib6("Attribute 6");

The reason for this is that even though Child inherits all the setters from its parent, the return type of all those setter methods is Parent, not Child. So the first setter will return a reference of type Parent, and calling setAttrib6 on it will result in a compilation error, because Parent has no such method. We can resolve this problem by introducing a generic type parameter on Parent and defining a recursive bound on it.
All of its children will pass themselves as the type argument when they extend from it, ensuring that the setter methods return references of their own type:

public abstract class Parent<T extends Parent<T>> {
    private String attrib1;
    private String attrib2;
    private String attrib3;
    private String attrib4;
    private String attrib5;

    public Parent() {}

    public String getAttrib1() { return attrib1; }
    @SuppressWarnings("unchecked")
    public T setAttrib1(String attrib1) {
        this.attrib1 = attrib1;
        return (T) this;
    }

    public String getAttrib2() { return attrib2; }
    @SuppressWarnings("unchecked")
    public T setAttrib2(String attrib2) {
        this.attrib2 = attrib2;
        return (T) this;
    }

    public String getAttrib3() { return attrib3; }
    @SuppressWarnings("unchecked")
    public T setAttrib3(String attrib3) {
        this.attrib3 = attrib3;
        return (T) this;
    }

    public String getAttrib4() { return attrib4; }
    @SuppressWarnings("unchecked")
    public T setAttrib4(String attrib4) {
        this.attrib4 = attrib4;
        return (T) this;
    }

    public String getAttrib5() { return attrib5; }
    @SuppressWarnings("unchecked")
    public T setAttrib5(String attrib5) {
        this.attrib5 = attrib5;
        return (T) this;
    }
}

public class Child extends Parent<Child> {
    private String attrib6;
    private String attrib7;

    public String getAttrib6() { return attrib6; }
    public Child setAttrib6(String attrib6) {
        this.attrib6 = attrib6;
        return this;
    }

    public String getAttrib7() { return attrib7; }
    public Child setAttrib7(String attrib7) {
        this.attrib7 = attrib7;
        return this;
    }
}

Notice that we have to explicitly cast this to type T because the compiler does not know whether this conversion is possible, even though it is (T is by definition bounded by Parent<T>). Also, since we are casting an object reference to T, the compiler will issue an unchecked warning. To suppress it we used @SuppressWarnings("unchecked") above the setters. With the above modifications, it’s perfectly valid to do this:

Child c = new Child().setAttrib1("Attribute 1")
        .setAttrib6("Attribute 6");

When writing setters this way, we should be careful not to use recursive bounds for any other purpose, such as accessing a child’s state from the parent, because that would expose the parent to the internal details of its subclasses and eventually break the encapsulation. With this post I finish the basic introduction to Generics. There are so many things that I did not discuss in this series, because I believe they are beyond the introductory level. Until next time.

Reference: An Introduction to Generics in Java – Part 6 from our JCG partner Sayem Ahmed at the Random Thoughts blog....
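A common variation of the same recursive-bound idea, not used in the article above, avoids the unchecked cast altogether by declaring an abstract self() method that every subclass implements by returning this. The stripped-down Parent/Child pair below is only a sketch; the method name self() and the single attributes are illustrative:

// Parent.java
public abstract class Parent<T extends Parent<T>> {
    private String attrib1;

    // Each concrete subclass returns itself, so no cast and
    // no @SuppressWarnings("unchecked") is needed in the setters.
    protected abstract T self();

    public T setAttrib1(String attrib1) {
        this.attrib1 = attrib1;
        return self();
    }
}

// Child.java
public class Child extends Parent<Child> {
    private String attrib6;

    @Override
    protected Child self() { return this; }

    public Child setAttrib6(String attrib6) {
        this.attrib6 = attrib6;
        return this;
    }
}

// Usage stays exactly the same:
// Child c = new Child().setAttrib1("Attribute 1").setAttrib6("Attribute 6");

Incidentally, java.lang.Enum itself is declared as Enum<E extends Enum<E>>, which is the same recursive-bound pattern discussed throughout this series.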
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.
