
What's New Here?


Java EE CDI Qualifiers: Quick Peek

Qualifiers are the mainstay of type safety and loose coupling in Contexts and Dependency Injection (CDI). Why? Without CDI, we would be injecting Java EE components in a manner similar to the snippets below. Note: this will not actually compile and is just a hypothetical code snippet.

Example 1  Example 2

What's wrong with the above implementations?

- Not type safe – uses a String to specify the fully qualified name of an implementation class (see Example 1)
- Tightly couples the BasicCustomerPortal class to the BasicService class (see Example 2)

This is exactly why CDI does not do injection this way! Qualifiers help promote:

- Loose coupling – an explicit class is not introduced within another; implementations are detached from each other
- Strong typing (type safety) – no String literals to define injection properties/metadata

Qualifiers also serve as:

- Binding components between beans and Decorators
- Event selectors for Observers (event consumers)

How to use Qualifiers? (CDI Qualifiers Simplified) The simplified steps:

1. Create a Qualifier
2. Apply Qualifiers to different implementation classes
3. Use the Qualifiers along with @Inject to inject the instance of the appropriate implementation within a class

This was not a detailed or in-depth post about CDI Qualifiers; it's more of a quick reference. Click for source code. More on CDI: the Specification page (CDI 1.2) and the official CDI page. Thanks for reading!

Reference: Java EE CDI Qualifiers: Quick Peek from our JCG partner Abhishek Gupta at the Object Oriented.. blog....
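As a rough illustration of the type-safety argument, the following plain-Java sketch mimics what the CDI container does with qualifiers. Note this is a simulation, not real CDI: in an actual application the annotations would be meta-annotated with javax.inject.Qualifier and the container would perform the matching; all class names here are made up.

```java
import java.lang.annotation.Annotation;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Stand-ins for CDI qualifier annotations (in real CDI: @Qualifier meta-annotation)
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface Basic {}

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface Premium {}

interface CustomerService {
    String serviceLevel();
}

@Basic
class BasicService implements CustomerService {
    public String serviceLevel() { return "basic"; }
}

@Premium
class PremiumService implements CustomerService {
    public String serviceLevel() { return "premium"; }
}

public class QualifierSketch {

    // Mimics the container: pick the implementation class carrying the requested
    // qualifier annotation. No String class names, and the client never names a
    // concrete implementation directly.
    static CustomerService select(Class<? extends Annotation> qualifier,
                                  Class<?>... candidates) {
        for (Class<?> candidate : candidates) {
            if (candidate.isAnnotationPresent(qualifier)) {
                try {
                    return (CustomerService) candidate.getDeclaredConstructor().newInstance();
                } catch (ReflectiveOperationException e) {
                    throw new IllegalStateException(e);
                }
            }
        }
        throw new IllegalStateException("No bean matches @" + qualifier.getSimpleName());
    }

    public static void main(String[] args) {
        CustomerService service =
                select(Premium.class, BasicService.class, PremiumService.class);
        System.out.println(service.serviceLevel()); // prints "premium"
    }
}
```

Swapping @Premium for @Basic in the select() call changes which implementation is chosen, without touching either service class, which is the loose coupling the article describes.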

Do not underestimate the power of the fun

Do you like your tools? Are you working with the technology, programming language and tools that you like? Are you having fun working with them? When a new project is started, the company has to decide what technologies, frameworks and tools will be used to develop it. The most common-sense factor to take into consideration is the tool's ability to get the job done. However, especially in the Java world, there is usually more than one tool able to pass this test. Well, usually there are tens, if not hundreds, of them. So other factors have to be used. The next important, and also quite obvious, one is how easy the tool is to use, and how fast we can get the job done with it. "Easy" is subjective, and "fast" depends strongly on the tool itself and the environment it is used in, like the tool's learning curve or the developers' knowledge of it. While the developers' knowledge of a tool is usually taken into account, their desire to work with it (or not) usually is not. Here I would like to convince you that it is really important too.

Known != best

There are cases where it's better to choose cool tools instead of known ones. Yes, the developers need to learn them, and it obviously costs some time, but I believe it is an investment that pays off later. Especially if the alternatives are ones that the devs are experienced with, but don't want to use any more. There are probably some people who like to code in the same language and use the same frameworks for 10 years, but I don't know many of them. Most of the coders I know like to learn new languages and use new frameworks, tools and libs. Sadly, some of them can't, because of corporate policies, customer requirements or other restrictions. Why do I believe such an investment pays off? If you think a developer writes 800 LOC/day, so 100 LOC/hour, so 10 LOC/minute… well, you're wrong. Developers are not machines working at a constant speed from 9 to 5.
Sometimes we are "in the zone", coding like crazy (let's leave code quality aside); sometimes we are creative, working with pen and paper, inventing clever solutions, algorithms etc.; and sometimes we are just bored, forcing ourselves to put the 15th form on the page or write boilerplate code.

The power of fun

Now ask yourself, which situation are you (or your developers) in most often? And if you are often bored, working a 5th year with the same technology and tools, think about the time when you were learning them. Remember when you were using them for the first time? Were you bored then? Or rather excited? Were you less productive? It's a truism, but we are not productive when we need to force ourselves to work. Maybe it's a good idea to change your work to be more fun? Use some tools you don't know (yet), but really want to try? It might seem you are going to be less productive, at least at the beginning, but is that really true? Moreover, if a tool lets you write less boilerplate code, or gives you closures, or anything else that can make you faster and more efficient in the long run, it seems a really good investment. There is one more advantage of cool and fun tools. If you are a company owner, do you want your business partners to consider your company expensive but very good, delivering high-quality services and worth the price, or not-so-good but cheap? I don't know any software company that wants the latter. We all want to be good, and to earn more, but well-deserved, money. Now think about the good and best developers: where do they go? Do they choose companies where they have to work with old, boring tools and frameworks? Even when you pay them well, the best devs are not motivated by money. You probably know this already. Good devs are the ones who like to learn and discover new stuff. There is no better way to learn new stuff than working with it.
And there are not many things that are as much fun for a geek as working with the languages, technologies and tools they like. So, when choosing tools for your next project, take the fun factor into account. Or even better: let the developers make the choice.

Reference: Do not underestimate the power of the fun from our JCG partner Pawel Stawicki at the Java, the Programming, and Everything blog....

Beyond the Product Demo: Choosing the Right Validation Technique in Scrum

Summary

Scrum employs the product demo as its default technique to understand if the right product with the right features is being developed. While a product demo can be very effective, it can also be limiting. Like any research and validation technique, demos have their strengths and weaknesses. This post provides an overview of alternative validation methods so you can choose the one that is best suited for your product.

Feedback and Data in Scrum

Collecting feedback and data is a vital aspect of Scrum. It enables you to learn if the right product with the right features is being created. Scrum employs a three-step process to achieve this: a product increment is created, which is then exposed to the users, the customers, and the other stakeholders. This generates feedback and data, which triggers product backlog changes, as the following picture shows.

To leverage this process, the techniques used to gather the data and validate the product are crucial. If you use the wrong method, you are likely to collect wrong or insufficient data and draw the wrong conclusions. While there is a range of techniques available, Scrum recognises only one: the product demo, which is performed in the sprint review meeting. But this does not mean that you should not or cannot employ another technique! The opposite is true: the sprint demo may or may not be right for your product. Additionally, using a single data collection method over an extended period of time is usually a mistake. Every technique has its strengths and limitations, and none is always appropriate. You should hence choose the one that is most helpful for your product and combine complementing techniques.

A Selection of Helpful Techniques

To help you select the right method, I have compiled below five techniques commonly used in Scrum. I discuss each technique in the following sections.

- Product demo – Description: demo the latest increment; listen to the feedback, and ask open questions. Strengths: tests limited functionality; helps people imagine what using the product would be like; feedback is available immediately. Weaknesses: feedback is based on what users hear and see; danger of influencing users.
- Usability test – Description: observe how users interact with the prototype or product in a controlled environment such as a lab; possibly collect data using analytics software. Strengths: understand if users employ the product as anticipated; data is available immediately. Weaknesses: artificial environment: users do not use the product "in the wild"; observer effect.
- Release – Description: release the software to a group of target users and collect data using analytics. Strengths: find out how the product is used in its target environment; reach a larger test group; run A/B tests. Weaknesses: does not explain why users employ the product in a certain way; can take time to collect enough data.
- Observation – Description: observe users employing the product, preferably in its target environment. Strengths: understand how users interact with the product. Weaknesses: can be time consuming; danger of observer effect and bias.
- Spike – Description: create an executable prototype to address an architecture risk. Strengths: understand if an architecture or technology choice is feasible. Weaknesses: can lead to an over-engineered solution.

Product Demo

As its name suggests, the product demo presents the latest product increment to the appropriate users, customers, and internal stakeholders. The presenter explains how the users would employ the product to get a job done. Product demos are particularly valuable in the early sprints, as they allow you to get immediate feedback on limited functionality and very small increments: by wrapping the increment in a story, people can imagine what it would be like to use the product. This strength is also a major weakness: the feedback is based on what people see and hear, not on their actual experience of using the product.
What's more, the presenter can influence the users inappropriately by talking up the product or asking closed questions, and powerful individuals like a senior manager can influence the views of the group. My post "The Product Demo as an Agile Market Research Technique" shares more tips and tricks on employing product demos effectively.

Usability Test

A usability test allows you to understand how users interact with your product. Usability testing typically takes place in a controlled environment such as a meeting room or lab. Target users are asked to perform a task using the latest product increment, which may be a paper prototype or executable software. You then observe and record how people employ the product, and you can end the test by asking the participants about their experiences and impressions. It is often possible to conduct the test in the sprint review meeting. While a usability test quickly generates real user data, the artificial environment and the observation can cause the users to act differently compared to working with the product in "the wild", that is, in the environment in which the product will be used, for instance, at home, at work, in the car, on the train. This is where the next technique comes in.

Release

Releasing software means giving a group of target users access to an early version of the product and asking them to use it in its target environment. The usage data is then recorded by an analytics tool, for instance, Google Analytics. The product is now employed in its target environment, and a larger test group can be employed, which reduces the risk of collecting data that is not representative of the targeted market segment. With the right analytics software, you can also collect data such as who interacts with which product feature when and how often, and which variant of a feature people prefer (A/B test). On the downside, the technique cannot explain why people use a certain feature, or why they don't.
Additionally, there is a delay: it takes people time to download, install, and start using the latest increment before enough data is available to draw the right conclusions. In a Scrum context, this means that you either have to postpone the next sprint or continue with a different feature to mitigate the risk of moving in the wrong direction. If you primarily use releases, then your sprint review meeting changes: it is now used to sync the internal stakeholders and review the project progress rather than to validate the product.

Observation

Observation means watching users carefully as they employ the product in its target environment. This helps you understand how people interact with the product and use its features. It also allows you to debrief with the user and to learn about the user's experience whilst interacting with the product. The main drawback of observation is that it can be time consuming, particularly if you want to observe more than a few users. Users may also act differently with somebody watching them (observer effect), and your own biases may interfere with your ability to see clearly.

Spike

Spikes are prototypes to test an architectural or technology-related assumption, for instance, that Enterprise JavaBeans will be able to deliver the desired performance. They are usually cheap to create, generate the relevant knowledge quickly, and allow you to see if you can meet critical nonfunctional requirements. Spikes become problematic if you employ them too much, and if you worry more about how to build the product than why and what to build. If this happens, the result is an over-engineered product and a solution-centric mindset rather than a user-centric one. I once met a team that had been building spikes for nearly two years without any shippable code to show. Telling them to stop and think about the users was quite a shock for them.

Choosing the Right Technique

All the techniques discussed have their strengths and weaknesses.
While you should always choose the technique that helps you meet your sprint goal and validate the increment effectively, I find it a helpful rule of thumb to use product demos, usability tests and spikes in the early sprints, and releases and observation in the later sprints, as the following picture illustrates. The diagram above tries to balance the strengths and weaknesses of the different techniques, and it corresponds to a risk-driven approach where the key risks and critical assumptions are tackled in the early sprints. (For more info on assumptions, risks and learning in Scrum, see my post "Get Your Focus Right: Learning and Execution in Scrum".) Whichever techniques you choose, don't make the mistake of relying on a single technique for an extended period of time. Every technique has its benefits and drawbacks, and no single technique is perfect. Combine qualitative and quantitative techniques, for instance, releases and observation, to leverage their respective strengths and mitigate their weaknesses. Don't be shy to experiment with different techniques to find out what works best for your product, and use the sprint retrospective to review their effectiveness.

Reference: Beyond the Product Demo: Choosing the Right Validation Technique in Scrum from our JCG partner Roman Pichler at the Pichler's blog blog....

Difference between State and Strategy Design Pattern in Java

In order to make proper use of the State and Strategy design patterns in a core Java application, it's important for a Java developer to clearly understand the difference between them. Though both the State and Strategy design patterns have a similar structure, and both of them are based upon the Open Closed design principle, which represents the 'O' in the SOLID design principles, they are totally different in their intent. The Strategy design pattern in Java is used to encapsulate a related set of algorithms to provide runtime flexibility to the client. The client can choose any algorithm at runtime, without changing the Context class, which uses the Strategy object. Some popular examples of the Strategy pattern are writing code which uses algorithms, e.g. encryption, compression or sorting algorithms. On the other hand, the State design pattern allows an object to behave differently in different states. Real-world objects often have state, and they behave differently in different states; e.g. a vending machine only vends items if it's in the hasCoin state, and it will not vend until you put a coin in it. You can now clearly see the difference between the Strategy and State patterns: their intent is different. The State pattern helps an object to manage state, while the Strategy pattern allows the client to choose different behaviour. Another difference, which is not easily visible, is who drives the change in behaviour. In the case of the Strategy pattern, it's the client, which provides a different strategy to the Context; in the State pattern, the state transition is managed by the Context or the State itself. Also, if you are managing state transitions in the State object itself, it must hold a reference to the Context, e.g. the vending machine, so that it can call the setState() method to change the current state of the Context. On the other hand, a Strategy object never holds a reference to the Context; it's the client which passes the Strategy of their choice to the Context.
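The vending machine behaviour described above can be sketched in plain Java roughly as follows. This is a minimal sketch; the class and method names are illustrative, not from any particular implementation:

```java
// State pattern sketch: each concrete state decides both the behaviour and
// the transition, and holds a reference to the Context (the machine) to
// call setState() on it, as described in the text.
interface MachineState {
    void insertCoin(VendingMachine machine);
    void vend(VendingMachine machine);
}

class NoCoinState implements MachineState {
    public void insertCoin(VendingMachine machine) {
        machine.setState(new HasCoinState()); // transition driven by the State
    }
    public void vend(VendingMachine machine) {
        System.out.println("Insert a coin first");
    }
}

class HasCoinState implements MachineState {
    public void insertCoin(VendingMachine machine) {
        System.out.println("Coin already inserted");
    }
    public void vend(VendingMachine machine) {
        System.out.println("Item vended");
        machine.setState(new NoCoinState()); // back to the initial state
    }
}

// The Context: delegates every call to its current state object.
class VendingMachine {
    private MachineState state = new NoCoinState();

    void setState(MachineState newState) { state = newState; }
    MachineState currentState() { return state; }

    void insertCoin() { state.insertCoin(this); }
    void vend()       { state.vend(this); }
}
```

Note that the client only ever talks to VendingMachine; it never chooses or even sees a state object, which is exactly the opposite of the Strategy pattern.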
As the difference between the State and Strategy patterns is one of the popular Java design pattern questions in interviews, in this Java design pattern article we will take a closer look at it. We will explore some similarities and differences between the Strategy and State design patterns in Java, which will help to improve your understanding of both of these patterns.

Similarities between the State and Strategy Patterns

If you look at the UML diagrams of the State and Strategy design patterns, they both look very similar to each other. An object that uses a State object to change its behaviour is known as the Context object; similarly, an object which uses a Strategy object to alter its behaviour is also referred to as the Context object. Remember, the client interacts with the Context object. In the case of the State pattern, the Context delegates method calls to the State object, which is held in the form of a current-state object, while in the case of the Strategy pattern, the Context uses the Strategy object passed as a parameter or provided at the time of creating the Context object.

UML Diagram of the State Pattern in Java

This UML diagram is for the State design pattern, drawn for the classic problem of creating an object-oriented design of a vending machine in Java. You can see that the state of the vending machine is represented using an interface, which further has implementations to represent the concrete states. Each state also holds a reference to the Context object to make a transition to another state due to an action triggered by the Context.

UML Diagram of the Strategy Pattern in Java

This UML diagram is for the Strategy design pattern, implementing sorting functionality. Since there are many sorting algorithms, this design pattern lets the client choose the algorithm while sorting objects. In fact, the Java Collections framework makes use of this pattern to implement the Collections.sort() method, which is used to sort objects in Java.
The only difference is that instead of allowing the client to choose a sorting algorithm, it allows them to specify the comparison strategy by passing an instance of the Comparator or Comparable interface in Java. Let's see a couple more similarities between these two core Java design patterns:

- Both the State and Strategy patterns make it easy to add new states and strategies without affecting the Context object which uses them.
- Both of them make your code follow the Open Closed design principle, i.e. your design will be open for extension but closed for modification. In the case of the State and Strategy patterns, the Context object is closed for modification; on the introduction of a new State or a new Strategy, either you don't need to modify the Context or other states at all, or only minimal changes are required.
- Just as the Context object is started with an initial state in the State design pattern, a Context object also has a default strategy in the case of the Strategy pattern in Java.
- The State pattern wraps different behaviour in the form of different State objects, while the Strategy pattern wraps different behaviour in the form of different Strategy objects.
- Both the Strategy and State patterns rely on subclasses to implement behaviour. Every concrete strategy extends from an abstract strategy, and each state is a subclass of the interface or abstract class used to represent a state.

Difference between the Strategy and State Patterns in Java

So now we know that State and Strategy are similar in structure but their intents are different. Let's revisit some of the key differences between these design patterns.

- The Strategy pattern encapsulates a set of related algorithms, and allows the client to use interchangeable behaviours through composition and delegation at runtime. On the other hand, the State pattern helps a class to exhibit different behaviours in different states.
- Another difference between the State and Strategy patterns is that State encapsulates the state of an object, while the Strategy pattern encapsulates an algorithm or strategy.
Since states are cohesively associated with the object, they cannot be reused; but by separating a strategy or algorithm from its context, we can make it reusable. In the State pattern, an individual state can contain a reference to the Context, to implement state transitions, but strategies don't contain a reference to the Context where they are used. Strategy implementations can be passed as a parameter to the object which uses them, e.g. Collections.sort() accepts a Comparator, which is a strategy. On the other hand, state is part of the context object itself, and over time, the context object transitions from one state to another. Though both Strategy and State follow the Open Closed design principle, Strategy also follows the Single Responsibility principle: since every Strategy encapsulates an individual algorithm, different strategies are independent of each other. A change in one strategy doesn't force a change in another strategy. One more theoretical difference between the Strategy and State patterns is that the former defines the "how" part of an object, e.g. how a sorting object sorts data; the State pattern, on the other hand, defines the "what" and "when" parts of an object, e.g. what an object can do when it's in a certain state. The order of state transitions is well defined in the State pattern; there is no such requirement for the Strategy pattern, where the client is free to choose any Strategy implementation of their choice. Some of the common examples of the Strategy pattern are encapsulating algorithms, e.g. sorting, encryption or compression algorithms. If you see that your code needs to use different kinds of related algorithms, then think of using the Strategy pattern. On the other hand, recognizing a use case for the State design pattern is pretty easy: if you need to manage state and state transitions without lots of nested conditional statements, the State pattern is the pattern to use.
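The Collections.sort() case mentioned above is a real-world Strategy in the JDK, and it makes a compact runnable illustration: the client picks the comparison strategy at runtime, and the sorting context never changes.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// Strategy pattern via the JDK: Collections.sort() is the context,
// the Comparator supplied by the client is the interchangeable strategy.
public class StrategyDemo {
    public static void main(String[] args) {
        List<String> words = new ArrayList<>(Arrays.asList("pear", "fig", "banana"));

        // Strategy 1: natural (alphabetical) order, no Comparator needed
        Collections.sort(words);
        System.out.println(words); // [banana, fig, pear]

        // Strategy 2: order by length, chosen by the client at runtime
        Collections.sort(words, Comparator.comparingInt(String::length));
        System.out.println(words); // [fig, pear, banana]
    }
}
```

Switching strategies required no change to the sorting code itself, only a different Comparator from the client, which is the essence of the pattern.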
Last but not least, one of the most important differences between the State and Strategy patterns is that a change in Strategy is done by the client, but a change in State can be done by the Context or the State object itself. That's all on the difference between the State and Strategy patterns in Java. As I said, they both look similar in their class and UML diagrams; both of them enforce the Open Closed design principle and encapsulate behaviours. Use the Strategy design pattern to encapsulate an algorithm or strategy which is provided to the Context at runtime, maybe as a parameter or a composed object, and use the State pattern for managing state transitions in Java.

Reference: Difference between State and Strategy Design Pattern in Java from our JCG partner Javin Paul at the Javarevisited blog....

HashMap performance improvements in Java 8

HashMap<K, V> is a fast, versatile and ubiquitous data structure in every Java program. First some basics. As you probably know, it uses the hashCode() and equals() methods of keys to split values between buckets. The number of buckets (bins) should be slightly higher than the number of entries in a map, so that each bucket holds only a few (preferably one) values. When looking up by key, we very quickly determine the bucket (using hashCode() modulo number_of_buckets) and our item is available in constant time. This should have already been known to you. You probably also know that hash collisions have a disastrous impact on HashMap performance. When multiple hashCode() values end up in the same bucket, values are placed in an ad-hoc linked list. In the worst case, when all keys map to the same bucket, the hash map degenerates into a linked list – from O(1) to O(n) lookup time. Let's first benchmark how HashMap behaves under normal circumstances in Java 7 (1.7.0_40) and Java 8 (1.8.0-b132). To have full control over hashCode() behaviour we define our custom Key class:

```java
class Key implements Comparable<Key> {

    private final int value;

    Key(int value) {
        this.value = value;
    }

    @Override
    public int compareTo(Key o) {
        return Integer.compare(this.value, o.value);
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        Key key = (Key) o;
        return value == key.value;
    }

    @Override
    public int hashCode() {
        return value;
    }
}
```

The Key class is well-behaving: it overrides equals() and provides a decent hashCode(). To avoid excessive GC I cache immutable Key instances rather than creating them from scratch over and over:

```java
public class Keys {

    public static final int MAX_KEY = 10_000_000;
    private static final Key[] KEYS_CACHE = new Key[MAX_KEY];

    static {
        for (int i = 0; i < MAX_KEY; ++i) {
            KEYS_CACHE[i] = new Key(i);
        }
    }

    public static Key of(int value) {
        return KEYS_CACHE[value];
    }
}
```

Now we are ready to experiment a little bit.
Our benchmark will simply create HashMaps of different sizes (powers of 10, from 1 to 1 million) using a continuous key space. In the benchmark itself we will look up values by key and measure how long it takes, depending on the HashMap size:

```java
import com.google.caliper.Param;
import com.google.caliper.Runner;
import com.google.caliper.SimpleBenchmark;

public class MapBenchmark extends SimpleBenchmark {

    private HashMap<Key, Integer> map;

    @Param
    private int mapSize;

    @Override
    protected void setUp() throws Exception {
        map = new HashMap<>(mapSize);
        for (int i = 0; i < mapSize; ++i) {
            map.put(Keys.of(i), i);
        }
    }

    public void timeMapGet(int reps) {
        for (int i = 0; i < reps; i++) {
            map.get(Keys.of(i % mapSize));
        }
    }
}
```

The results confirm that HashMap.get() is indeed O(1). Interestingly, Java 8 is on average 20% faster than Java 7 in a simple HashMap.get(). The overall performance is equally interesting: even with one million entries in a HashMap, a single lookup takes less than 10 nanoseconds, which means around 20 CPU cycles on my machine*. Pretty impressive! But that's not what we were about to benchmark. Suppose that we have a very poor map key that always returns the same value. This is the worst-case scenario that defeats the purpose of using a HashMap altogether:

```java
class Key implements Comparable<Key> {

    //...

    @Override
    public int hashCode() {
        return 0;
    }
}
```

I used the exact same benchmark to see how it behaves for various map sizes (notice it's a log-log scale). The results for Java 7 are to be expected. The cost of HashMap.get() grows proportionally to the size of the HashMap itself. Since all entries are in the same bucket in one huge linked list, looking one up requires traversing half of such a list (of size n) on average. Thus the O(n) complexity as visualized on the graph. But Java 8 performs so much better! It's a log scale, so we are actually talking about several orders of magnitude better.
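The pathological case can also be reproduced in a few self-contained lines. The sketch below (class names are illustrative) forces every key into one bucket; lookups remain correct regardless, and on JDK 8+ the tree bins discussed next keep them fast because the key is Comparable:

```java
import java.util.HashMap;
import java.util.Map;

// Every key returns hashCode() == 0, so all entries land in a single bucket.
// HashMap still answers lookups correctly; on JDK 8+ the oversized bucket is
// turned into a tree, using compareTo() to order keys with equal hashes.
public class CollidingKeys {

    static final class BadKey implements Comparable<BadKey> {
        final int value;

        BadKey(int value) { this.value = value; }

        @Override public int hashCode() { return 0; } // force collisions

        @Override public boolean equals(Object o) {
            return o instanceof BadKey && ((BadKey) o).value == value;
        }

        @Override public int compareTo(BadKey o) {    // enables tree-bin ordering
            return Integer.compare(value, o.value);
        }
    }

    public static void main(String[] args) {
        Map<BadKey, Integer> map = new HashMap<>();
        for (int i = 0; i < 1000; i++) {
            map.put(new BadKey(i), i);
        }
        System.out.println(map.get(new BadKey(500))); // prints 500
    }
}
```

Removing the Comparable implementation leaves the map correct but, per the collision discussion, forfeits the tree-bin speedup.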
The same benchmark executed on JDK 8 yields O(log n) worst-case performance in case of catastrophic hash collisions, as is pictured more clearly when JDK 8 is visualized alone on a log-linear scale. What is the reason behind such a tremendous performance improvement, even in terms of big-O notation? Well, this optimization is described in JEP-180. Basically, when a bucket becomes too big (currently: TREEIFY_THRESHOLD = 8), HashMap dynamically replaces it with an ad-hoc implementation of a tree map. This way, rather than having a pessimistic O(n), we get a much better O(log n). How does it work? Well, previously entries with conflicting keys were simply appended to a linked list, which later had to be traversed. Now HashMap promotes the list into a binary tree, using the hash code as a branching variable. If two hashes are different but ended up in the same bucket, one is considered bigger and goes to the right. If hashes are equal (as in our case), HashMap hopes that the keys are Comparable, so that it can establish some order. This is not a requirement of HashMap keys, but apparently a good practice. If keys are not comparable, don't expect any performance improvements in case of heavy hash collisions. Why is all of this so important? Malicious software, aware of the hashing algorithm we use, might craft a couple of thousand requests that result in massive hash collisions. Repeatedly accessing such keys will significantly impact server performance, effectively resulting in a denial-of-service attack. In JDK 8, the amazing jump from O(n) to O(log n) will prevent this attack vector, also making performance a little bit more predictable. I hope this will finally convince your boss to upgrade.

*Benchmarks executed on an Intel Core i7-3635QM @ 2.4 GHz, 8 GiB of RAM and an SSD drive, running 64-bit Windows 8.1 with default JVM settings.

Reference: HashMap performance improvements in Java 8 from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog....

What Is A Unit Test?

What makes unit tests different from other tests? They are full of FAIL. Going by the Wikipedia definition, you'll get a vague and unhelpful description, which summarizes to: it tests a small piece of code. In what language? What is small? And why does that matter? I feel that many times in software, we'd rather concentrate on the mechanics than on the goal. For example, we talk about mocking (how we do it) while we actually want isolation (which is what we need for our tested code). First, let's take a look at our goals for a good unit test:

- Tell you quickly when there's a real problem
- Get you as quickly as possible from "Found a problem" to "Fixed a problem"

Let's take a closer look, using FAIL.

Functionality: A unit test is a sensor, telling us if formerly working functionality no longer works. While feedback is the requirement from every kind of test, the key thing is functionality, and in code terms, logic: if-thens, try-catches and workflows inside the code.

Accuracy: A unit test should fail for only two reasons: we broke something and should fix the code (a bug), or we broke something and should fix the test (a changed requirement). In both cases, we're going to do valuable work. When is it not valuable? Example: if the test checked internal implementation, and we changed the implementation but not the functionality, this does not count as a real problem. The code still does what it was meant to; functionality didn't change. But now we need to fix the test, which is waste. We don't like waste.

Instant: A unit test runs quickly. A suite of hundreds or thousands of unit tests runs in a few seconds or minutes. The success of applying a fine-grained sensor array relies on quickness at scale. This usually translates into the tested code being short and isolated.

Locator: When there's a problem, we need to fix it quickly. Part of that is testing a small amount of code. Then, there's more we can do in the test to help us solve the problem.
We need to think outside the context of writing the test, though. Someone else may break it, in a year or more, after we've moved companies twice. In other words, we're leaving a paper trail to locate the specific problem for someone else. To do that we use accurate naming, readable test code, testing a small portion of the code in a specific scenario, isolation from any undetermined or non-specific dependency, and lots of other tricks that will help our unfortunate developer achieve a quick fix. Notice that none of these attributes are about the experience of writing a test. It's about getting the most value out of it after it's there. The only value you get while writing a test is when the code is not there yet. That's right: in TDD. In that case, you get all of the above, plus insight about the design and safe incremental progress. All other kinds of tests, which are also valuable, don't have all these traits: integration tests don't locate the problematic code, UI tests are brittle, full system tests are slow, non-functional tests just give you feedback, and exploratory testing is not about functional correctness, but rather about business value (at least it should be). If your test passes the FAIL test, then it is a unit test.

Reference: What Is A Unit Test? from our JCG partner Gil Zilberfeld at the Geek Out of Water blog....
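As a minimal illustration of the FAIL attributes, here is a dependency-free sketch of such a test. Calculator is a hypothetical class under test, not from the article; the point is the shape: functionality checked (not implementation), instant to run, and a name that locates the failure.

```java
// A tiny, framework-free "unit test": checks logic, runs in microseconds,
// and its descriptive method name tells a future reader exactly what broke.
public class CalculatorTest {

    // Hypothetical class under test.
    static class Calculator {
        int add(int a, int b) { return a + b; }
    }

    // Accurate naming: one specific scenario, one reason to fail.
    static void addingTwoPositiveNumbersReturnsTheirSum() {
        Calculator calculator = new Calculator();
        int result = calculator.add(2, 3);
        if (result != 5) {
            throw new AssertionError("add(2, 3) should be 5 but was " + result);
        }
    }

    public static void main(String[] args) {
        addingTwoPositiveNumbersReturnsTheirSum();
        System.out.println("ok");
    }
}
```

If the addition logic ever regresses, the failure message and the test name alone are enough to locate the problem, with no debugging session required.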

The Top 10 Productivity Booster Techs for Programmers

This is the list we've all been waiting for: the top 10 productivity booster techs for programmers that, once you've started using them, you can never do without any longer. Here it is:

Git
Before, there were various version control systems. Better ones, worse ones. But somehow they all felt wrong in one way or another. Then along came Git (and GitHub, EGit). Once you're using this miraculous tool, it's hard to imagine that you'll ever meet a better VCS again. Never used Git? Get started with this guide.

Stack Overflow
No kidding. Have you ever googled for anything tech-related back in 2005? Or altavista'd something back in 2000? Or went to FidoNet in search of answers in 1995? It was horrible. The top results always consisted of boring forum discussions with lots of un-experts and script kiddies claiming wrong things. These forums still exist, but they don't turn up on page 1 of Google search results. Today, any time you search for something, you'll have 2-3 hits per top 10 from Stack Overflow. And chances are, you'll look no further, because those answers are 80% wonderful! That's partially because of Stack Overflow's cunning reputation system, and partially because of Stack Overflow's even more cunning SEO rewarding system. (I already got 98 announcer, 19 booster, and 5 publicist badges. Yay.) While Stack Overflow allows its more active users to pursue their vanity (see above!), all the other users without accounts will continue to flock in, finding perfect answers and clicking on very relevant ads. Thumbs up for Stack Overflow and their awesome business model.

Office 365
We're a small startup. Keeping costs low is of the essence. With Office 365, we only pay around $120 per user for a full-fledged Office 2013 suite, integrated with Microsoft OneDrive, SharePoint, Exchange, Access, and much more. In other words, we get enterprise-quality office software for the price of what students used to pay before.
And do note, Office 2013 is better than any Microsoft (or Libre) Office suite before it. While not a 100% programmer thing, it's still an awesome tool chain for a very competitive price.

IntelliJ
While Eclipse is great (and free), IntelliJ IDEA, and also phpStorm for those unfortunate enough to write PHP, are just subtly better in almost every aspect of an IDE. You can try the free Community edition any time, but beware: you probably won't switch back. And then you probably won't be able to evade the Ultimate edition for long!

PostgreSQL
PostgreSQL claims to be the world's most advanced open source database, and we think it's also one of the most elegant, easy, standards-compliant databases. It is really the one database that makes working with SQL fun. We believe that within a couple of years, there's a real chance of PostgreSQL beating commercial databases not only in terms of syntax but also in terms of performance. Any time you need a data storage system with a slight preference for SQL-based ones, just make PostgreSQL your default choice. You won't be missing any feature in that database. Let's hear it for PostgreSQL.

Java
Java is almost 20 years old, but it's still the #1 or #2 language on the TIOBE index (sharing ranks with C), for very good reasons:

- It's robust
- It's mature
- It works almost everywhere (really too bad it never succeeded in the browser)
- It runs on the best platform ever, the JVM
- It is open source
- It has millions of tools, libraries, extensions, and applications

While some languages may seem a bit more modern or sexy or geeky, Java has always ruled them all in terms of popularity, and it will continue to do so. It is a first choice, and with Java 8, things have improved even more.

jOOQ
Now, learning this from the jOOQ blog is really unexpected and a shocker, but we think that jOOQ fits right into this programmer's must-have top-10 tool chain.
Most jOOQ users out there have never gone back to pre-jOOQ tools, as they've found writing SQL in Java simpler than ever before. Given that we've had Java and PostgreSQL before, there's only this one missing piece gluing the two together in the most sophisticated way. And besides, no one wants to hack around with the JDBC API these days, do they?

Less CSS
When you try Less CSS for the first time, you'll think: why isn't CSS itself like this?! And you're right. It feels just like CSS the way it should always have been. All the things that you have always hated about CSS (repetitiveness, verbosity, complexity) are gone. And if you're using phpStorm or some other JetBrains product (see above), you don't even have to worry about compiling it to CSS. As an old HTML-table lover who doesn't care too much about HTML5, layout, and all that, using Less CSS makes me wonder if I should finally dare to create fancier websites! Never again without Less CSS.

jQuery
What Less CSS is for CSS, jQuery is for JavaScript. Heck, so many junior developers on Stack Overflow don't even realise that jQuery is just a JavaScript library. They think it is the language, because we've grown to use it all over the place. Yes, sometimes jQuery can be overkill, as is indicated by this slightly cynical website: http://vanilla-js.com But it helps so much, abstracting all the DOM manipulation in a very fluent way. If only all libraries were written this way. Do note that we've also published a similar library for Java, in case you're interested in jQuery-style DOM XML manipulation. Along with Java 8's new lambda expressions, manipulating the DOM becomes a piece of cake.

C8H10N4O2
C8H10N4O2 (more commonly known as caffeine) is probably the number one productivity booster for programmers. Some may claim that there's such a thing as the Ballmer Peak. That might be true, but the Caffeine Peak has been proven time and again.
Here is Dilbert's view on the matter: http://dilbert.com/strips/comic/2006-10-19/

More productivity boosters
We're certainly not the only ones who believe that there is such a thing as a programmer productivity booster. Enjoy this alternative list by Troy Topnik for more insight: http://www.activestate.com/blog/2010/03/top-ten-list-productivity-boosters-programmers

Reference: The Top 10 Productivity Booster Techs for Programmers from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog.

ActiveMQ – Network of Brokers Explained – Part 3

Now that we have understood the basics of the ActiveMQ network connector in part 1 and part 2 of this blog series, in this part 3 we will examine how ActiveMQ load-balances consumers which connect to a network of brokers.

Introduction
Concurrent consumers are used when messages in a queue can be processed out of order, usually to improve message throughput. The ActiveMQ broker dispatches messages in a round-robin fashion among the consumers, in order to load-balance message consumption across concurrent consumers, unless a consumer is specified as exclusive. Let's see the following example, where three consumers concurrently process messages from queue foo.bar. A producer enqueues 60 messages, which are processed by the three consumers (20 each) in round-robin fashion.

Start three concurrent consumers on queue foo.bar:

Ashwinis-MacBook-Pro:example akuntamukkala$ pwd
/Users/akuntamukkala/apache-activemq-5.8.0/example
Ashwinis-MacBook-Pro:example akuntamukkala$ ant consumer -Durl=tcp://localhost:61616 -Dtopic=false -Dsubject=foo.bar -DparallelThreads=3 -Dmax=20

Produce 60 messages:

Ashwinis-MacBook-Pro:example akuntamukkala$ ant producer -Durl=tcp://localhost:61616 -Dtopic=false -Dsubject=foo.bar -Dmax=60

The following screenshot shows 3 consumers processing messages from queue foo.bar. 60 messages were enqueued and dequeued. As shown below, 20 messages were processed by each of the consumers. The following excerpt from the log shows that messages are divvied out among the three consumers:

[Thread-3] Received: 'Message: 1 sent at: Tue Mar 04 13:46:53 IST 2014  ...' (length 1000)
[Thread-2] Received: 'Message: 0 sent at: Tue Mar 04 13:46:53 IST 2014  ...' (length 1000)
[Thread-1] Received: 'Message: 2 sent at: Tue Mar 04 13:46:53 IST 2014  ...' (length 1000)
[Thread-3] Received: 'Message: 4 sent at: Tue Mar 04 13:46:53 IST 2014  ...' (length 1000)
[Thread-2] Received: 'Message: 3 sent at: Tue Mar 04 13:46:53 IST 2014  ...' (length 1000)
[Thread-1] Received: 'Message: 5 sent at: Tue Mar 04 13:46:53 IST 2014  ...' (length 1000)
[Thread-3] Received: 'Message: 7 sent at: Tue Mar 04 13:46:53 IST 2014  ...' (length 1000)
[Thread-2] Received: 'Message: 6 sent at: Tue Mar 04 13:46:53 IST 2014  ...' (length 1000)
[Thread-1] Received: 'Message: 8 sent at: Tue Mar 04 13:46:53 IST 2014  ...' (length 1000)
[Thread-3] Received: 'Message: 10 sent at: Tue Mar 04 13:46:53 IST 2014 ...' (length 1000)

Now that we have seen how concurrent consumers work on a single broker, let's examine how they work when consumers are spread across a network of brokers.

Local vs. Remote Consumers
Let's explore how ActiveMQ handles local and remote consumers with the help of the configuration shown in the figure below. Consumer-1 and Consumer-2 consume messages from queue foo.bar on Broker-1 and Broker-2 respectively. Broker-1 establishes a network connector to Broker-2 to forward queue messages. The producer enqueues messages into queue foo.bar on Broker-1. Let's see this in action.

Edit Broker-1's configuration /Users/akuntamukkala/apache-activemq-5.8.0/bridge-demo/broker-1/conf/activemq.xml to open a network connector to Broker-2, then restart Broker-1 and Broker-2:

<networkConnectors>
  <networkConnector name="T:broker1->broker2"
                    uri="static:(tcp://localhost:61626)"
                    duplex="false"
                    decreaseNetworkConsumerPriority="false"
                    networkTTL="2"
                    dynamicOnly="true">
    <excludedDestinations>
      <queue physicalName=">" />
    </excludedDestinations>
  </networkConnector>
  <networkConnector name="Q:broker1->broker2"
                    uri="static:(tcp://localhost:61626)"
                    duplex="false"
                    decreaseNetworkConsumerPriority="false"
                    networkTTL="2"
                    dynamicOnly="true">
    <excludedDestinations>
      <topic physicalName=">" />
    </excludedDestinations>
  </networkConnector>
</networkConnectors>

Start the local consumer, Consumer-1:

Ashwinis-MacBook-Pro:example akuntamukkala$ ant consumer -Durl=tcp://localhost:61616 -Dtopic=false -Dsubject=foo.bar

Start the remote consumer, Consumer-2:

Ashwinis-MacBook-Pro:example akuntamukkala$ ant consumer -Durl=tcp://localhost:61626 -Dtopic=false -Dsubject=foo.bar

Start the producer on Broker-1 to enqueue 100 messages:

Ashwinis-MacBook-Pro:example akuntamukkala$ ant producer -Durl=tcp://localhost:61616 -Dtopic=false -Dsubject=foo.bar -Dmax=100

The screenshot shows Broker-1's queues. Let's look at the consumers to see how the messages have been divvied out. As you may notice, the ActiveMQ broker dispatches the messages equally between the local consumer and the remote consumer, giving them the same priority. The remote consumer, Consumer-2, is only 1 broker hop away, which is less than the configured networkTTL value of 2. This leads to suboptimal routes, especially when brokers are connected such that multiple routes are possible between producers and consumers. It is preferable to dispatch to local consumers over remote consumers, in order to ensure the shortest path between producers and consumers. ActiveMQ provides a way to configure the priority between local and remote consumers using the property decreaseNetworkConsumerPriority on the network connector. By default, this value is false, and hence the local and remote consumers were treated alike. If we repeat the above steps after changing to decreaseNetworkConsumerPriority="true", then we find that the local consumer, Consumer-1, is given preference over the remote consumer, Consumer-2, which is 1 broker hop away. ActiveMQ intelligently figures out the shortest path in a network of brokers between message producers and consumers. Please read the following link to gain further understanding of optimal routing by ActiveMQ: http://fusesource.com/docs/esb/4.3/amq_clustering/Networks-OptimizingRoutes.html

This concludes part 3 of this series, where we saw how to differentiate between local and remote consumers to help ActiveMQ determine the most optimal path between message producers and consumers. As always, your comments are very welcome.
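As a footnote, the round-robin dispatch behaviour described above can be mimicked with a few lines of plain Java. This is a toy model for illustration only, not ActiveMQ code: it simply shows how dealing 60 messages round-robin across three consumers yields 20 messages each, matching the screenshots.

```java
import java.util.Arrays;

public class RoundRobinDispatchDemo {
    public static void main(String[] args) {
        int consumerCount = 3;
        // counts[i] = number of messages dispatched to consumer i
        int[] counts = new int[consumerCount];
        // Deal 60 messages round-robin, as the broker does for non-exclusive consumers
        for (int message = 0; message < 60; message++) {
            counts[message % consumerCount]++;
        }
        System.out.println(Arrays.toString(counts)); // [20, 20, 20]
    }
}
```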
Stay tuned for part 4, where we will go over load balancing remote concurrent consumers.

Reference: ActiveMQ – Network of Brokers Explained – Part 3 from our JCG partner Ashwini Kuntamukkala at the Ashwini Kuntamukkala – Technology Enthusiast blog.

ActiveMQ – Network of Brokers Explained – Part 2

In this blog post we will see how duplex network connectors work. In part 1 we created a network connector from broker-1 to broker-2, and we saw how messages for queue foo.bar on broker-1 were forwarded to queue foo.bar on broker-2 when there was a consumer on broker-2 for queue foo.bar. Let's try the reverse: producing messages into broker-2's queue foo.bar and consuming from broker-1's queue foo.bar.

Ashwinis-MacBook-Pro:example akuntamukkala$ ant producer -Durl=tcp://localhost:61626 -Dtopic=false -Ddurable=true -Dsubject=foo.bar -Dmax=100
Ashwinis-MacBook-Pro:example akuntamukkala$ ant consumer -Durl=tcp://localhost:61616 -Dtopic=false -Dsubject=foo.bar

In the previous blog post, we had enqueued/dequeued 100 messages; hence the number of messages enqueued now shows as 200. As shown above, 100 new messages are enqueued on queue foo.bar on broker-2, but there are no consumers, even though there is a network connector for all queues from broker-1 to broker-2. The reason is that a network connector, unless specified as "duplex", is unidirectional from the source to the destination broker. Let's change the duplex attribute on the queue connector in the /Users/akuntamukkala/apache-activemq-5.8.0/bridge-demo/broker-1/conf/activemq.xml configuration file for broker-1:

<networkConnectors>
  <networkConnector name="T:broker1->broker2"
                    uri="static:(tcp://localhost:61626)"
                    duplex="false"
                    decreaseNetworkConsumerPriority="true"
                    networkTTL="2"
                    dynamicOnly="true">
    <excludedDestinations>
      <queue physicalName=">" />
    </excludedDestinations>
  </networkConnector>
  <networkConnector name="Q:broker1->broker2"
                    uri="static:(tcp://localhost:61626)"
                    duplex="true"
                    decreaseNetworkConsumerPriority="true"
                    networkTTL="2"
                    dynamicOnly="true">
    <excludedDestinations>
      <topic physicalName=">" />
    </excludedDestinations>
  </networkConnector>
</networkConnectors>

Let's restart the brokers and connect to them using jConsole. The broker-1 jConsole MBean tab screenshot shows the following:

- The Q:broker1->broker2 network connector is duplex.
- There is now a dynamic producer into broker-1 from broker-2, because the Q:broker1->broker2 network connector is duplex.

The broker-2 jConsole MBean tab screenshot shows the following:

- A duplex network connector from broker-2 to broker-1.
- Two dynamic message producers from broker-1 to broker-2.

Please note that the "Q:broker1->broker2" network connector shows as duplex, as configured in activemq.xml. Let's see this in action.

Produce 100 messages into broker-2:

Ashwinis-MacBook-Pro:example akuntamukkala$ ant producer -Durl=tcp://localhost:61626 -Dtopic=false -Ddurable=true -Dsubject=foo.bar -Dmax=100

Screenshot of queues in broker-2: http://localhost:9161/admin/queues.jsp

Create a consumer on foo.bar on broker-1:

Ashwinis-MacBook-Pro:example akuntamukkala$ ant consumer -Durl=tcp://localhost:61616 -Dtopic=false -Dsubject=foo.bar

The following screenshot from broker-2 shows that all 100 messages have been dequeued by a consumer (dynamically forwarded to broker-1): http://localhost:9161/admin/queues.jsp The next screenshot shows the details of this dynamic consumer on broker-2's foo.bar queue: http://localhost:9161/admin/queueConsumers.jsp?JMSDestination=foo.bar The final screenshot shows that the 100 messages which were dynamically moved from broker-2's foo.bar queue to broker-1's foo.bar queue have been successfully consumed by the consumer we created in step 2.

This concludes part 2 of this series, where we saw how duplex network connectors work. As always, your comments are very welcome. Stay tuned for part 3, where we will go over load balancing consumers on local/remote brokers.

Reference: ActiveMQ – Network of Brokers Explained – Part 2 from our JCG partner Ashwini Kuntamukkala at the Ashwini Kuntamukkala – Technology Enthusiast blog.

Structural contingency (part one)

Order, in mathematics, matters. Take two functions, f and g, and compose them, thus applying them to an argument as either f(g(x)) or g(f(x)). In general, it cannot be assumed that f(g(x)) = g(f(x)). If f(x) = 2x and g(x) = x^2, for example, then f(g(3)) = 18 but g(f(3)) = 36. The order in which the functions apply matters. Mathematics often supplying multiple ways of expressing an idea, this f(g(x)) can also be written using the compositional operator o, meaning "applied after", such that f(g(x)) = f o g (x). A function defined purely in terms of the composition of other functions is itself a composed function. In Java, a composed method might look like this:

public void recoverFromException() {
    View view = registry.get(View.class);
    view.endWaiting();
}

If we wish to view a method entirely from the perspective of how it orders the invoking of other methods then, stripping away the clutter of variable and assignment, we can "model" this method with an "order equation":

recoverFromException = endWaiting o get

Such a composition stems from the necessary relationship of the invocations involved, get() supplying the argument to endWaiting(). The explicitness of syntactic ordering demands that get() be invoked before endWaiting(), otherwise the program will not compile. But what of seemingly unrelated method invocations? Consider the following:

public void clearViewCache(View view) {
    view.clearPositionCache();
    view.clearImageCache();
}

This method expresses a weaker syntactic ordering requirement than the previous one, because it is not a composed method but is defined in terms of two method invocations, neither of which supplies a necessary argument to the other. A method whose invocation order lacks necessary explicitness is called a contingent method.
Unlike the previous method, which admitted no re-ordering, this contingent method apparently could have been written as:

public void clearViewCache(View view) {
    view.clearImageCache();
    view.clearPositionCache();
}

This, of course, may not be the case. Taking no arguments and supplying no return values, both method invocations wallow in side-effects, so the second invocation may harvest information altered by the first, such that a re-ordering will cause an error. Whether the apparent lack of necessary order implies a stealth order to be respected or an invitation to shuffle at will is moot; the point remains that the method itself offers no syntactic evidence either way, and in doing so suggests orderlessness where composition does not. This suggestion may be wrong, but it exists nonetheless. Contingency raises questions that composition avoids entirely. Programmers value intent. Kent Beck's Extreme Programming Explained, for example, claims that the very essence of simple code is that it "reveals its intent". Robert C. Martin goes further, asserting that, even more than mere programming, "Architecture is about intent." Increasing constraint generally correlates with increasing intent, because with increasing constraint comes a reduced number of alternatives, and thus fewer opportunities for ambiguity and misinterpretation. Given that composition constrains ordering more than contingency does, composed methods show more intent than contingent methods and so enjoy, on this narrow dimension alone, superiority. Some notation may help. Just as the o operator signifies composition, let us say that the \ operator signifies contingency. Thus the second snippet can be written as:

clearViewCache = clearPositionCache \ clearImageCache

And this, by definition, is equal to:

clearViewCache = clearImageCache \ clearPositionCache

How might a programmer use this triviality to clarify intent?
One way is to minimise the number of contingent methods and maximise the number of composed methods. Here, alas, the universe laughs at the programmer because of some unfortunate combinatorial rules; loosely:

Composition + composition = composition.
Contingency + contingency = contingency.
Composition + contingency = contingency.

That is, f o g o h constitutes a composed method, but f o (g \ h) is a contingent method even though it contains a compositional operator. Once blighted by order ambiguity, no method escapes contingency. Nevertheless, it seems a shame to cloak even partial intent under a cloth of confusion. A method, let us therefore further say, formed exclusively around either the composition or the contingency operator (for example, the invocation sequence k \ m \ p \ s, or the sequence k o m o p o s) we shall call an "order-simple" method, whereas one boasting both compositional and contingency operators we shall call "order-compound". Maximising intent thus becomes smashing order-compound methods into order-simple shards. Consider:

a = f \ (g o h)

If we introduce a composed method, j, such that j = g o h, then the above order-compound method reduces to the two order-simple methods:

a = f \ j
j = g o h

As figure 1 shows, this reduction comes at the cost of both extra methods and depth, but whereas the original method's contingency muddled its overall invocation order, at least one of the new methods explicitly intends the order it articulates. Part of the program has been saved from contingency even if the overall contingency remains. No guarantee exists, sadly, that an order-compound method may decompose into a set of order-simple methods. The invocation of a multi-argument method inherently involves contingency, as methods do not dictate the order of evaluation of their arguments. (Indeed, the \ operator is little more than the comma separating the arguments of a function.) Two questions then arise.
Given a successful reduction, is the set of order-simple methods to which an order-compound method decomposes unique?

Properties
Some properties of contingency:

1. f o (g \ h) = f(g, h)
2. Commutativity: f \ g = g \ f
3. Associativity: (f \ g) \ h = f \ (g \ h)
4. Composition right-distributes over contingency: (f \ g) o h = (f o h) \ (g o h)

Others exist, to be investigated in a later post.

The composed order-simple
Let us examine some real production code. FitNesse supplies all the following snippets, the first being from class PageDriver:

private NodeList getMatchingTags(NodeFilter filter) throws Exception {
    String html = examiner.html();
    Parser parser = new Parser(new Lexer(new Page(html)));
    NodeList list = parser.parse(null);
    NodeList matches = list.extractAllNodesThatMatch(filter, true);
    return matches;
}

Modeling this as its essential method invocations yields the equation:

getMatchingTags = extractAllNodesThatMatch o parse o new Parser o new Lexer o new Page o html

This presents a perfect, order-simple, composed method, with each method invocation depending on the previous one. In expressing an order that brooks no tampering, the programmer helps clarify the method's intent by unburdening the reader's mind of potential alternatives. Being order-simple, PageDriver offers no order-compoundness to reduce.

The contingent order-simple

public void close() throws IOException {
    super.close();
    removeStopTestLink();
    publishAndAddLog();
    maybeMakeErrorNavigatorVisible();
    finishWritingOutput();
}

This method within class TestHtmlFormatter presents order-simple contingency, whose equation is:

close = close \ removeStopTestLink \ publishAndAddLog \ maybeMakeErrorNavigatorVisible \ finishWritingOutput

Again, being order-simple, TestHtmlFormatter stands irreducible but, unlike the previous example, this hardly speaks well of the method's design. The programmer's intent has left no trace on the syntax employed, dissolving entirely into hazy semantics.
Still, at least the method is order-simple rather than order-compound: concentrating contingency into such pure order-implicitness at least singles out targets for the intent-revealing refactorings to which order-compound reductions point. To repeat, order-compound reductions do not reduce overall system contingency per se: they refactor the method under study but not those invoked by the method under study. Instead, reductions act as refineries, pumping the sludge of contingency into metal barrels for later detoxification.

The order-compound
The MultiUserAuthenticator constructor presents a minimalist example of order-compoundness:

public MultiUserAuthenticator(String passwdFile) throws IOException {
    PasswordFile passwords = new PasswordFile(passwdFile);
    users = passwords.getPasswordMap();
    cipher = passwords.getCipher();
}

Clearly an order-compound contingent method, this can be modeled by:

MultiUserAuthenticator = (getPasswordMap \ getCipher) o new PasswordFile

Performing an order-compound reduction involves refactoring a method to minimise the size of the order-compound methods of which it is constructed. The most obvious reduction to perform on the above extracts a new method f = getPasswordMap \ getCipher, thus producing two equations:

MultiUserAuthenticator = f o new PasswordFile
f = getPasswordMap \ getCipher

This eliminates all order-compoundness, yielding a composed method and an order-simple contingent method, hence completing the reduction. The resulting source code will look something like:

public MultiUserAuthenticator(String passwdFile) throws IOException {
    PasswordFile passwords = new PasswordFile(passwdFile);
    f(passwords);
}

private void f(PasswordFile passwords) throws IOException {
    users = passwords.getPasswordMap();
    cipher = passwords.getCipher();
}

(A better name for method f() possibly exists.) Again, this reduction incurs the cost of increased depth. Other reductions present themselves, however.
Given rule (4) above, the original model of the constructor could have been written as:

MultiUserAuthenticator = (getPasswordMap o new PasswordFile) \ (getCipher o new PasswordFile)

This reduces to three order-simple equations:

MultiUserAuthenticator = f \ g
f = getPasswordMap o new PasswordFile
g = getCipher o new PasswordFile

Such a formulation, however, suffers from many drawbacks, such as calling the PasswordFile constructor twice; if this were a computationally expensive piece of code, then performance alone might render this second formulation untenable. The reduction nevertheless answers our question. Is the set of order-simple methods to which an order-compound method decomposes unique? No.

The normalization problem
The processTestFile() method of PageHistory illuminates some interesting difficulties:

void processTestFile(TestResultRecord record) throws ParseException {
    Date date = record.getDate();
    addTestResult(record, date);
    countResult(record);
    setMinMaxDate(date);
    setMaxAssertions(record);
    pageFiles.put(date, record.getFile());
}

The previous example introduced duplication of method invocation as a means to explore a different order arrangement. The order equation for processTestFile(), however, cannot be expressed without duplication. In the minimum case, it seems, getDate() appears twice:

processTestFile = put o (getDate \ getFile) \ setMaxAssertions \ (setMinMaxDate o getDate) \ countResult \ (addTestResult o getDate)
                = put o (getDate \ getFile) \ setMaxAssertions \ countResult \ ((setMinMaxDate \ addTestResult) o getDate)

Where a one-to-one correspondence exists between a method invocation in source code and its appearance in an order equation, we say that the method under study is in normal form. This processTestFile() is not in normal form and so, while still reducible, the performance cost of its duplicated invocations may prove impractical.
For completeness's sake, if we introduce a = put o (getDate \ getFile) and b = setMinMaxDate \ addTestResult, then we get:

processTestFile = a \ setMaxAssertions \ countResult \ (b o getDate)

Finally, introducing c = b o getDate, we get:

processTestFile = a \ setMaxAssertions \ countResult \ c

This is now an order-simple contingent method, with the remaining order-compoundness sequestered into the a() method, though with getDate() called from both a() and c().

Summary
Programmers view absolutes with suspicion. The weakest of the Tulegatan principles, the principle of contingency, barely holds its head above the choppy waters of style guidelines. This perhaps does not surprise, given the target of its attentions: not concrete ripple effect but amorphous intent. The principle states that contingency should be minimised, advocating not the alarmist eradication of the semantic ordering of method invocation but the bolstering of semantic ordering with syntactic constraint. Curiously, despite its weakness, the principle of contingency leads to marked code structure. While the novice struggles to balance order-simplicity with increased code depth, the warrior versed in the blade of order-compound reduction produces refactorings of rare acuity. Martin's SOLID principles, developed for classes, often find application at method level, with many programmers calling for the method, too, to adhere to the principle of single responsibility. "Single responsibility", of course, means different things to different people, but some would argue that a composed method adheres to this principle more than an order-simple contingent method does, and both vastly out-adhere order-compound methods. As such, contingency provides an objective gradient along which this subjective principle might be evaluated.
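As an aside, the compositional operator o used throughout maps directly onto Java 8's Function.compose, and the article's opening f and g example can be checked mechanically. A minimal sketch:

```java
import java.util.function.Function;

public class CompositionDemo {
    public static void main(String[] args) {
        Function<Integer, Integer> f = x -> 2 * x; // f(x) = 2x
        Function<Integer, Integer> g = x -> x * x; // g(x) = x^2
        // compose means "applied after": f.compose(g) is f o g
        System.out.println(f.compose(g).apply(3)); // f(g(3)) = 18
        System.out.println(g.compose(f).apply(3)); // g(f(3)) = 36
    }
}
```

Composition in code, as in mathematics, does not commute; contingency, as property (2) states, does.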
The next part of this post will investigate fallen eliminations, naturally arising order-layering, and compaction.

Reference: Structural contingency (part one) from our JCG partner Edmund Kirwan at the "A blog about software" blog.
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.