Modeling Mongo Documents With Mongoose

Without a doubt, one of the quickest ways to build an application that leverages MongoDB is with Node. It’s as if the two platforms were made for each other; the sheer number of Node libraries available for dealing with Mongo is testimony to a vibrant, innovative community. Indeed, one of my favorite Mongo-focused libraries these days is Mongoose. Briefly, Mongoose is an object modeling framework that makes it incredibly easy to model collections and ultimately work with intuitive objects that support a rich feature set. Like most things in Node, it couldn’t be any easier to get set up. Essentially, to use Mongoose, you’ll need to define Schema objects – these are your documents – either top level or even embedded. For example, I’ve defined a words collection that contains documents (representing… words) that each contain an embedded collection of definition documents. A sample document looks like this:

{
  _id: '4fd7c7ac8b5b27f21b000001',
  spelling: 'drivel',
  synonyms: ['garbage', 'dribble', 'drool'],
  definitions: [
    { part_of_speech: 'noun',
      definition: 'saliva flowing from the mouth, or mucus from the nose; slaver.' },
    { part_of_speech: 'noun',
      definition: 'childish, silly, or meaningless talk or thinking; nonsense; twaddle.' }
  ]
}

From a document modeling standpoint, I’d like to work with a Word object that contains a list of Definition objects and a number of related attributes (e.g. synonyms, parts of speech, etc.). To model this relationship with Mongoose, I’ll need to define two Schema types, and I’ll start with the simplest:

Definition = mongoose.model 'definition', new mongoose.Schema({
  part_of_speech : {
    type: String,
    required: true,
    trim: true,
    enum: ['adjective', 'noun', 'verb', 'adverb']
  },
  definition : { type: String, required: true, trim: true }
})

As you can see, a Definition is simple – the part_of_speech attribute is an enumerated String that’s required; what’s more, the definition attribute is also a required String.
Next, I’ll define a Word:

Word = mongoose.model 'word', new mongoose.Schema({
  spelling : { type: String, required: true, trim: true, lowercase: true, unique: true },
  definitions : [Definition.schema],
  synonyms : [{ type: String, trim: true, lowercase: true }]
})

As you can see, a Word instance embeds a collection of Definitions. Here I’m also demonstrating the usage of lowercase and the unique index placed on the spelling attribute. Creating a Word instance and saving the corresponding document couldn’t be easier. Mongo arrays leverage the push command, and Mongoose follows this pattern to a tee.

word = new models.Word({spelling : 'loquacious'})
word.synonyms.push 'verbose'
word.definitions.push {
  definition: 'talking or tending to talk much or freely; talkative; chattering; babbling; garrulous.',
  part_of_speech: 'adjective'
}
word.save (err, data) ->

Finding a word is easy too:

it 'findOne should return one', (done) ->
  models.Word.findOne spelling:'nefarious', (err, document) ->
    document.spelling.should.eql 'nefarious'
    document.definitions.length.should.eql 1
    document.synonyms.length.should.eql 2
    document.definitions[0]['part_of_speech'].should.eql 'adjective'
    done(err)

In this case, the above code is a Mocha test case (which uses should for assertions) that demonstrates Mongoose’s findOne. You can find the code for these examples and more at my GitHub repo dubbed Exegesis, and while you’re at it, check out the developerWorks videos I did for Node!

Reference: Modeling Mongo Documents With Mongoose from our JCG partner Andrew Glover at The Disco Blog.
Don’t Use JSON And XML As Internal Transfer Formats

You have a system that has multiple components and they have to communicate. They do that either via internal web services or using a message queue. Normally, you would want to send (data transfer) objects from one component to another. Three typical examples:

- A user has registered; you send a message to a message queue, and whenever the message is consumed, an email is sent to the user. The message needs to carry at least the email address and names of the user.
- If your layers communicate via web services for some reason (rather than living within one JVM), on registration the web layer needs to invoke a back-end service and pass a User object.
- You store objects in some (distributed) in-memory cache in order to reduce redundant calls to the database (assuming you map your database results to objects in some way, either with an ORM or some mapper, as is done in the majority of cases). So when a request arrives asking for a user profile, you check if it’s present in the cache, and if it is, you get it from there rather than hitting the database.

In order to achieve these things you need to serialize the objects to some format that will then be deserialized on the other end. Many frameworks include XML and JSON serializers, and they are used in many examples online, so people are inclined to use JSON or XML for these purposes. And that’s not a good idea. Using these formats internally has no benefit – you don’t actually need the serialized objects to be human-readable, and if you do need to read the message contents, you have the facilities to deserialize them and print them to a log file. But there are major drawbacks: speed and size. Both formats are text-based (so that they can be human-readable), which means they are unnecessarily verbose. Yes, JSON is less verbose than XML, but it’s still a text format that you don’t need. Instead, in most cases you’d better use binary serialization. Almost any binary serialization is better.
I have evaluated a couple, and the ease of use plus the speed and size benefits made me choose MessagePack. But you can also use protobuf, BSON, Avro, or whatever fits your project. Yes, I know, I also said “this is probably a micro-optimization”. And then I ran some benchmarks on our messages to see the time and size saved. I don’t remember the exact figures, but MessagePack was a lot faster and had a much smaller message size, and seeing the results made me go straight into coding a MessagePackConverter to replace the JSONConverter. It is a pretty small change for the huge impact it has on the whole system. And given the high volume of messages that our system needs to serialize and deserialize, spending one day on integrating MessagePack is totally worth it – after all, it would allow you to process or store (say) twice as many messages with the same hardware (compared to JSON). There are, of course, some things to consider, like versioning of the objects (if you add a field, does the deserialization of old messages break? In MessagePack it does if the field is primitive, so you need a custom template to handle that) or, if you are in a multi-language environment, whether the deserialization library is supported by all languages. Also, you usually have to let the serializer know the structure of your objects in advance, so there’s some additional code/annotations to populate the serializer context. But all of these are included in the “one day” mentioned above that I spent integrating MessagePack. And it is probably a good idea to mention that if you are exposing an API to 3rd parties, you can’t rely on these serializers – your API should be JSON/XML, because there it needs to be human-readable and supported in every language. But unless you totally don’t care about your resources (probably because it’s a system with little usage), seriously consider a binary serialization mechanism for your internal messaging, APIs, caching, etc.
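To make the size argument concrete, here is a minimal sketch using only the JDK – this is a hand-rolled, length-prefixed binary layout for illustration, not MessagePack itself, and the user-registration message fields (id, email, name) are hypothetical:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;

public class BinaryVsJson {

    // The JSON text form of a hypothetical user-registration message.
    static byte[] asJson(long id, String email, String name) {
        String json = String.format(
            "{\"id\":%d,\"email\":\"%s\",\"name\":\"%s\"}", id, email, name);
        return json.getBytes(StandardCharsets.UTF_8);
    }

    // A simple binary layout: 8-byte id, then length-prefixed UTF-8 strings.
    static byte[] asBinary(long id, String email, String name) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bytes);
            out.writeLong(id);
            out.writeUTF(email);  // 2-byte length prefix + UTF-8 payload
            out.writeUTF(name);
            return bytes.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);  // cannot happen for in-memory streams
        }
    }

    public static void main(String[] args) {
        byte[] json = asJson(42L, "user@example.com", "John Doe");
        byte[] binary = asBinary(42L, "user@example.com", "John Doe");
        System.out.println("JSON:   " + json.length + " bytes");
        System.out.println("binary: " + binary.length + " bytes");
    }
}
```

Even this toy encoding is noticeably smaller, simply because it drops the field names, quotes and braces that make the text form verbose. A real serializer like MessagePack automates exactly this kind of compact layout and additionally handles nesting, arrays and versioning.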
Reference: Don’t Use JSON And XML As Internal Transfer Formats from our JCG partner Bozhidar Bozhanov at Bozho’s tech blog.
Do you really get your IDE?

This is a bit of a philosophical post – just some thoughts regarding our perception of developer tooling. First, a question: which IDE do you use? Eclipse? NetBeans? IntelliJ IDEA? Visual Studio? Vim? Emacs? Yeah, really – Vim counts as an IDE as long as you can configure it to behave like one. Or maybe Sublime Text, a really awesome text editor? Second question: do you leverage the full power of your favourite IDE/editor? Mostly, people will say that they just use the IDE, and I haven’t really seen many people leveraging the power of refactorings, shortcuts, and the other awesome IDE features that are out there. Why? Lazy? Careless? Neglectful? I was reading a nice theory article, The IDE Divide, which is already 8 years old. Its point is that there are two extremes among developers: language mavens and tooling mavens. Language mavens are those who care about the deepest nuances of programming languages and don’t really want to rely on tools (or just don’t have time to explore the features of the tools). Tooling mavens are the ones who are obsessed with learning (and creating?) the tools and don’t spend as much time discovering the mysteries of language features. The article also mentions that it is enormously hard to be both a language maven and a tooling maven at the same time, since the time for learning all this stuff is limited. Generally, I think knowing the properties of the language and runtime is more important, as it is the produced code that will eventually run on the system; the tools are just used to create the programs. Despite that, I rather consider myself a ‘tooling maven’ type of developer. Not that I don’t care about the languages, no – it is just that my interest shifts towards tools. I have also noticed something (or it is just my perception): when you start out as a new-born programmer, you will first try to reach a comfortable level of using the language.
Once you’re successful, there comes a time for you to write programs more effectively – faster, using shortcuts… click, click, click. This is where the tooling kicks in. Eventually, you start to appreciate those nice features of your IDE that help you write code more effectively. The next stage is when you realize that you actually do not write code as much as you read it, and then you start to appreciate the features that help you navigate the code, analyze it, maybe refactor it. The language becomes a bit unimportant. I was chatting with Jacek Laskowski one day and he asked an interesting question: ‘If you were given awesome tools/components/frameworks to work with, would you really care which programming language to use?’ A really good question. I wouldn’t care, I guess. You will learn the language anyway. Or you will learn the tooling anyway once you’re comfortable with the language of your choice, because normally you would like to be more effective (this is my perception of curious programmers, and I hope you are a curious programmer). What do you feel when a colleague next to you moves around the project like a pro, finds everything he needs in fractions of a second, and types with shortcuts, creating new statements with just a few strokes? And then you try to type: ‘p’ ‘u’ ‘b’ ‘l’ ‘i’ ‘c’ ‘_’ ‘s’ ‘t’ ‘a’ ‘t’ ‘i’ ‘c’ ‘_’ ‘v’ ‘o’ ‘i’ ‘d’ ‘_’ ‘m’ ‘a’ ‘i’ ‘l’ [oooops! a typo!]. Frustrating… Every so often I had to keep myself from screaming ‘just Ctrl+Shift+E!!!’ while a team mate was looking for a class in the project tree whose name he did not remember. Modern IDEs have revolutionized the way in which we are able to work with code. Sadly, most programmers are held back by the mysterious myth that if you learn the tools too well, you’re doomed as a programmer because you start depending on those tools. Don’t be held back by such fears! Go learn some tooling instead – it will save you some time later!
Reference: Do you really get your IDE? from our JCG partner Anton Arhipov at the Code Impossible blog.
Performance Analysis of REST/HTTP Services with JMeter and Yourkit

My last post described how to accomplish stress or load testing of asynchronous REST/HTTP services with JMeter. However, running such tests often reveals that the system under test does not deal well with increasing load. The question now is how to find the bottleneck. Having an in-depth look at the code to detect suspicious parts could be one approach. But considering the potentially huge codebase, and therefore the multitude of places for the bottleneck to hide [1], this might not look too promising. Fortunately there are tools available that provide efficient analysis capabilities on the basis of telemetry [2]. Recording and examining such measurements is commonly called profiling, and this post gives a little introduction of how to do this using Yourkit [3]. First of all we launch our SUT (System Under Test) and use JMeter to build up system load. To do so, JMeter may execute a test scenario that simulates multiple users sending a lot of requests to the SUT. The test scenario is defined in a test plan. The latter may contain listeners that capture the execution time of requests and provide statistics like maximum/minimum/average request duration, deviation, throughput and so on. This is how we detect that our system does not scale well… After these findings we enable Yourkit to retrieve telemetry. To do so, the VM of the SUT is started with a special profiler agent. The profiler tool provides several views that allow live inspection of CPU utilization, memory consumption and so on. But for a thorough analysis of, e.g., the performance of the SUT under load, Yourkit needs to capture the CPU information provided by the agent via so-called snapshots. It is advisable to run the SUT, JMeter and Yourkit on separate machines to avoid falsification of the test results. Running, e.g., the SUT and JMeter on the same machine could reduce throughput, since JMeter threads may consume a lot of the available computation time.
With this setup in mind, we run through a little example of a profiling session. The following code snippet is an excerpt of a JAX-RS based service [4] we use as SUT:

@Path( "/resources/{id}" )
public class ExampleResourceProvider {

  private List<ExampleResource> resources;

  [...]

  @Override
  @GET
  @Produces( MediaType.TEXT_PLAIN )
  public String getContent( @PathParam( "id" ) String id ) {
    ExampleResource found = NOT_FOUND;
    for( ExampleResource resource : resources ) {
      if( resource.getId().equals( id ) ) {
        found = resource;
      }
    }
    return found.getMessage();
  }
}

The service performs a lookup in a list of ExampleResource instances. An ExampleResource object simply maps an identifier to a message represented as a String. The message found for a given identifier is returned. As the service is called with GET requests, you can test the outcome with a browser. For demonstration purposes, the glue code of the service initializes the list with 500000 elements in an unordered way. Once we have the SUT running, we can put it under load using JMeter. The test plan performs about 100 concurrent requests at a time. As shown in the picture below, an average request execution takes about 1 second. The CPU telemetry recorded by Yourkit during the JMeter test plan execution reveals the reason for the long request execution times. Selecting the Hot spots tab of the profiled snapshot shows that about 72% of the CPU utilization was consumed by list iteration. Looking at the Back Traces view, which lists the caller tree of the selected hot spot method, we discover that our example service method causes the list iteration. Because of this, we change the service implementation in the next step to use a binary search on a sorted list for the ExampleResource lookup.
@Override
@GET
@Produces( MediaType.TEXT_PLAIN )
public String getContent( @PathParam( "id" ) String id ) {
  ExampleResource key = new ExampleResource( id, null );
  int position = Collections.binarySearch( resources, key );
  return resources.get( position ).getMessage();
}

After that we re-run the JMeter test plan. The average request now takes about 3 ms, which is quite an improvement. And a look at the Hot spots of the corresponding CPU profiling session confirms that the bottleneck caused by our method has vanished. Admittedly, the problem in the example above seems very obvious. But we found a very similar one in our production code, hidden in the depths of the system (shame on me…). It is important to note that the problem did not become obvious before we started our stress and load tests [5]. I guess we would have spent a lot of time examining the code base manually before – if ever – finding the cause. However, the profiling session pointed us directly to the root of all evil. And, as most often, the actual problem was not difficult to solve. So profiling can help you handle some of your work more efficiently. At least it does for me – and by the way – it is a lot of fun too.

[1] Note that the code which causes the bottleneck could belong to third-party libraries as well.
[2] As I am doing such an analysis right now in a customer project, I came up with the idea to write this post.
[3] I am not doing any tool adverts or ratings here – I simply use tools that I am familiar with to give a reproducible example of a more abstract concept. There is a good chance that there are better tools on the market for your needs.
[4] Note that the sole purpose of the code snippets in this post is to serve as an example of how to find and resolve a performance bottleneck. The snippets are poorly written and should not be reused in any way!
[5] From my experience it is quite common that a newly created code base contains some of those nuggets. So having such tests is a must in order to find performance problems before the customer finds them in production…

Reference: Performance Analysis of REST/HTTP Services with JMeter and Yourkit from our JCG partner Frank Appel at the Code Affine blog.
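Note that Collections.binarySearch only works when the list is sorted and the elements are mutually comparable. A minimal sketch of what that implies – the Resource class below is a hypothetical stand-in for the article's ExampleResource, not the author's actual code:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical stand-in for the article's ExampleResource: maps an id to a message.
class Resource implements Comparable<Resource> {
    private final String id;
    private final String message;

    Resource(String id, String message) {
        this.id = id;
        this.message = message;
    }

    String getId() { return id; }
    String getMessage() { return message; }

    // binarySearch compares by id, so the list must be sorted by id as well.
    @Override
    public int compareTo(Resource other) {
        return id.compareTo(other.id);
    }
}

public class ResourceLookup {
    static String lookup(List<Resource> sorted, String id) {
        // The message field of the search key is irrelevant for comparison.
        int position = Collections.binarySearch(sorted, new Resource(id, null));
        // binarySearch returns a negative value when the id is absent.
        return position >= 0 ? sorted.get(position).getMessage() : null;
    }

    public static void main(String[] args) {
        List<Resource> resources = new ArrayList<>();
        for (int i = 0; i < 500; i++) {
            resources.add(new Resource("id-" + i, "message " + i));
        }
        Collections.sort(resources);  // precondition for binarySearch
        System.out.println(lookup(resources, "id-42"));
    }
}
```

This also shows why a guard for the negative return value matters: the article's snippet indexes the list directly with the result, which throws if an unknown id is requested.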
MongoDB From the Trenches: Prudent Production Planning

While starting out with MongoDB is super easy, there are a few things you should keep in mind as you move from a development environment into a production one. No one wants to get paged at 3am because a customer can’t complete an order on your awesome e-commerce site because your database isn’t responding fast enough or, worse, is down. Planning for a production deployment with MongoDB isn’t rocket science, but I must warn you, it’ll cost money, especially if your application actually gets used a lot, which is every developer’s dream. Therefore, like all databases, you need to plan for high availability, and you’ll want the maximum performance benefits you can get for your money in a production environment. First and foremost, Mongo likes memory; that is, frequently accessed data is stored directly in memory; moreover, writes are also stored in memory until being flushed to disk. It’s imperative that you provide enough memory for Mongo to store a valid working dataset; otherwise, Mongo will have to go to the disk to retrieve what should be fast lookups via indexed data. This is sloooooow. Therefore, a good rule of thumb is to plan to run your Mongo instances with as much memory as you can afford. You can get an idea of your working data set by running Mongostat – this is a handy command line utility that’ll give you a second-by-second view into what Mongo is up to. One particular metric you’ll see is resident memory (labeled res) – this will give you a good idea of how much memory Mongo is using at any given moment. If this number exceeds what you have available on a given machine, then Mongo is having to go to disk, which is going to be a lot slower. Not all data can be stored in memory; every document in Mongo is eventually written to disk. And as always, I/O is a slow operation compared to working with memory.
This is why, for example, writes in Mongo can be so fast – drivers allow you to, essentially, fire and forget, and the actual write to disk is done later, asynchronously. Reads can also incur an I/O penalty when something requested isn’t in working memory. Thus, for high-performance reads and writes, pay attention to the underlying disks. A key metric here is IOPS, or input/output operations per second. Mongo will be extremely happy, for example, in an SSD environment, provided you can afford it. Just take a look at various IOPS comparisons between SSDs and traditional spinning disks – super fast high-RPM disks can achieve IOPS in the 200 range, while typical SSD drives attain wild numbers, orders of magnitude higher (like in the 100s of thousands of IOPS). It’s crazy how fast SSDs are compared to traditional hard drives. RAM is still faster than SSDs, though, so you’ll still want to understand your working set of data and ensure you have plenty of memory to contain it. Finally, for maximum availability, you really should be using Mongo’s replica sets. Setting up a cluster of Mongo instances is so incredibly easy that there really isn’t a good reason not to do it. The benefits of doing so are manifold, including:

- data redundancy
- high availability via automated failover
- disaster recovery

Plus, running a replica set makes maintenance so much easier, as you can bring nodes offline and online without an interruption of service. And you can run the nodes in a replica set on commodity hardware (don’t forget my points regarding memory and I/O, though). Accordingly, when looking to move Mongo into a production environment, you need to consider memory, I/O performance, and replica sets. Running a high-performance, highly available, replicated Mongo will, not surprisingly, cost you. If you’re looking for options for running Mongo in a production environment, I can’t recommend the team at MongoHQ enough. I’m a huge fan of Mongo.
Check out some of the articles, videos, and podcasts that I’ve done, which focus on Mongo, including:

- Java development 2.0: MongoDB: A NoSQL datastore with (all the right) RDBMS moves
- Video demo: An introduction to MongoDB
- Eliot Horowitz on MongoDB
- 10gen’s Steve Francia talks MongoDB

Reference: MongoDB From the Trenches: Prudent Production Planning from our JCG partner Andrew Glover at The Disco Blog.
Investigating Deadlocks – Part 3

In my previous two blogs in this series, part 1 and part 2, I’ve demonstrated how to create a piece of bad code that deadlocks and then used this code to show three ways of taking a thread dump. In this blog I’m going to analyze the thread dump to figure out what went wrong. The discussion below refers to both the Account and DeadlockDemo classes from part 1 of this series, which contains full code listings. The first thing that I need is a thread dump from the DeadlockDemo application, so, as they used to say on Blue Peter, “here’s one I prepared earlier”:

2012-10-16 13:37:03
Full thread dump Java HotSpot(TM) 64-Bit Server VM (20.10-b01-428 mixed mode):

"DestroyJavaVM" prio=5 tid=7f9712001000 nid=0x110247000 waiting on condition [00000000]
   java.lang.Thread.State: RUNNABLE

"Thread-21" prio=5 tid=7f9712944000 nid=0x118d76000 waiting for monitor entry [118d75000]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.transfer(DeadlockDemo.java:86)
        - waiting to lock <7f3366f58> (a threads.deadlock.Account)
        - locked <7f3366ee0> (a threads.deadlock.Account)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.run(DeadlockDemo.java:59)

"Thread-20" prio=5 tid=7f971216c000 nid=0x118c73000 waiting for monitor entry [118c72000]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.transfer(DeadlockDemo.java:86)
        - waiting to lock <7f3366e98> (a threads.deadlock.Account)
        - locked <7f3366f58> (a threads.deadlock.Account)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.run(DeadlockDemo.java:59)

"Thread-19" prio=5 tid=7f9712943800 nid=0x118b70000 waiting for monitor entry [118b6f000]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.transfer(DeadlockDemo.java:81)
        - waiting to lock <7f3366f40> (a threads.deadlock.Account)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.run(DeadlockDemo.java:59)

"Thread-18" prio=5 tid=7f9712942800 nid=0x118a6d000 waiting for monitor entry [118a6c000]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.transfer(DeadlockDemo.java:81)
        - waiting to lock <7f3366f40> (a threads.deadlock.Account)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.run(DeadlockDemo.java:59)

"Thread-17" prio=5 tid=7f9712942000 nid=0x11896a000 waiting for monitor entry [118969000]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.transfer(DeadlockDemo.java:81)
        - waiting to lock <7f3366ec8> (a threads.deadlock.Account)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.run(DeadlockDemo.java:59)

"Thread-16" prio=5 tid=7f9712941000 nid=0x118867000 waiting for monitor entry [118866000]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.transfer(DeadlockDemo.java:81)
        - waiting to lock <7f3366ec8> (a threads.deadlock.Account)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.run(DeadlockDemo.java:59)

"Thread-15" prio=5 tid=7f9712940800 nid=0x118764000 waiting for monitor entry [118763000]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.transfer(DeadlockDemo.java:81)
        - waiting to lock <7f3366ef8> (a threads.deadlock.Account)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.run(DeadlockDemo.java:59)

"Thread-14" prio=5 tid=7f971293f800 nid=0x118661000 waiting for monitor entry [118660000]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.transfer(DeadlockDemo.java:81)
        - waiting to lock <7f3366f28> (a threads.deadlock.Account)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.run(DeadlockDemo.java:59)

"Thread-13" prio=5 tid=7f97129ae000 nid=0x11855e000 waiting for monitor entry [11855d000]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.transfer(DeadlockDemo.java:81)
        - waiting to lock <7f3366eb0> (a threads.deadlock.Account)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.run(DeadlockDemo.java:59)

"Thread-12" prio=5 tid=7f97129ad000 nid=0x11845b000 waiting for monitor entry [11845a000]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.transfer(DeadlockDemo.java:81)
        - waiting to lock <7f3366f40> (a threads.deadlock.Account)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.run(DeadlockDemo.java:59)

"Thread-11" prio=5 tid=7f97129ac800 nid=0x118358000 waiting for monitor entry [118357000]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.transfer(DeadlockDemo.java:86)
        - waiting to lock <7f3366f58> (a threads.deadlock.Account)
        - locked <7f3366eb0> (a threads.deadlock.Account)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.run(DeadlockDemo.java:59)

"Thread-10" prio=5 tid=7f97129ab800 nid=0x118255000 waiting for monitor entry [118254000]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.transfer(DeadlockDemo.java:81)
        - waiting to lock <7f3366eb0> (a threads.deadlock.Account)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.run(DeadlockDemo.java:59)

"Thread-9" prio=5 tid=7f97129ab000 nid=0x118152000 waiting for monitor entry [118151000]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.transfer(DeadlockDemo.java:86)
        - waiting to lock <7f3366e98> (a threads.deadlock.Account)
        - locked <7f3366ec8> (a threads.deadlock.Account)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.run(DeadlockDemo.java:59)

"Thread-8" prio=5 tid=7f97129aa000 nid=0x11804f000 waiting for monitor entry [11804e000]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.transfer(DeadlockDemo.java:86)
        - waiting to lock <7f3366eb0> (a threads.deadlock.Account)
        - locked <7f3366f28> (a threads.deadlock.Account)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.run(DeadlockDemo.java:59)

"Thread-7" prio=5 tid=7f97129a9800 nid=0x117f4c000 waiting for monitor entry [117f4b000]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.transfer(DeadlockDemo.java:86)
        - waiting to lock <7f3366eb0> (a threads.deadlock.Account)
        - locked <7f3366e80> (a threads.deadlock.Account)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.run(DeadlockDemo.java:59)

"Thread-6" prio=5 tid=7f97129a8800 nid=0x117e49000 waiting for monitor entry [117e48000]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.transfer(DeadlockDemo.java:81)
        - waiting to lock <7f3366e80> (a threads.deadlock.Account)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.run(DeadlockDemo.java:59)

"Thread-5" prio=5 tid=7f97128a1800 nid=0x117d46000 waiting for monitor entry [117d45000]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.transfer(DeadlockDemo.java:81)
        - waiting to lock <7f3366f28> (a threads.deadlock.Account)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.run(DeadlockDemo.java:59)

"Thread-4" prio=5 tid=7f97121af800 nid=0x117c43000 waiting for monitor entry [117c42000]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.transfer(DeadlockDemo.java:86)
        - waiting to lock <7f3366e80> (a threads.deadlock.Account)
        - locked <7f3366e98> (a threads.deadlock.Account)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.run(DeadlockDemo.java:59)

"Thread-3" prio=5 tid=7f97121ae800 nid=0x117b40000 waiting for monitor entry [117b3f000]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.transfer(DeadlockDemo.java:86)
        - waiting to lock <7f3366e80> (a threads.deadlock.Account)
        - locked <7f3366ef8> (a threads.deadlock.Account)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.run(DeadlockDemo.java:59)

"Thread-2" prio=5 tid=7f971224a000 nid=0x117a3d000 waiting for monitor entry [117a3c000]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.transfer(DeadlockDemo.java:86)
        - waiting to lock <7f3366eb0> (a threads.deadlock.Account)
        - locked <7f3366f40> (a threads.deadlock.Account)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.run(DeadlockDemo.java:59)

"RMI TCP Accept-0" daemon prio=5 tid=7f97128fd800 nid=0x117837000 runnable [117836000]
   java.lang.Thread.State: RUNNABLE
        at java.net.PlainSocketImpl.socketAccept(Native Method)
        at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:408)
        - locked <7f32ee740> (a java.net.SocksSocketImpl)
        at java.net.ServerSocket.implAccept(ServerSocket.java:462)
        at java.net.ServerSocket.accept(ServerSocket.java:430)
        at sun.management.jmxremote.LocalRMIServerSocketFactory$1.accept(LocalRMIServerSocketFactory.java:34)
        at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:369)
        at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:341)
        at java.lang.Thread.run(Thread.java:680)

"Poller SunPKCS11-Darwin" daemon prio=1 tid=7f97128fd000 nid=0x117734000 waiting on condition [117733000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
        at java.lang.Thread.sleep(Native Method)
        at sun.security.pkcs11.SunPKCS11$TokenPoller.run(SunPKCS11.java:692)
        at java.lang.Thread.run(Thread.java:680)

"Low Memory Detector" daemon prio=5 tid=7f971209e000 nid=0x1173ec000 runnable [00000000]
   java.lang.Thread.State: RUNNABLE

"C2 CompilerThread1" daemon prio=9 tid=7f971209d000 nid=0x1172e9000 waiting on condition [00000000]
   java.lang.Thread.State: RUNNABLE

"C2 CompilerThread0" daemon prio=9 tid=7f971209c800 nid=0x1171e6000 waiting on condition [00000000]
   java.lang.Thread.State: RUNNABLE

"Signal Dispatcher" daemon prio=9 tid=7f971209b800 nid=0x1170e3000 waiting on condition [00000000]
   java.lang.Thread.State: RUNNABLE

"Surrogate Locker Thread (Concurrent GC)" daemon prio=5 tid=7f971209a800 nid=0x116fe0000 waiting on condition [00000000]
   java.lang.Thread.State: RUNNABLE

"Finalizer" daemon prio=8 tid=7f971209a000 nid=0x116d1c000 in Object.wait() [116d1b000]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <7f3001300> (a java.lang.ref.ReferenceQueue$Lock)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:118)
        - locked <7f3001300> (a java.lang.ref.ReferenceQueue$Lock)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:134)
        at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:159)

"Reference Handler" daemon prio=10 tid=7f9712099000 nid=0x116c19000 in Object.wait() [116c18000]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <7f30011d8> (a java.lang.ref.Reference$Lock)
        at java.lang.Object.wait(Object.java:485)
        at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116)
        - locked <7f30011d8> (a java.lang.ref.Reference$Lock)

"VM Thread" prio=9 tid=7f9712096800 nid=0x116b16000 runnable

"Gang worker#0 (Parallel GC Threads)" prio=9 tid=7f9712002800 nid=0x1135c7000 runnable

"Gang worker#1 (Parallel GC Threads)" prio=9 tid=7f9712003000 nid=0x1136ca000 runnable

"Concurrent Mark-Sweep GC Thread" prio=9 tid=7f971204d800 nid=0x116790000 runnable

"VM Periodic Task Thread" prio=10 tid=7f97122d4000 nid=0x11793a000 waiting on condition

"Exception Catcher Thread" prio=10 tid=7f9712001800 nid=0x1103ef000 runnable

JNI global references: 1037

Found one Java-level deadlock:
=============================
"Thread-21":
  waiting to lock monitor 7f97118bd560 (object 7f3366f58, a threads.deadlock.Account),
  which is held by "Thread-20"
"Thread-20":
  waiting to lock monitor 7f97118bc108 (object 7f3366e98, a threads.deadlock.Account),
  which is held by "Thread-4"
"Thread-4":
  waiting to lock monitor 7f9711834360 (object 7f3366e80, a threads.deadlock.Account),
  which is held by "Thread-7"
"Thread-7":
  waiting to lock monitor 7f97118b9708 (object 7f3366eb0, a threads.deadlock.Account),
  which is held by "Thread-11"
"Thread-11":
  waiting to lock monitor 7f97118bd560 (object 7f3366f58, a threads.deadlock.Account),
  which is held by "Thread-20"

Java stack information for the threads listed above:
===================================================
"Thread-21":
        at threads.deadlock.DeadlockDemo$BadTransferOperation.transfer(DeadlockDemo.java:86)
        - waiting to lock <7f3366f58> (a threads.deadlock.Account)
        - locked <7f3366ee0> (a threads.deadlock.Account)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.run(DeadlockDemo.java:59)
"Thread-20":
        at threads.deadlock.DeadlockDemo$BadTransferOperation.transfer(DeadlockDemo.java:86)
        - waiting to lock <7f3366e98> (a threads.deadlock.Account)
        - locked <7f3366f58> (a threads.deadlock.Account)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.run(DeadlockDemo.java:59)
"Thread-4":
        at threads.deadlock.DeadlockDemo$BadTransferOperation.transfer(DeadlockDemo.java:86)
        - waiting to lock <7f3366e80> (a threads.deadlock.Account)
        - locked <7f3366e98> (a threads.deadlock.Account)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.run(DeadlockDemo.java:59)
"Thread-7":
        at threads.deadlock.DeadlockDemo$BadTransferOperation.transfer(DeadlockDemo.java:86)
        - waiting to lock <7f3366eb0> (a threads.deadlock.Account)
        - locked <7f3366e80> (a threads.deadlock.Account)
        at threads.deadlock.DeadlockDemo$BadTransferOperation.run(DeadlockDemo.java:59)
"Thread-11":
        at threads.deadlock.DeadlockDemo$BadTransferOperation.transfer(DeadlockDemo.java:86)
        - waiting to lock <7f3366f58> (a threads.deadlock.Account)
        - locked <7f3366eb0> (a threads.deadlock.Account)
        at
threads.deadlock.DeadlockDemo$BadTransferOperation.run(DeadlockDemo.java:59)
Found 1 deadlock.
Heap par new generation total 19136K, used 11590K [7f3000000, 7f44c0000, 7f44c0000) eden space 17024K, 68% used [7f3000000, 7f3b51ac0, 7f40a0000) from space 2112K, 0% used [7f40a0000, 7f40a0000, 7f42b0000) to space 2112K, 0% used [7f42b0000, 7f42b0000, 7f44c0000) concurrent mark-sweep generation total 63872K, used 0K [7f44c0000, 7f8320000, 7fae00000) concurrent-mark-sweep perm gen total 21248K, used 8268K [7fae00000, 7fc2c0000, 800000000)

Scanning quickly through, you can see that this thread dump is divided into four parts. These are:
1. A complete list of all the application’s threads
2. A list of deadlocked threads
3. A small stack trace of the deadlocked threads
4. The application’s heap summary

The Thread List
The thread list in point one above is a list of all the application’s threads and their current status. From this you can see that an application consists of a whole bunch of threads, which you can roughly divide into two groups. Firstly there are the background threads. These are the ones that every application has, which get on with all the dirty jobs that we, as application programmers, don’t usually need to worry about. These have names such as DestroyJavaVM, Low Memory Detector, Finalizer, Exception Catcher Thread and Concurrent Mark-Sweep GC Thread. Secondly, there are the threads that you or I may create as part of our code. These usually have names that consist of the word Thread followed by a number, for example: Thread-3, Thread-6 and Thread-20.
"Thread-20" prio=5 tid=7f971216c000 nid=0x118c73000 waiting for monitor entry [118c72000] java.lang.Thread.State: BLOCKED (on object monitor) at threads.deadlock.DeadlockDemo$BadTransferOperation.transfer(DeadlockDemo.java:82) - waiting to lock <7f3366e98> (a threads.deadlock.Account) - locked <7f3366f58> (a threads.deadlock.Account) at threads.deadlock.DeadlockDemo$BadTransferOperation.run(DeadlockDemo.java:58) Looking at the information given on Thread-20 in more detail you can see that this can be broken down into several parts. These are: <td >Thread-20 <td >The thread’s name as described above.<tr > <td >prio=5 <td >The thread’s priority. A number from 1 to 10, where 1 is the lowest and 10 is the highest priority. <tr > <tr > <td >tid=7f971216c000 <td >The thread id. A unique number that’s returned by a Thread.getId() call. <tr > <td >nid=0x118c73000 <td >The native thread id. This maps to a platform dependent thread id. <tr > <td >waiting for monitor entry [118c72000] java.lang.Thread.State: BLOCKED (on object monitor) <td >This is the status of the thread; in this case it’s BLOCKED. Also included is a stack trace outlining where the thread is blocked. Note that a thread can also be marked as a daemon. For example: “RMI TCP Accept-0″ daemon prio=5 tid=7f97128fd800 nid=0x117837000 runnable [117836000] java.lang.Thread.State: RUNNABLE Daemon threads are background task threads such as the RMI TCP Accept-0 thread listed above. A daemon thread is a thread that does not prevent the JVM from exiting. The JVM will exit, or close down, when only daemon threads remain.However, the thread list doesn’t really help in tracing the cause of a deadlock, so moving swiftly along… The Deadlock Thread List This section of the thread dump contains a list of all threads that are involved in the deadlock. 
Found one Java-level deadlock:
=============================
"Thread-21": waiting to lock monitor 7f97118bd560 (object 7f3366f58, a threads.deadlock.Account), which is held by "Thread-20"
"Thread-20": waiting to lock monitor 7f97118bc108 (object 7f3366e98, a threads.deadlock.Account), which is held by "Thread-4"
"Thread-4": waiting to lock monitor 7f9711834360 (object 7f3366e80, a threads.deadlock.Account), which is held by "Thread-7"
"Thread-7": waiting to lock monitor 7f97118b9708 (object 7f3366eb0, a threads.deadlock.Account), which is held by "Thread-11"
"Thread-11": waiting to lock monitor 7f97118bd560 (object 7f3366f58, a threads.deadlock.Account), which is held by "Thread-20"

From the segment above, you can see that there are five threads, all blocked on instances of the threads.deadlock.Account class. Leaving aside the monitor ids and Account instances, you can see that "Thread-21" is waiting for "Thread-20", which is waiting for "Thread-4", which in turn is waiting for "Thread-7". "Thread-7" is waiting for "Thread-11", which is waiting for "Thread-20": a deadlock loop, as shown in the diagram below.

The Deadlock Stack Traces
The final piece of the puzzle is the list of deadlocked thread stack traces, as shown below:
Java stack information for the threads listed above:
===================================================
"Thread-21":
  at threads.deadlock.DeadlockDemo$BadTransferOperation.transfer(DeadlockDemo.java:86)
  - waiting to lock <7f3366f58> (a threads.deadlock.Account)
  - locked <7f3366ee0> (a threads.deadlock.Account)
  at threads.deadlock.DeadlockDemo$BadTransferOperation.run(DeadlockDemo.java:59)
"Thread-20":
  at threads.deadlock.DeadlockDemo$BadTransferOperation.transfer(DeadlockDemo.java:86)
  - waiting to lock <7f3366e98> (a threads.deadlock.Account)
  - locked <7f3366f58> (a threads.deadlock.Account)
  at threads.deadlock.DeadlockDemo$BadTransferOperation.run(DeadlockDemo.java:59)
"Thread-4":
  at
threads.deadlock.DeadlockDemo$BadTransferOperation.transfer(DeadlockDemo.java:86)
  - waiting to lock <7f3366e80> (a threads.deadlock.Account)
  - locked <7f3366e98> (a threads.deadlock.Account)
  at threads.deadlock.DeadlockDemo$BadTransferOperation.run(DeadlockDemo.java:59)
"Thread-7":
  at threads.deadlock.DeadlockDemo$BadTransferOperation.transfer(DeadlockDemo.java:86)
  - waiting to lock <7f3366eb0> (a threads.deadlock.Account)
  - locked <7f3366e80> (a threads.deadlock.Account)
  at threads.deadlock.DeadlockDemo$BadTransferOperation.run(DeadlockDemo.java:59)
"Thread-11":
  at threads.deadlock.DeadlockDemo$BadTransferOperation.transfer(DeadlockDemo.java:86)
  - waiting to lock <7f3366f58> (a threads.deadlock.Account)
  - locked <7f3366eb0> (a threads.deadlock.Account)
  at threads.deadlock.DeadlockDemo$BadTransferOperation.run(DeadlockDemo.java:59)

From the previous section, we know that Thread-20 is waiting, via a circuitous route, for Thread-11, and Thread-11 is waiting for Thread-20. This is our deadlock. The next step is to tie this deadlock to lines of code using the thread stack traces above, and I’ve simplified this in the diagram below.
In the above diagram I’ve removed the 7f3366 prefix from the object ids for clarity; hence, object 7f3366f58 is now f58. From this diagram, you can see that object f58 is locked by Thread-20 on line 59 and is waiting for a lock on object e98 on line 86. Following the arrows down, you can see that Thread-7 is waiting for a lock on eb0 on line 86, which in turn is locked by Thread-11 on line 59. Thread-11 is waiting for a lock on f58 on line 86, which, looping back up, is locked on line 58 by Thread-20. So, where are these lines of code? The following shows line 59:…and this is line 86:
Everyone gets surprises sometimes, and the stack trace above surprised me. I was expecting the locks to be on lines 85 and 86; however, they were on 59 and 86.
Since line 59 doesn’t contain a synchronized keyword, I’m guessing that the compiler has done some optimisation on the transfer(…) method’s first synchronized keyword. The conclusion that can be drawn from this is that the code, which randomly picks two Account objects from a list, is locking them in the wrong order on lines 59 and 86. So what’s the fix? More on that next time; however, there’s one final point to note, which is that the make-up of a deadlock may not be the same every time you generate a thread dump on a program. After running the DeadlockDemo program again and using kill -3 PID to get hold of another thread dump, I obtained these results:

Found one Java-level deadlock:
=============================
"Thread-20": waiting to lock monitor 7fdc7c802508 (object 7f311a530, a threads.deadlock.Account), which is held by "Thread-3"
"Thread-3": waiting to lock monitor 7fdc7a83d008 (object 7f311a518, a threads.deadlock.Account), which is held by "Thread-11"
"Thread-11": waiting to lock monitor 7fdc7c802508 (object 7f311a530, a threads.deadlock.Account), which is held by "Thread-3"

Java stack information for the threads listed above:
===================================================
"Thread-20":
  at threads.deadlock.DeadlockDemo$BadTransferOperation.transfer(DeadlockDemo.java:86)
  - waiting to lock <7f311a530> (a threads.deadlock.Account)
  at threads.deadlock.DeadlockDemo$BadTransferOperation.run(DeadlockDemo.java:59)
"Thread-3":
  at threads.deadlock.DeadlockDemo$BadTransferOperation.transfer(DeadlockDemo.java:87)
  - waiting to lock <7f311a518> (a threads.deadlock.Account)
  - locked <7f311a530> (a threads.deadlock.Account)
  at threads.deadlock.DeadlockDemo$BadTransferOperation.run(DeadlockDemo.java:59)
"Thread-11":
  at threads.deadlock.DeadlockDemo$BadTransferOperation.transfer(DeadlockDemo.java:87)
  - waiting to lock <7f311a530> (a threads.deadlock.Account)
  - locked <7f311a518> (a threads.deadlock.Account)
  at
threads.deadlock.DeadlockDemo$BadTransferOperation.run(DeadlockDemo.java:59)
Found 1 deadlock.
In this thread dump fewer threads are involved in the deadlock, but if you analyze it you can draw the same conclusions as from my first example. Next time: fixing the code… For more information see the other blogs in this series. All source code for this and the other blogs in the series is available on GitHub at git://github.com/roghughe/captaindebug.git   Reference: Investigating Deadlocks – Part 3: Analysing the Thread Dump from our JCG partner Roger Hughes at the Captain Debug’s Blog blog. ...
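As a complement to reading thread dumps by hand, the JVM exposes the same deadlock detection programmatically through ThreadMXBean. The sketch below is illustrative and is not the article's DeadlockDemo: two daemon threads take two monitors in opposite order, and findDeadlockedThreads() then reports the cycle.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;
import java.util.concurrent.CountDownLatch;

public class DeadlockDetection {

    // Creates two daemon threads that take two monitors in opposite order,
    // then polls the JVM's deadlock detector until it reports the cycle.
    // Returns the number of deadlocked threads found.
    public static int createDeadlockAndDetect() throws InterruptedException {
        final Object lockA = new Object();
        final Object lockB = new Object();
        // Each thread signals once it holds its first monitor, so the
        // deadlock is guaranteed before either tries to take the second one.
        final CountDownLatch firstLockHeld = new CountDownLatch(2);

        startDaemon(() -> {
            synchronized (lockA) {
                firstLockHeld.countDown();
                awaitQuietly(firstLockHeld);
                synchronized (lockB) { } // blocks forever: lockB is held by the other thread
            }
        });
        startDaemon(() -> {
            synchronized (lockB) {
                firstLockHeld.countDown();
                awaitQuietly(firstLockHeld);
                synchronized (lockA) { } // blocks forever: lockA is held by the other thread
            }
        });

        ThreadMXBean mxBean = ManagementFactory.getThreadMXBean();
        long[] deadlockedIds = mxBean.findDeadlockedThreads(); // null while no deadlock is seen
        while (deadlockedIds == null) {
            Thread.sleep(50);
            deadlockedIds = mxBean.findDeadlockedThreads();
        }
        return deadlockedIds.length;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("deadlocked threads: " + createDeadlockAndDetect());
    }

    private static void startDaemon(Runnable task) {
        Thread thread = new Thread(task);
        thread.setDaemon(true); // daemon threads let the JVM exit despite the deadlock
        thread.start();
    }

    private static void awaitQuietly(CountDownLatch latch) {
        try {
            latch.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Running it prints a count matching what the "Found one Java-level deadlock" section of a dump would report for these two threads.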
junit-logo

JUnit Rules

The first time I stumbled over a JUnit @Rule annotation I was a bit irritated by the concept. Having a public field in a test case seemed somewhat odd, and so I was reluctant to use rules regularly. But after a while I got used to them, and it turned out that rules can ease writing tests in many ways. This post gives a quick introduction to the concept and some short examples of what rules are good for.

What are JUnit Rules?
Let’s start with a look at a JUnit out-of-the-box rule. The TemporaryFolder is a test helper that can be used to create files and folders located under the file system directory for temporary content1. The interesting thing with the TemporaryFolder is that it guarantees to delete its files and folders when the test method finishes2. To work as expected, the temporary folder instance must be assigned to an @Rule annotated field that must be public, not static, and a subtype of TestRule:

public class MyTest {

  @Rule
  public TemporaryFolder temporaryFolder = new TemporaryFolder();

  @Test
  public void testRun() throws IOException {
    assertTrue( temporaryFolder.newFolder().exists() );
  }
}

How does it work?
Rules provide a possibility to intercept test method calls, much as an AOP framework would. Comparable to an around advice in AspectJ, you can do useful things before and/or after the actual test execution3. Although this sounds complicated, it is quite easy to achieve. The API part of a rule definition has to implement TestRule. The only method of this interface, apply, returns a Statement. Statements represent – simply spoken – your tests within the JUnit runtime, and Statement#evaluate() executes them.
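As an aside, the cleanup guarantee TemporaryFolder gives you boils down to the JDK's own temp-file APIs plus a delete in a finally block. A stdlib-only sketch of that lifecycle (no JUnit involved; class and method names are illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;

public class TempFolderLifecycle {

    // Returns { the file existed during the "test", the folder still exists afterwards }.
    public static boolean[] run() throws IOException {
        // Create a folder under java.io.tmpdir, as TemporaryFolder does.
        Path folder = Files.createTempDirectory("junit-style-");
        boolean usableDuringTest;
        try {
            Path file = Files.createTempFile(folder, "data-", ".txt");
            usableDuringTest = Files.exists(file); // the fixture is usable here
        } finally {
            // Guaranteed cleanup, mirroring what the rule does after the test:
            // delete children before the folder itself.
            try (var paths = Files.walk(folder)) {
                paths.sorted(Comparator.reverseOrder()).forEach(p -> p.toFile().delete());
            }
        }
        return new boolean[] { usableDuringTest, Files.exists(folder) };
    }

    public static void main(String[] args) throws IOException {
        boolean[] result = run();
        System.out.println("usable during test: " + result[0] + ", left behind: " + result[1]);
    }
}
```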
Now the basic idea is to provide wrapper extensions of Statement that can do the actual contributions by overriding Statement#evaluate():

public class MyRule implements TestRule {

  @Override
  public Statement apply( Statement base, Description description ) {
    return new MyStatement( base );
  }
}

public class MyStatement extends Statement {

  private final Statement base;

  public MyStatement( Statement base ) {
    this.base = base;
  }

  @Override
  public void evaluate() throws Throwable {
    System.out.println( "before" );
    try {
      base.evaluate();
    } finally {
      System.out.println( "after" );
    }
  }
}

MyStatement is implemented as a wrapper that is used in MyRule#apply(Statement,Description) to wrap the original statement given as argument. It is easy to see that the wrapper overrides Statement#evaluate() to do something before and after the actual evaluation of the test4. The next snippet shows how MyRule can be used in exactly the same way as the TemporaryFolder above:

public class MyTest {

  @Rule
  public MyRule myRule = new MyRule();

  @Test
  public void testRun() {
    System.out.println( "during" );
  }
}

Launching the test case leads to the following console output, which proves that our example rule works as expected. The test execution gets intercepted and modified by our rule to print ‘before’ and ‘after’ around the ‘during’ of the test:

before
during
after

Now that the basics are understood, let’s have a look at slightly more useful things you could do with rules.

Test Fixtures
Quoted from the corresponding Wikipedia section, a test fixture ‘is all the things that must be in place in order to run a test and expect a particular outcome. Frequently fixtures are created by handling setUp() and tearDown() events of the unit testing framework’.
With JUnit this often looks somewhat like this:

public class MyTest {

  private MyFixture myFixture;

  @Test
  public void testRun1() {
    myFixture.configure1();
    // do some testing here
  }

  @Test
  public void testRun2() {
    myFixture.configure2();
    // do some testing here
  }

  @Before
  public void setUp() {
    myFixture = new MyFixture();
  }

  @After
  public void tearDown() {
    myFixture.dispose();
  }
}

Suppose you use a particular fixture the way shown above in many of your tests. In that case it could be nice to get rid of the setUp() and tearDown() methods. Given the sections above, we now know that this can be done by changing MyFixture to implement TestRule. An appropriate Statement implementation would have to ensure that it calls MyFixture#dispose() and could look like this:

public class MyFixtureStatement extends Statement {

  private final Statement base;
  private final MyFixture fixture;

  public MyFixtureStatement( Statement base, MyFixture fixture ) {
    this.base = base;
    this.fixture = fixture;
  }

  @Override
  public void evaluate() throws Throwable {
    try {
      base.evaluate();
    } finally {
      fixture.dispose();
    }
  }
}

With this in place the test above can be rewritten as:

public class MyTest {

  @Rule
  public MyFixture myFixture = new MyFixture();

  @Test
  public void testRun1() {
    myFixture.configure1();
    // do some testing here
  }

  @Test
  public void testRun2() {
    myFixture.configure2();
    // do some testing here
  }
}

I have come to appreciate the more compact form of writing tests using rules in a lot of cases, but surely this is also a question of taste and of what you consider better to read5.

Fixture Configuration with Method Annotations
So far I have silently ignored the Description argument of TestRule#apply(Statement,Description). In general a Description describes a test which is about to run or has been run. But it also allows access to some reflective information about the underlying Java method. Among other things, it is possible to read the annotations attached to such a method.
This enables us to combine rules with method annotations for convenient configuration of a TestRule. Consider this annotation type:

@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD})
public @interface Configuration {
  String value();
}

Combined with the following snippet inside MyFixture#apply(Statement,Description) that reads the configuration value annotated to a certain test method…

Configuration annotation = description.getAnnotation( Configuration.class );
String value = annotation.value();
// do something useful with value

… the test case above that demonstrates the usage of the MyFixture rule can be rewritten to:

public class MyTest {

  @Rule
  public MyFixture myFixture = new MyFixture();

  @Test
  @Configuration( value = "configuration1" )
  public void testRun1() {
    // do some testing here
  }

  @Test
  @Configuration( value = "configuration2" )
  public void testRun2() {
    // do some testing here
  }
}

Of course there are limitations to the latter approach due to the fact that annotations only allow enums, classes or String literals as parameters. But there are use cases where this is completely sufficient. A nice example of rules combined with method annotations is provided by the restfuse library. If you are interested in a real-world example, you should have a look at the library’s implementation of the Destination rule6. Coming to the end, the only thing left to say is that I would love to hear from you about other useful examples of JUnit rules you might use to ease your daily testing work:

The directory which is in general returned by System.getProperty( "java.io.tmpdir" ); ↩
Looking at the implementation of TemporaryFolder I must note that it does not check if the deletion of a file is successful.
This might be a weak point in case of open file handles ↩
And, for what it’s worth, you could even replace the complete test method with something else ↩
The delegation to the wrapped statement is put into a try...finally block to ensure that the functionality after the test gets executed even if the test fails. In that case an AssertionError would be thrown and all statements that are not in the finally block would be skipped ↩
You probably noted that the TemporaryFolder example at the beginning is also nothing else but a fixture use case ↩
Note that restfuse’s Destination class implements MethodRule instead of TestRule. This post is based on the latest JUnit version, where MethodRule has been marked as @Deprecated. TestRule is the replacement for MethodRule. But given the knowledge of this post it should nevertheless be easy to understand the implementation ↩

Reference: JUnit Rules from our JCG partner Frank Appel at the Code Affine blog. ...
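The annotation lookup that Description#getAnnotation performs for a rule can be reproduced with plain JDK reflection. A minimal sketch (the class and helper-method names are illustrative, not from JUnit):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class AnnotationLookup {

    // RUNTIME retention is what makes the annotation visible to reflection,
    // and hence to Description#getAnnotation at test time.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface Configuration {
        String value();
    }

    @Configuration("configuration1")
    public void testRun1() {
    }

    // Reads the Configuration value from a method, as a rule would via Description.
    public static String configurationOf(String methodName) throws NoSuchMethodException {
        Method method = AnnotationLookup.class.getMethod(methodName);
        Configuration configuration = method.getAnnotation(Configuration.class);
        return configuration == null ? null : configuration.value();
    }

    public static void main(String[] args) throws NoSuchMethodException {
        System.out.println(configurationOf("testRun1")); // prints configuration1
    }
}
```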
android-logo

Android Reverse Engineering and Decompilation

Reverse engineering an Android Java app using apktool, dex2jar and JD-GUI to convert an .apk file back to .java source. By reverse engineering an Android app (.apk file) we can:
- understand how a particular UI in an app is constructed
- read AndroidManifest.xml: the permissions, activities, intents etc. in the app
- inspect the native libraries and images used in that app
- view the obfuscated code (by default, the Android SDK uses the ProGuard tool, which shrinks, optimizes and obfuscates your code by removing unused code and renaming classes, fields and methods with semantically obscure names)

Required tools (download these first):
- Dex2jar from http://code.google.com/p/dex2jar/
- JD-GUI from http://java.decompiler.free.fr/?q=jdgui
- ApkTool from http://code.google.com/p/android-apktool/

Using ApkTool: to extract AndroidManifest.xml and everything in the res folder (layout XML files, images, the HTML used in WebViews etc.), run the following command:
>apktool.bat d sampleApp.apk
It also extracts a .smali file for each .class file, but these are difficult to read. You can also extract the raw contents of the .apk with a zip utility such as 7-Zip.

Using dex2jar: to generate a .jar file from the .apk file (we then need JD-GUI to view the source code in this .jar), run the following command:
>dex2jar sampleApp.apk

Decompiling the .jar with JD-GUI: it decompiles the .class files (obfuscated in the case of an Android app; readable original code is obtained in the case of other .jar files), i.e. we get the .java source back from the application. Just run jd-gui.exe and use File->Open to view the Java code from a .jar or .class file.
Reference: Android Reverse Engineering – decompile .apk-.dex-.jar-.java from our JCG partner Ganesh Tiwari at the GT’s Blog blog. ...
spring-logo

Spring: Setting Logging Dependencies

This post describes how to set up logging dependencies in Spring. It is based on information available in a post by Dave Syer. A reminder on Java logging frameworks is available here. The code example is available at GitHub in the Spring-Logging-Dependencies directory. Spring uses the Jakarta Commons Logging API (JCL). Unfortunately, many people do not like its runtime discovery algorithm. We can deactivate it and use SLF4J with Logback instead. We will use a variation of the Spring MVC with Annotations example to do so. Here is the modified controller:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class MyController {

  private static final Logger LOG = LoggerFactory.getLogger(MyController.class);

  @RequestMapping(value = "/")
  public String home(Model model) {
    String s = "Logging at: " + System.currentTimeMillis();
    LOG.info(s);
    model.addAttribute("LogMsg", s);
    return "index";
  }
}

We create an SLF4J logger and log some information with the current time in milliseconds. The maven dependencies are:

<properties>
  ...
  <spring.version>3.1.2.RELEASE</spring.version>
  <slf4j.version>1.7.1</slf4j.version>
  <logback.version>0.9.30</logback.version>
</properties>

<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-context</artifactId>
  <version>${spring.version}</version>
  <exclusions>
    <exclusion>
      <groupId>commons-logging</groupId>
      <artifactId>commons-logging</artifactId>
    </exclusion>
  </exclusions>
  <type>jar</type>
</dependency>

<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>jcl-over-slf4j</artifactId>
  <version>${slf4j.version}</version>
  <scope>runtime</scope>
</dependency>

<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-api</artifactId>
  <version>${slf4j.version}</version>
  <type>jar</type>
</dependency>

<dependency>
  <groupId>ch.qos.logback</groupId>
  <artifactId>logback-classic</artifactId>
  <version>${logback.version}</version>
</dependency>

Once built, one can start the example by browsing http://localhost:9393/spring-logging-dependencies/. The page displays the logged message, and you will find the same statement in the logs. More Spring posts here.   Reference: Setting Logging Dependencies In Spring from our JCG partner Jerome Versrynge at the Technical Notes blog. ...
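One detail the post above leaves implicit: with logback-classic on the classpath, Logback configures itself from a logback.xml found on the classpath (typically src/main/resources) and falls back to console defaults without one. A minimal console configuration could look like this (the pattern is just an example, not the one used in the post):

```xml
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <!-- time [thread] level logger - message -->
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="STDOUT" />
  </root>
</configuration>
```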
java-logo

Prototype Design Pattern: Creating another dolly

Creating objects can be a time-consuming and expensive affair, so we are now on a venture to save both time and money. How do we do that? Does anybody remember Dolly? Yes, the sheep that was the first mammal to be cloned. I don’t want to dig into the details, but the key point is that it’s all about cloning: creating a duplicate. The Prototype design pattern is pretty much similar to this real-life example. It is another of the creational design patterns from the Gang of Four. Unlike the factory patterns, this pattern works by cloning an object rather than creating one. When to use this pattern?
- When the cost of creating the object is expensive or complicated
- When trying to keep the number of classes in an application to a minimum
- When adding or removing objects at runtime
- When the client application needs to be unaware of the object creation, composition and representation
- When objects are required that are similar to existing objects

What does the Prototype pattern do? The Prototype pattern allows making new instances by copying existing instances. Cloning results in an object that is distinct from the original. The state of the clone is the same as the original’s at the time of cloning; thereafter each object may undergo state changes independently, and we can modify the clones to perform different things as well. The good thing is that the client can make new instances without knowing which specific class is being instantiated. Structure: the prototype class declares an interface for cloning itself by implementing the Cloneable interface and overriding the clone() method. A concrete prototype implements the clone() method for cloning itself, and the client class creates a new object by asking the prototype to clone itself rather than using the new keyword. The flow of events works in such a manner that the original class (e.g. Class A) is already initialized and instantiated.
This is because we cannot use the clone as it is; we need to instantiate the original class (Class A) before using it. The client then requests the Prototype class for a new object of the same type as Class A. A concrete prototype, depending on the type of object needed, provides the object by cloning itself using the clone() method. Imagine a scenario where we have to fetch user profile data (e.g. the user profile or roles) from the backend for multiple processing steps, and that data does not change very frequently. Fetching it repeatedly would consume expensive database resources, connections and transactions, so instead we can fetch the data in a single call and cache it in the session for further processing. In the above example UserProfile is the main object which will be cloned. UserProfile implements the Cloneable interface. The BankDetails and Identity classes inherit from the UserProfile class; these are the concrete prototype classes. We have introduced a new class called UserProfileRegistry which finds the appropriate UserProfile instance and then returns a clone of it to the client class.

You would need to clone() an object when you want to create another object at runtime that is a true copy of the object you are cloning. A true copy means all the attributes of the newly created object are the same as those of the object you are cloning. If you had instantiated the class using new instead, you would get an object with all attributes at their initial values. For example, if you are designing a system for performing bank account transactions, you would want to make a copy of the object that holds your account information, perform transactions on it, and then replace the original object with the modified one. In such cases, you would want to use clone() instead of new.

Interesting points:
- The creational design patterns can co-exist: Abstract Factory, Builder and Prototype can use the Singleton pattern in their implementations, or they can work in isolation.
- The Prototype pattern needs an initialize operation but no subclassing, whereas Factory Method requires subclassing but no initialize operation.
- It is beneficial for bank-based transactions where we have expensive database queries. Caching helps, and the Prototype pattern is a good fit for this situation, as a copy of the object holding the bank account or user profile information can be used, transactions performed on it, and the original object then replaced with the modified one.
- The above example uses the shallow cloning method. However, we could implement it through deep cloning as well. A detailed explanation of this topic can be found in our article: Deep Diving into Cloning.

Benefits:
- Hides the complexities of creating objects; clients can get new objects without knowing their concrete type.
- Reduces subclassing.

Drawbacks:
- Making a copy of an object can sometimes be complicated.
- Classes that have circular references to other classes cannot really be cloned.

Download Source Code:  Reference: Prototype Design Pattern: Creating another dolly from our JCG partner Mainak Goswami at the Idiotechie blog. ...
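A minimal sketch of the UserProfile prototype described above. The field names, constructor arguments and registry keys are assumptions for illustration, since the article's source download is not reproduced here:

```java
import java.util.HashMap;
import java.util.Map;

// The prototype: declares cloning support via Cloneable and clone().
class UserProfile implements Cloneable {
    private final String userName; // assumed field, for illustration
    private final String role;     // assumed field, for illustration

    UserProfile(String userName, String role) {
        // Imagine an expensive database call happening here.
        this.userName = userName;
        this.role = role;
    }

    String getUserName() { return userName; }
    String getRole() { return role; }

    @Override
    public UserProfile clone() {
        try {
            return (UserProfile) super.clone(); // shallow copy, as in the article
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e); // cannot happen: we implement Cloneable
        }
    }
}

// Concrete prototype.
class BankDetails extends UserProfile {
    BankDetails(String userName, String role) { super(userName, role); }
}

// Finds the cached instance and hands out clones instead of hitting the database.
class UserProfileRegistry {
    private final Map<String, UserProfile> cache = new HashMap<>();

    void register(String key, UserProfile prototype) { cache.put(key, prototype); }

    UserProfile createFrom(String key) { return cache.get(key).clone(); }
}

public class PrototypeDemo {
    public static void main(String[] args) {
        UserProfileRegistry registry = new UserProfileRegistry();
        registry.register("bank", new BankDetails("alice", "customer"));

        // Two requests yield two distinct objects with the same state.
        UserProfile first = registry.createFrom("bank");
        UserProfile second = registry.createFrom("bank");
        System.out.println(first.getUserName() + ", distinct copies: " + (first != second));
    }
}
```

Note that super.clone() preserves the runtime type, so cloning a cached BankDetails yields a BankDetails even though the registry is typed against UserProfile.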
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.