

Writing your own logging service?

Application logging is one of those things, like the favorite-editor wars: everyone has their own opinion and there are endless implementations and flavors out there. Nowadays, you would likely use something already available such as Log4j or Logback. Even the JDK has a built-in "java.util.logging" implementation. To avoid coupling to a specific logger, many projects opt for a facade interface, and there are already a couple of good ones out there, such as SLF4J or Apache Commons Logging. Despite all this, many project owners still want to try writing their own logging service! I wondered: if I were asked to write one myself, what would it look like? So I played around and came up with this simple facade that wraps one of the logger providers (the JDK logger in this case), and you can check it out here. With my logger, you can use it like this in your application:

    import zemian.service.logging.*;

    class MyService {
        Log log = LogFactory.createLog(MyService.class);

        public void run() {
            log.info(Message.msg("%s service is running now.", this));
        }
    }

Some principles I followed when trying this out:

- Use simple names for the different message levels: error, warn, info, debug and trace (no crazy "fine", "finer" and "finest" level names).
- Separate the Log service from its implementation so you can swap providers.
- Use a Message logging POJO for data encapsulation; it simplifies the log service interface.
- Use log parameters and lazy format binding to construct log messages, for better performance.
- Do not let the logging service implementation get complex. For example, I recommend NOT mixing business logic or data into your logging if possible! If you need custom error codes to be logged, for example, you can write your own Exception class and encapsulate them there, and then let the logging service do its job: just logging.

Here are some general rules about using a logger in your application that I recommend:

- Use ERROR log messages when there is really an error! Try not to log an "acceptable" error in your application. Treat an ERROR as a critical problem: if it happens in production, someone should be paged to take care of it immediately. Each message should include a full Java stacktrace! Some applications might want to assign a unique error code to messages at this level for easier identification and troubleshooting.
- Use WARN log messages for problems that are ignorable during production operation, but that are not a good idea to suppress. These likely point to potential problems in your application or environment. Each message should include a full Java stacktrace, if one is available!
- Use INFO log messages for admin operators or application monitors to see how your application is doing: high-level application status or important, meaningful business indicators. Do not litter your log with developers' messages or unnecessary, verbose and unclear output. Each message should be written as a clear sentence so operators know it is meaningful.
- Use DEBUG log messages for developers to see and troubleshoot the application. Use this level at critical application junctions and operations to show object and service states, etc. Try not to add repeated in-loop messages here and litter your log content.
- Use TRACE log messages for developers to troubleshoot tight loops and high-traffic message information.

You should select a logger provider that lets you configure and turn these logging levels ON or OFF (preferably at runtime as well). Each level should automatically suppress all levels below it. And of course you want a logger provider that can send log output to STDOUT and/or to a FILE as well.

Reference: Writing your own logging service? from our JCG partner Zemian Deng at the A Programmer's Journal blog.
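The actual facade lives in the author's zemian.service.logging package; as a rough illustration of the same principles (simple level names, a provider-agnostic interface, lazy format binding), a minimal sketch might look like the following. The Log, LogFactory and Message names mirror the snippet above, but the bodies are my own assumption, not the real implementation:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Message encapsulates a format string plus its parameters; the string is
// only rendered if the target level is actually enabled (lazy binding).
class Message {
    private final String format;
    private final Object[] args;
    private Message(String format, Object... args) { this.format = format; this.args = args; }
    static Message msg(String format, Object... args) { return new Message(format, args); }
    String render() { return String.format(format, args); }
}

// The facade: simple level names, no provider types leaking out.
interface Log {
    void error(Message m);
    void warn(Message m);
    void info(Message m);
    void debug(Message m);
    void trace(Message m);
}

// LogFactory binds the facade to one provider -- java.util.logging here.
// Swapping providers means changing only this class.
class LogFactory {
    static Log createLog(Class<?> cls) {
        Logger jul = Logger.getLogger(cls.getName());
        return new Log() {
            private void log(Level level, Message m) {
                // The Supplier overload defers String.format until the
                // level is enabled, so disabled levels cost almost nothing.
                jul.log(level, m::render);
            }
            public void error(Message m) { log(Level.SEVERE, m); }
            public void warn(Message m)  { log(Level.WARNING, m); }
            public void info(Message m)  { log(Level.INFO, m); }
            public void debug(Message m) { log(Level.FINE, m); }
            public void trace(Message m) { log(Level.FINEST, m); }
        };
    }
}
```

Usage then matches the article's snippet: `LogFactory.createLog(MyService.class)` followed by `log.info(Message.msg(...))`.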

In Favour of Self-Signed Certificates

Today I watched the Google I/O presentation about HTTPS everywhere and read a couple of articles saying that Google is going to rank sites using HTTPS higher. Apart from that, SPDY mandates the use of TLS, and it’s very likely the same will be true for HTTP/2. Chromium proposes marking non-HTTPS sites as non-secure. And that’s perfect. Except it’s not very nice for small site owners. In the presentation above, the speakers say “it’s very easy” multiple times. And it is: you just have to follow a dozen checklists with a dozen items, run your site through a couple of tools and pay a CA 30 bucks per year. I have run a couple of personal sites over HTTPS (non-commercial, so using a free StartCom certificate), and I still shiver at the thought of setting up a certificate. You may say that’s because I’m an Ops newbie, but it’s just a tedious process. But let’s say every site owner has a webmaster on contract who will renew the certificate every year. What’s the point? The talk rightly points out three main aspects – data integrity, authentication and encryption. And it also rightly points out that no matter how mundane your site is, there is information that can be extracted from your visit there (well, even with HTTPS the host address is visible, so it doesn’t matter what torrents you downloaded, it is obvious you were visiting thepiratebay). But does it really matter if my blog properly authenticates itself to the user? Does it matter if the website of the local hairdresser may suffer a man-in-the-middle attack, with someone posing as the site? Arguably not. If there is a determined attacker who wants to observe what recipes you are cooking right now, I bet he would find it easier to just install a keylogger. Anyway, do we have any data on how many sites are actually just static websites or blogs? How many websites don’t have anything more than a contact form (if that)? 22% of newly registered domains in the U.S. are using WordPress.
That doesn’t tell us much, as you can build quite interactive sites with WordPress, but it is probably an indication. My guess is that the majority of sites are simple content sites that you do not interact with, or where interaction is limited to posting an anonymous comment. Do these sites need to go through the complexity and cost of obtaining an SSL certificate? Certification Authorities may be rejoicing already – forcing HTTPS means there will be an inelastic demand for certificates, which means prices are not guaranteed to drop. If HTTPS is forced upon every webmaster (which should be the case, and I firmly support that), we should have a free, effortless way to allow the majority of sites to comply. And the only option that comes to mind is self-signed certificates. They do not guarantee there is no man-in-the-middle, but they do allow encrypting the communication, making it impossible for a passive attacker to see what you are browsing or posting. Server software (apache, nginx, tomcat, etc.) could have a switch “use self-signed certificate” and automatically generate and store the key pair on the server (a single server, of course, as traffic is unlikely to be high for these kinds of sites). Browsers must change, however. They should no longer report self-signed certificates as insecure. At least not until the user tries to POST data to the server (and especially if there is a password field on the page). Upon POSTing data to the server, the browser should warn the user that it cannot verify the authenticity of the certificate and that he should proceed only if he thinks the data is not sensitive. Or even passing any parameters (be it GET or POST) could trigger a warning. That won’t be sufficient, as one can issue a GET request for site.com/username/password, or even embed an image or use javascript. That’s why the heuristics to detect and show a warning could include submitting forms, changing src and href with javascript, etc.
Can that cover every possible case, and won’t it defeat the purpose? I don’t know. Even small, content-only, CMS-based sites have admin panels, and that means the owner sends a username and password. Doesn’t this invalidate the whole point made above? It would, if there weren’t an easy fix – certificate pinning. Even now this approach is employed by mobile apps in order to skip the full certificate checks (including revocation). In short, the owner of the site can take the certificate generated by the webserver, import it in the browser (pin it), and be sure that the browser will warn him if someone tries to intercept his traffic. If he hasn’t imported the certificate, the browser would warn him upon submission of the login form, possibly with instructions on what to do. (When speaking about trust, I must mention PGP. I am not aware whether there is a way to use web-of-trust verification of the server certificate instead of CA verification, but it’s worth mentioning as a possible alternative.) So, are self-signed certificates for small, read-only websites secure enough? Certainly not. But relying on a CA isn’t 100% secure either (breaches are common). And the balance between security and usability is hard. I’m not saying my proposal in favour of self-signed certificates is the right way to go. But it’s food for thought. The exploits such an approach could bring to even properly secured sites (via MITM) are to be considered with utter seriousness. Interactive sites, especially online shops, social networks and payment providers, obviously must use a full-featured CA-issued HTTPS certificate, and even that is only the bare minimum. But let’s not force that upon the website of the local bakery and the millions of sites like it. And let’s not penalize them for not purchasing a certificate. P.S. EFF’s Let’s Encrypt looks like a really promising alternative.
P.P.S. See also in-session key negotiation. Reference: In Favour of Self-Signed Certificates from our JCG partner Bozhidar Bozhanov at Bozho’s tech blog.
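Certificate pinning, as described above, boils down to comparing the certificate the server presents against a copy stored earlier. A common lightweight variant pins the SHA-256 fingerprint of the certificate's DER encoding rather than the whole file. The sketch below is my own illustration of that comparison step (the class and method names are hypothetical, and wiring it into an X509TrustManager is left out):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class PinnedCertCheck {

    // Hex-encoded SHA-256 fingerprint of the DER bytes. The pinned value
    // would be computed once, out of band, from the server's certificate.
    static String fingerprint(byte[] derEncodedCert) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256").digest(derEncodedCert);
            StringBuilder sb = new StringBuilder(digest.length * 2);
            for (byte b : digest) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            // SHA-256 is mandatory in every JRE, so this cannot happen.
            throw new IllegalStateException(e);
        }
    }

    // True only if the certificate presented during the TLS handshake
    // matches the one the user pinned earlier.
    static boolean matchesPin(byte[] presentedDer, String pinnedFingerprint) {
        return fingerprint(presentedDer).equalsIgnoreCase(pinnedFingerprint);
    }
}
```

A browser (or a custom TrustManager in an app) would call matchesPin with the leaf certificate's getEncoded() bytes and warn the user on a mismatch instead of silently trusting the CA chain.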

ROI Is Dead!

Last month I was in Huib Schoots’ workshop. He talked about being asked “What’s the ROI of testing?”, to which he replied “I don’t know, what’s the ROI of management?”. We are really obsessed with ROI, aren’t we? We use it a lot as a tool for decision making, but are we using it wisely? ROI (Return On Investment) is comprised of cost and value. Cost is easy to measure: salaries, time, tools, licenses, computers, materials. But can we put a number on value? Let’s take a developer as an example. What’s their value? Is it lines of code? If so, how do different languages compare? Delivered lines of code are hardly a measure. What about productivity, for example lines of code per minute? Should we subtract bugs from that? Even if everything works perfectly, do all the features count as value, or just the ones actually used? Let’s try something else… Maybe we’ll have better luck with testers. Is it the number of bugs they found? How do we count the ones they didn’t find? What if they documented everything they found, and then we decided to ignore all the non-major bugs and fix only the major ones. Is all the work around minor bugs waste (non-value)? You see the pattern – while cost is easy to measure individually, value isn’t. The reason is that as long as it’s work in progress, there isn’t any real value yet, regardless of who’s doing the work. We can measure whatever we like, but we’ll just be lying to ourselves. When the product is released, the investment is the sum of the work put into it. It doesn’t matter who put the work in: the tester, the developer, the product manager, the team leader. Only then can we start talking about value. Now we’re talking. Let’s just calculate the value of the product; at least we’ll have an ROI for that. And to make it easy, we’ll use income as value, because it’s easy to measure. Wait a minute. WHEN you measure actually impacts the income going into the calculation. Is it on release day? 
Unless you’re Apple, or making a blockbuster movie, there’s not much sense in checking on the first day. (By the way: neither Apple nor any major studio stops counting on the first day.) Right, so when do you stop? 1 month after release? 6 months? 2 years? 5 years? Just for the sake of argument, let’s say it’s 2 years. On the 2nd anniversary we evaluate the ROI to be 1.5. We’re happy because the ratio is bigger than one. Now what do I do with this number? Do you remember what we needed the ROI for? Making decisions before we actually did the work. If we have to wait 2 years to make a decision now, let alone do the project to get the number, it’s kind of missing the point. Oh wait, there’s more. It seems that the product was successful enough to carry the next product (released 3 years later) forward. Or it was so crappy that the next product, although much better, suffered from its sibling’s reputation. It seems the value of product B is impacted by product A! Our calculations are ruined! ROI is dead as a decision mechanism, because complexity killed it. There are too many things, external or internal, that go into its calculation, and we don’t even know most of them when we do the calculation. There’s a reason we’re talking about safe-to-fail experiments in agile. We acknowledge we don’t know enough, but we’re willing to invest, and sometimes lose, small amounts of money. We acknowledge complexity, and the only way not to lose big is to lower the cost of failure. That means learning quickly and getting feedback fast. The primary measure of progress is working software, says the Agile Manifesto. We’re willing to continue investing in things that continue to show promise. It’s the only evidence we have, and we should trust it. Not a made up (sorry, well calculated) number that was applicable to the last product we released. Either that, or you can wait a few years, and then decide. Your choice. Reference: ROI Is Dead! 
from our JCG partner Gil Zilberfeld at the Geek Out of Water blog.

How to test a REST api from command line with curl

If you want to quickly test your REST API from the command line, you can use curl. In this post I will present how to execute GET, POST, PUT, HEAD and DELETE HTTP requests against a REST API. For the purpose of this blog post I will be using the REST API developed in my post Tutorial – REST API design and implementation in Java with Jersey and Spring.

1. Introduction

In the first part of the post I will give a brief introduction to curl and what it can do (HTTP requests with options); in the second part I will “translate” the SOAPui test suite developed for the REST API tutorial into curl requests.

1.1. What is curl?

Well, curl is a command line tool and library for transferring data with URL syntax, supporting DICT, FILE, FTP, FTPS, Gopher, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMTP, SMTPS, Telnet and TFTP. curl supports SSL certificates, HTTP POST, HTTP PUT, FTP uploading, HTTP form based upload, proxies, HTTP/2, cookies, user+password authentication (Basic, Digest, NTLM, Negotiate, Kerberos…), file transfer resume, proxy tunneling and more.[1] As mentioned, I will be using curl to simulate HEAD, GET, POST, PUT and DELETE request calls to the REST API.

1.2. HEAD requests

If you want to check whether a resource is serviceable, what kind of headers it provides and other useful meta-information written in the response headers, without having to transfer the entire content, you can make a HEAD request. Let’s say I want to see what I would GET when requesting a Podcast resource.
I would issue the following HEAD request with curl.

Request (curl -I performs a HEAD request):

    curl -I http://localhost:8888/demo-rest-jersey-spring/podcasts/1

or, equivalently:

    curl -i -X HEAD http://localhost:8888/demo-rest-jersey-spring/podcasts/1

Curl options:
-i, --include – include protocol headers in the output
-X, --request – specify the request COMMAND (GET, PUT, DELETE…) to use

Response:

    % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed
    0 631 0 0 0 0 0 0 --:--:-- 0:00:05 --:--:-- 0
    HTTP/1.1 200 OK
    Date: Tue, 25 Nov 2014 12:54:56 GMT
    Server: Jetty(9.0.7.v20131107)
    Access-Control-Allow-Headers: X-extra-header
    Access-Control-Allow-Headers: X-Requested-With, Content-Type, X-Codingpedia
    Allow: OPTIONS
    Content-Type: application/xml
    Access-Control-Allow-Origin: *
    Access-Control-Allow-Methods: GET, POST, DELETE, PUT
    Vary: Accept-Encoding
    Content-Length: 631

Note the headers Access-Control-Allow-Headers: Content-Type, Access-Control-Allow-Methods: GET, POST, DELETE, PUT and Access-Control-Allow-Origin: * in the response. They’ve been added to support Cross-Origin Resource Sharing (CORS). You can find more about that in my post How to add CORS support on the server side in Java with Jersey. What I find a little bit intriguing is the response header Content-Type: application/xml, because I would have expected it to be application/json, since in the resource method defined with Jersey the @Produces annotation media types should have taken precedence:

    @GET
    @Path("{id}")
    @Produces({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
    public Response getPodcastById(@PathParam("id") Long id,
            @QueryParam("detailed") boolean detailed)
            throws IOException, AppException {
        Podcast podcastById = podcastService.getPodcastById(id);
        return Response.status(200)
                .entity(podcastById, detailed
                        ? new Annotation[]{PodcastDetailedView.Factory.get()}
                        : new Annotation[0])
                .header("Access-Control-Allow-Headers", "X-extra-header")
                .allow("OPTIONS").build();
    }

1.3. GET request

Executing curl with no parameters on a URL (resource) will execute a GET.

Request (simple curl call on the resource):

    curl http://localhost:8888/demo-rest-jersey-spring/podcasts/1

Response:

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?><podcast><id>1</id><title>- The Naked Scientists Podcast - Stripping Down Science</title><linkOnPodcastpedia>http://www.podcastpedia.org/podcasts/792/-The-Naked-Scientists-Podcast-Stripping-Down-Science</linkOnPodcastpedia><feed>feed_placeholder</feed><description>The Naked Scientists flagship science show brings you a lighthearted look at the latest scientific breakthroughs, interviews with the world top scientists, answers to your science questions and science experiments to try at home.</description><insertionDate>2014-10-29T10:46:02.00+0100</insertionDate></podcast>

Note that, as expected from the HEAD request, we get an XML document.
Anyway, we can force a JSON response by adding a header line to our curl request, setting the Accept HTTP header to application/json.

Request (curl request with a custom header):

    curl --header "Accept:application/json" http://localhost:8888/demo-rest-jersey-spring/podcasts/1

Curl options:
-H, --header – custom header to pass to the server

The same request with the short option:

    curl -H "Accept:application/json" http://localhost:8888/demo-rest-jersey-spring/podcasts/1

Response (in JSON format):

    {"id":1,"title":"- The Naked Scientists Podcast - Stripping Down Science","linkOnPodcastpedia":"http://www.podcastpedia.org/podcasts/792/-The-Naked-Scientists-Podcast-Stripping-Down-Science","feed":"feed_placeholder","description":"The Naked Scientists flagship science show brings you a lighthearted look at the latest scientific breakthroughs, interviews with the world top scientists, answers to your science questions and science experiments to try at home.","insertionDate":"2014-10-29T10:46:02.00+0100"}

If you want to have it displayed prettier, you can use the following command, provided you have Python installed on your machine.
Request (call the resource with JSON pretty-printing):

    curl -H "Accept:application/json" http://localhost:8888/demo-rest-jersey-spring/podcasts/1 | python -m json.tool

Response (JSON, pretty printed):

    % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed
    100 758 100 758 0 0 6954 0 --:--:-- --:--:-- --:--:-- 6954
    [
        {
            "description": "The Naked Scientists flagship science show brings you a lighthearted look at the latest scientific breakthroughs, interviews with the world top scientists, answers to your science questions and science experiments to try at home.",
            "feed": "feed_placeholder",
            "id": 1,
            "insertionDate": "2014-10-29T10:46:02.00+0100",
            "linkOnPodcastpedia": "http://www.podcastpedia.org/podcasts/792/-The-Naked-Scientists-Podcast-Stripping-Down-Science",
            "title": "- The Naked Scientists Podcast - Stripping Down Science"
        },
        {
            "description": "Quarks & Co: Das Wissenschaftsmagazin",
            "feed": "http://podcast.wdr.de/quarks.xml",
            "id": 2,
            "insertionDate": "2014-10-29T10:46:13.00+0100",
            "linkOnPodcastpedia": "http://www.podcastpedia.org/quarks",
            "title": "Quarks & Co - zum Mitnehmen"
        }
    ]

1.4. Curl request with multiple headers

As you’ve found out in my latest post, How to compress responses in Java REST API with GZip and Jersey, all the responses provided by the REST API are compressed with GZip. This happens only if the client “suggests” that it accepts such encoding, by setting the header Accept-encoding:gzip. To achieve that you simply add another -H option with the corresponding value:

Request (set multiple headers with curl):

    curl -v -H "Accept:application/json" -H "Accept-encoding:gzip" http://localhost:8888/demo-rest-jersey-spring/podcasts/

Curl options:
-v, --verbose – make the operation more talkative
Of course in this case you would get some unreadable characters in the content, if you do not redirect the response to a file: Response with fuzzy characters * Adding handle: conn: 0x28ddd80 * Adding handle: send: 0 * Adding handle: recv: 0 * Curl_addHandleToPipeline: length: 1 * - Conn 0 (0x28ddd80) send_pipe: 1, recv_pipe: 0 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* About to connect() to proxy vldn680 port 19001 (#0) * Trying 10.32.142.80... * Connected to vldn680 (10.32.142.80) port 19001 (#0) > GET http://localhost:8888/demo-rest-jersey-spring/podcasts/ HTTP/1.1 > User-Agent: curl/7.30.0 > Host: localhost:8888 > Proxy-Connection: Keep-Alive > Accept:application/json > Accept-encoding:gzip > < HTTP/1.1 200 OK < Date: Tue, 25 Nov 2014 16:17:02 GMT * Server Jetty(9.0.7.v20131107) is not blacklisted < Server: Jetty(9.0.7.v20131107) < Content-Type: application/json < Access-Control-Allow-Origin: * < Access-Control-Allow-Methods: GET, POST, DELETE, PUT < Access-Control-Allow-Headers: X-Requested-With, Content-Type, X-Codingpedia < Vary: Accept-Encoding < Content-Encoding: gzip < Content-Length: 413 < Via: 1.1 vldn680:8888 < { [data not shown] 100 413 100 413 0 0 2647 0 --:--:-- --:--:-- --:--:-- 2647▒QKo▒0▒+▒▒g▒▒R▒+{▒V▒Pe▒▒؊c▒▒ n▒▒▒▒fæHH▒"▒▒g▒/?2▒eM▒gl▒a▒d ▒{=`7▒Eϖ▒▒c▒ZMn8▒i▒▒▒}H▒▒i1▒3g▒▒▒▒▒ ;▒E▒0O▒n▒R*▒g/E▒▒n=▒▒▒▒)▒U▒▒▒lժ▒Φ▒h▒6▒▒▒_>w▒▒-▒▒:▒▒▒!▒Bb▒Z▒▒tO▒N@'= |▒▒C▒f▒▒loؠ▒,T▒▒A▒4▒▒:▒l+#▒0b!▒▒'▒G▒^▒Iﺬ.TU▒▒▒z▒\▒i^]e▒▒▒▒2▒▒▒֯▒▒?▒:/▒m▒▒▒▒▒Y▒h▒▒▒_䶙V▒+R▒WT▒0▒?f{▒▒▒▒&▒l▒▒Sk▒iԽ~▒▒▒▒▒▒n▒▒▒▒_V]į▒ * Connection #0 to host vldn680 left intact 2. SOAPui test suite translated to curl requests As mentioned, in this second part I will map to curl requests the SOAPui test suite presented here. 2.1. Create podcast(s) resource 2.1.1. 
Delete all podcasts (preparation step)

Request (DELETE all podcasts):

    curl -i -X DELETE http://localhost:8888/demo-rest-jersey-spring/podcasts/

Response:

    HTTP/1.1 204 No Content
    Date: Tue, 25 Nov 2014 14:10:17 GMT
    Server: Jetty(9.0.7.v20131107)
    Content-Type: text/html
    Access-Control-Allow-Origin: *
    Access-Control-Allow-Methods: GET, POST, DELETE, PUT
    Access-Control-Allow-Headers: X-Requested-With, Content-Type, X-Codingpedia
    Vary: Accept-Encoding
    Via: 1.1 vldn680:8888
    Content-Length: 0

2.1.2. POST new podcast without feed – 400 (BAD_REQUEST)

Request:

    curl -i -X POST -H "Content-Type:application/json" http://localhost:8888/demo-rest-jersey-spring/podcasts/ -d '{"title":"- The Naked Scientists Podcast - Stripping Down Science-new-title2","linkOnPodcastpedia":"http://www.podcastpedia.org/podcasts/792/-The-Naked-Scientists-Podcast-Stripping-Down-Science","description":"The Naked Scientists flagship science show brings you a lighthearted look at the latest scientific breakthroughs, interviews with the world top scientists, answers to your science questions and science experiments to try at home."}'

Response (400 Bad Request):

    HTTP/1.1 400 Bad Request
    Date: Tue, 25 Nov 2014 15:12:11 GMT
    Server: Jetty(9.0.7.v20131107)
    Content-Type: application/json
    Access-Control-Allow-Origin: *
    Access-Control-Allow-Methods: GET, POST, DELETE, PUT
    Access-Control-Allow-Headers: X-Requested-With, Content-Type, X-Codingpedia
    Vary: Accept-Encoding
    Content-Length: 271
    Via: 1.1 vldn680:8888
    Connection: close

    {"status":400,"code":400,"message":"Provided data not sufficient for insertion","link":"http://www.codingpedia.org/ama/tutorial-rest-api-design-and-implementation-in-java-with-jersey-and-spring/","developerMessage":"Please verify that the feed is properly generated/set"}

2.1.3. 
POST new podcast correctly – 201 (CREATED) Request POST new podcast correctly – 201 (CREATED) curl -i -X POST -H "Content-Type:application/json" http://localhost:8888/demo-rest-jersey-spring/podcasts/ -d '{"title":"- The Naked Scientists Podcast - Stripping Down Science","linkOnPodcastpedia":"http://www.podcastpedia.org/podcasts/792/-The-Naked-Scientists-Podcast-Stripping-Down-Science","feed":"feed_placeholder","description":"The Naked Scientists flagship science show brings you a lighthearted look at the latest scientific breakthroughs, interviews with the world top scientists, answers to your science questions and science experiments to try at home."}' Response curl -i -X POST -H "Content-Type:application/json" http://localhost:8888/demo-rest-jersey-spring/podcasts/ -d '{"title":"- The Naked Scientists Podcast - Stripping Down Science-new-title2","linkOnPodcastpedia":"http://www.podcastpedia.org/podcasts/792/-The-Naked-Scientists-Podcast-Stripping-Down-Science","description":"The Naked Scientists flagship science show brings you a lighthearted look at the latest scientific breakthroughs, interviews with the world top scientists, answers to your science questions and science experiments to try at home."}' 2.1.4. 
POST same podcast as before to receive – 409 (CONFLICT) Request curl -i -X POST -H "Content-Type:application/json" http://localhost:8888/demo-rest-jersey-spring/podcasts/ -d '{"title":"- The Naked Scientists Podcast - Stripping Down Science","linkOnPodcastpedia":"http://www.podcastpedia.org/podcasts/792/-The-Naked-Scientists-Podcast-Stripping-Down-Science","feed":"feed_placeholder","description":"The Naked Scientists flagship science show brings you a lighthearted look at the latest scientific breakthroughs, interviews with the world top scientists, answers to your science questions and science experiments to try at home."}' Response HTTP/1.1 409 Conflict Date: Tue, 25 Nov 2014 15:58:39 GMT Server: Jetty(9.0.7.v20131107) Content-Type: application/json Access-Control-Allow-Origin: * Access-Control-Allow-Methods: GET, POST, DELETE, PUT Access-Control-Allow-Headers: X-Requested-With, Content-Type, X-Codingpedia Vary: Accept-Encoding Content-Length: 300{"status":409,"code":409,"message":"Podcast with feed already existing in the database with the id 1","link":"http://www.codingpedia.org/ama/tutorial-rest-api-design-and-implementation-in-java-with-jersey-and-spring/","developerMessage":"Please verify that the feed and title are properly generated"} 2.1.5. 
PUT new podcast at location – 201 (CREATED) Request curl -i -X PUT -H "Content-Type:application/json" http://localhost:8888/demo-rest-jersey-spring/podcasts/2 -d '{"id":2,"title":"Quarks & Co - zum Mitnehmen","linkOnPodcastpedia":"http://www.podcastpedia.org/quarks","feed":"http://podcast.wdr.de/quarks.xml","description":"Quarks & Co: Das Wissenschaftsmagazin"}' Response HTTP/1.1 201 Created Location: http://localhost:8888/demo-rest-jersey-spring/podcasts/2 Content-Type: text/html Access-Control-Allow-Origin: * Access-Control-Allow-Methods: GET, POST, DELETE, PUT Access-Control-Allow-Headers: X-Requested-With, Content-Type, X-Codingpedia Vary: Accept-Encoding Content-Length: 60 Server: Jetty(9.0.7.v20131107)A new podcast has been created AT THE LOCATION you specified 2.2. Read podcast resource 2.2.1. GET new inserted podcast – 200 (OK) Request curl -v -H "Accept:application/json" http://localhost:8888/demo-rest-jersey-spring/podcasts/1 | python -m json.tool Response < HTTP/1.1 200 OK < Access-Control-Allow-Headers: X-extra-header < Access-Control-Allow-Headers: X-Requested-With, Content-Type, X-Codingpedia < Allow: OPTIONS < Content-Type: application/json < Access-Control-Allow-Origin: * < Access-Control-Allow-Methods: GET, POST, DELETE, PUT < Vary: Accept-Encoding < Content-Length: 192 * Server Jetty(9.0.7.v20131107) is not blacklisted < Server: Jetty(9.0.7.v20131107) < { [data not shown] * STATE: PERFORM => DONE handle 0x600056180; line 1626 (connection #0) 100 192 100 192 0 0 2766 0 --:--:-- --:--:-- --:--:-- 3254 * Connection #0 to host localhost left intact * Expire cleared { "feed": "http://podcast.wdr.de/quarks.xml", "id": 1, "insertionDate": "2014-06-05T22:35:34.00+0200", "linkOnPodcastpedia": "http://www.podcastpedia.org/quarks", "title": "Quarks & Co - zum Mitnehmen" } 2.2.2. 
GET podcasts sorted by insertion date DESC – 200 (OK) Request curl -v -H "Accept:application/json" http://localhost:8888/demo-rest-jersey-spring/podcasts?orderByInsertionDate=DESC | python -m json.tool Response Pretty formatted response < HTTP/1.1 200 OK < Content-Type: application/json < Access-Control-Allow-Origin: * < Access-Control-Allow-Methods: GET, POST, DELETE, PUT < Access-Control-Allow-Headers: X-Requested-With, Content-Type, X-Codingpedia < Vary: Accept-Encoding < Content-Length: 419 * Server Jetty(9.0.7.v20131107) is not blacklisted < Server: Jetty(9.0.7.v20131107) < 0 419 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{ [data not shown] * STATE: PERFORM => DONE handle 0x600056180; line 1626 (connection #0) 100 419 100 419 0 0 6044 0 --:--:-- --:--:-- --:--:-- 6983 * Connection #0 to host localhost left intact * Expire cleared [ { "feed": "http://podcast.wdr.de/quarks.xml", "id": 1, "insertionDate": "2014-06-05T22:35:34.00+0200", "linkOnPodcastpedia": "http://www.podcastpedia.org/quarks", "title": "Quarks & Co - zum Mitnehmen" }, { "feed": "http://www.dayintechhistory.com/feed/podcast-2", "id": 2, "insertionDate": "2014-06-05T22:35:34.00+0200", "linkOnPodcastpedia": "http://www.podcastpedia.org/podcasts/766/Day-in-Tech-History", "title": "Day in Tech History" } ] 2.3. Update podcast resource 2.3.1. 
PUT not “complete” podcast for FULL update – 400 (BAD_REQUEST) Request curl -v -H "Content-Type:application/json" -X PUT http://localhost:8888/demo-rest-jersey-spring/podcasts/2 -d '{"id":2, "title":"Quarks & Co - zum Mitnehmen","linkOnPodcastpedia":"http://www.podcastpedia.org/quarks","feed":"http://podcast.wdr.de/quarks.xml"}' Response < HTTP/1.1 400 Bad Request < Content-Type: application/json < Access-Control-Allow-Origin: * < Access-Control-Allow-Methods: GET, POST, DELETE, PUT < Access-Control-Allow-Headers: X-Requested-With, Content-Type, X-Codingpedia < Vary: Accept-Encoding < Content-Length: 290 * Server Jetty(9.0.7.v20131107) is not blacklisted < Server: Jetty(9.0.7.v20131107) < * STATE: PERFORM => DONE handle 0x600056180; line 1626 (connection #0) * Connection #0 to host localhost left intact * Expire cleared {"status":400,"code":400,"message":"Please specify all properties for Full UPDATE","link":"http://www.codingpedia.org/ama/tutorial-rest-api-design-and-implementation-in-java-with-jersey-and-spring/","developerMessage":"required properties - id, title, feed, lnkOnPodcastpedia, description"} 2.3.2. 
PUT podcast for FULL update – 200 (OK) Request $ curl -v -H "Content-Type:application/json" -X PUT http://localhost:8888/demo-rest-jersey-spring/podcasts/2 -d '{"id":2, "title":"Quarks & Co - zum Mitnehmen","linkOnPodcastpedia":"http://www.podcastpedia.org/quarks","feed":"http://podcast.wdr.de/quarks.xml", "description":"Quarks & Co: Das Wissenschaftsmagazin"}' Response < HTTP/1.1 200 OK < Location: http://localhost:8888/demo-rest-jersey-spring/podcasts/2 < Content-Type: text/html < Access-Control-Allow-Origin: * < Access-Control-Allow-Methods: GET, POST, DELETE, PUT < Access-Control-Allow-Headers: X-Requested-With, Content-Type, X-Codingpedia < Vary: Accept-Encoding < Content-Length: 86 * Server Jetty(9.0.7.v20131107) is not blacklisted < Server: Jetty(9.0.7.v20131107) < * STATE: PERFORM => DONE handle 0x600056180; line 1626 (connection #0) * Connection #0 to host localhost left intact * Expire cleared The podcast you specified has been fully updated created AT THE LOCATION you specified 2.3.3. 
POST (partial update) for a non-existent podcast – 404 (NOT_FOUND) Request $ curl -v -H "Content-Type:application/json" -X POST http://localhost:8888/demo-rest-jersey-spring/podcasts/3 -d '{"title":"Quarks & Co - zum Mitnehmen - GREAT PODCAST"}' | python -m json.tool Response < HTTP/1.1 404 Not Found < Content-Type: application/json < Access-Control-Allow-Origin: * < Access-Control-Allow-Methods: GET, POST, DELETE, PUT < Access-Control-Allow-Headers: X-Requested-With, Content-Type, X-Codingpedia < Vary: Accept-Encoding < Content-Length: 306 * Server Jetty(9.0.7.v20131107) is not blacklisted < Server: Jetty(9.0.7.v20131107) < { [data not shown] * STATE: PERFORM => DONE handle 0x600056180; line 1626 (connection #0) 100 361 100 306 100 55 9069 1630 --:--:-- --:--:-- --:--:-- 13304 * Connection #0 to host localhost left intact * Expire cleared { "code": 404, "developerMessage": "Please verify existence of data in the database for the id - 3", "link": "http://www.codingpedia.org/ama/tutorial-rest-api-design-and-implementation-in-java-with-jersey-and-spring/", "message": "The resource you are trying to update does not exist in the database", "status": 404 } 2.3.4. POST (partial update) podcast – 200 (OK) Request $ curl -v -H "Content-Type:application/json" -X POST http://localhost:8888/demo-rest-jersey-spring/podcasts/2 -d '{"title":"Quarks & Co - zum Mitnehmen - GREAT PODCAST"}' Response < HTTP/1.1 200 OK < Content-Type: text/html < Access-Control-Allow-Origin: * < Access-Control-Allow-Methods: GET, POST, DELETE, PUT < Access-Control-Allow-Headers: X-Requested-With, Content-Type, X-Codingpedia < Vary: Accept-Encoding < Content-Length: 55 * Server Jetty(9.0.7.v20131107) is not blacklisted < Server: Jetty(9.0.7.v20131107) < * STATE: PERFORM => DONE handle 0x600056180; line 1626 (connection #0) * Connection #0 to host localhost left intact * Expire cleared The podcast you specified has been successfully updated 2.4. DELETE resource 2.4.1.
DELETE second inserted podcast – 204 (NO_CONTENT) Request $ curl -v -X DELETE http://localhost:8888/demo-rest-jersey-spring/podcasts/2 Response < HTTP/1.1 204 No Content < Content-Type: text/html < Access-Control-Allow-Origin: * < Access-Control-Allow-Methods: GET, POST, DELETE, PUT < Access-Control-Allow-Headers: X-Requested-With, Content-Type, X-Codingpedia < Vary: Accept-Encoding * Server Jetty(9.0.7.v20131107) is not blacklisted < Server: Jetty(9.0.7.v20131107) < * Excess found in a non pipelined read: excess = 42 url = /demo-rest-jersey-spring/podcasts/2 (zero-length body) * STATE: PERFORM => DONE handle 0x600056180; line 1626 (connection #0) * Connection #0 to host localhost left intact * Expire cleared 2.4.2. GET deleted podcast – 404 (NOT_FOUND) Request curl -v http://localhost:8888/demo-rest-jersey-spring/podcasts/2 | python -m json.tool Response < HTTP/1.1 404 Not Found < Content-Type: application/json < Access-Control-Allow-Origin: * < Access-Control-Allow-Methods: GET, POST, DELETE, PUT < Access-Control-Allow-Headers: X-Requested-With, Content-Type, X-Codingpedia < Vary: Accept-Encoding < Content-Length: 306 * Server Jetty(9.0.7.v20131107) is not blacklisted < Server: Jetty(9.0.7.v20131107) < { [data not shown] * STATE: PERFORM => DONE handle 0x600056180; line 1626 (connection #0) 100 306 100 306 0 0 8916 0 --:--:-- --:--:-- --:--:-- 13304 * Connection #0 to host localhost left intact * Expire cleared { "code": 404, "developerMessage": "Verify the existence of the podcast with the id 2 in the database", "link": "http://www.codingpedia.org/ama/tutorial-rest-api-design-and-implementation-in-java-with-jersey-and-spring/", "message": "The podcast you requested with id 2 was not found in the database", "status": 404 } 2.5. Bonus operations 2.5.1. 
Add podcast from an application/x-www-form-urlencoded form Request POST with urlencoded curl -v --data-urlencode "title=Day in Tech History" --data-urlencode "linkOnPodcastpedia=http://www.podcastpedia.org/podcasts/766/Day-in-Tech-History" --data-urlencode "feed=http://www.dayintechhistory.com/feed/podcast" Response < HTTP/1.1 201 Created < Location: http://localhost:8888/demo-rest-jersey-spring/podcasts/null < Content-Type: text/html < Access-Control-Allow-Origin: * < Access-Control-Allow-Methods: GET, POST, DELETE, PUT < Access-Control-Allow-Headers: X-Requested-With, Content-Type, X-Codingpedia < Vary: Accept-Encoding < Content-Length: 81 * Server Jetty(9.0.7.v20131107) is not blacklisted < Server: Jetty(9.0.7.v20131107) < * STATE: PERFORM => DONE handle 0x600056180; line 1626 (connection #0) * Connection #0 to host localhost left intact * Expire cleared A new podcast/resource has been created at /demo-rest-jersey-spring/podcasts/null Note: I am still at the beginning of using curl, so please, if you have any suggestions, leave a comment. Thank you. Resources: Curl, Cygwin, How to Set Up a Python Development Environment on Windows, Python.org. Reference: How to test a REST api from command line with curl from our JCG partner Adrian Matei at the Codingpedia.org blog....
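Stepping back from the raw transcripts, the update semantics the tutorial demonstrates (PUT rejecting an incomplete representation with 400, POST merging into an existing resource or returning 404) can be sketched in a few lines of Python. This is an illustrative model only: the helper names and the in-memory store are assumptions, not the tutorial's actual Java/Jersey code, though the status codes and messages mirror the responses shown above.

```python
# Required properties per the 400 response above (the response itself contains
# the typo "lnkOnPodcastpedia"; the actual JSON field is linkOnPodcastpedia).
REQUIRED = ("id", "title", "feed", "linkOnPodcastpedia", "description")

def full_update(store, pid, body):
    """PUT semantics (2.3.1/2.3.2): reject unless every property is present."""
    missing = [p for p in REQUIRED if p not in body]
    if missing:
        return 400, "Please specify all properties for Full UPDATE"
    store[pid] = dict(body)  # replace the whole resource
    return 200, "The podcast you specified has been fully updated"

def partial_update(store, pid, body):
    """POST semantics (2.3.3/2.3.4): merge into an existing resource or 404."""
    if pid not in store:
        return 404, "The resource you are trying to update does not exist in the database"
    store[pid].update(body)  # merge only the supplied properties
    return 200, "The podcast you specified has been successfully updated"
```

The key design difference the transcripts illustrate is that a full update replaces the stored representation, so every property must be supplied, while a partial update merges into it, so a body with only a new title is acceptable.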

Not using UML on Projects is Fatal

The Unified Modeling Language (UML) was adopted as a standard by the OMG in 1997, almost 20 years ago. But despite its longevity, I’m continually surprised at how few organizations actually use it. Code is the ultimate model for software, but it is like the trees of a forest. You can see a couple, but only a few people can see the entire forest by just looking at the code. For the rest of us, diagrams are the way to see the forest, and UML is the standard for diagrams. They say, “A picture is worth a thousand words”, and this is true for code; even on a large monitor you can only see so many lines of code. Every other engineering discipline has diagrams for complex systems, e.g. design diagrams for airplanes, blueprints for buildings. In fact, the diagrams need to be created and approved BEFORE the airplane or building is created. Contrast that with software, where UML diagrams are rarely produced, or if they are produced, they are produced as an afterthought. The irony is that the people pushing to build the architecture quickly say that there is no time to make diagrams, but they are the first people to complain when the architecture sucks. UML is key to planning (see Not planning is for losers). I think this happens because developers, like all people, are focused on what they can see and touch right now. It is easier to try to code a GUI interaction or tackle database update problems than it is to work at an abstract level through the interactions that are taking place from GUI to database. Yet this is where all the architecture is. Good architecture makes all the difference in medium and large systems. Architecture is the glue that holds the software components in place and defines communication through the structure. If you don’t plan the layers and modules of the system then you will continually be making compromises later on.
In particular, medium to large projects (>10,000 function points) are at a very high risk of failure if you don’t consider the architectural issues. Considering that only 3 out of 10 software projects are successful, only a fool would skip planning the architecture (see Failed? You get what you deserve!). Good diagrams, in particular UML, allow you to abstract away all the low-level details of an implementation and let you focus on planning the architecture. This higher level planning leads to better architecture and therefore better extensibility and maintainability of software. If you are a good coder then you will make a quantum leap in your ability to tackle large problems by being able to work through abstractions at a higher level. How often do we find ourselves unable to implement simple features simply because the architecture doesn’t support it? Well, the architecture doesn’t support it because we spend very little time developing the blueprint for the architecture of the system. UML diagrams need to be produced at two levels: the analysis or ‘what’ level, and the design or ‘how’ level. Analysis UML diagrams (class, sequence, collaboration) should be produced early in the project and support all the requirements. Ideally you use a requirements methodology that allows you to trace easily from the requirements onto the diagrams. Analysis diagrams do not have implementation classes on them, i.e. no vendor-specific classes. The goal is to identify how the high-level concepts (user, warehouse, product, etc.) relate to each other. These analysis-level UML diagrams will help you to identify gaps in the requirements before moving to design. This way you can send your BAs and product managers back to collect missing requirements when you identify missing elements before you get too far down the road. Once the analysis diagrams validate that the requirements are relatively complete and consistent, then you can create design diagrams with the implementation classes.
In general, the analysis diagrams map one-to-many to the design diagrams. Since you have validated the architecture at the analysis level, you can now do the design level without worrying about compromising the architectural integrity. Once the design level is complete you can code without compromising the design level. When well done, the analysis UML, design UML, and code are all in sync. Good software is properly planned and executed from the top down. It is mentally tougher to create software this way, but the alternative is continuous patches and never-ending bug-fix cycles. So remember the following example from Covey’s The 7 Habits of Highly Effective People: You enter a clearing where a man is furiously sawing at a large log, but he is not making any progress. You notice that the saw is dull and is unable to cut the wood, so you say, “Hey, if you sharpen the saw then you will saw the log faster”. To which the man replies, “I don’t have time, I’m too busy sawing the log”. Don’t be the guy sawing with a dull saw. UML is the tool to sharpen the saw; it does take time to learn and apply, but you will save yourself much more time and be much more successful. Bibliography: Covey, Stephen. The 7 Habits of Highly Effective People; OMG, Unified Modeling Language™ (UML®) Resource Page. Reference: Not using UML on Projects is Fatal from our JCG partner Dalip Mahal at the Accelerated Development blog....

Open Source Cloud Formation with Minotaur for Mesos, Kafka and Hadoop

Today I am happy to announce “Minotaur”, which is our Open Source AWS-based infrastructure for managing big data open source projects including (but not limited to): Apache Kafka, Apache Mesos and Cloudera’s Distribution of Hadoop. Minotaur is based on AWS CloudFormation. The following labs are currently supported: Apache Mesos, Apache Kafka, Apache Zookeeper, Cloudera Hadoop, Golang Kafka Consumer, and Golang Kafka Producer. Supervisor Supervisor is a Docker-based image that contains all the necessary software to manage nodes/resources in AWS. Supervisor set-up: clone this repo to repo_dir, then cd to the repo_dir/supervisor folder. Before trying to build the docker image, you must put some configuration files under the config directory: aws_config This file is just a regular aws-cli config; you must paste your secret and access keys, provided by Amazon, in it: [default] output = json region = us-east-1 aws_access_key_id = SECRET_KEY aws_secret_access_key = ACCESS_KEY Do not add or remove any extra whitespace (especially before and after the “=” sign in keys). private.key This is your private SSH key, the public part of which is registered on the Bastion host. environment.key This is a shared key for all nodes in the environment Supervisor is supposed to manage. ssh_config This is a regular SSH config file; you only have to change your_username (this is the one registered on Bastion). BASTION_IP is handled dynamically when the container is built. # BDOSS environment Host 10.0.2.* IdentityFile ~/.ssh/environment.key User ubuntu ProxyCommand ssh -i ~/.ssh/private.key your_username@BASTION_IP nc %h %p Host 10.0.*.* IdentityFile ~/.ssh/environment.key User ubuntu ProxyCommand ssh -i ~/.ssh/private.key your_username@BASTION_IP nc %h %p Finally, exec up.sh: if this is the first time you’re launching supervisor, it will take some time to build. Subsequent up’s will take seconds. Using supervisor Now you can cd to /deploy/labs/ and deploy whatever you want.
Example: minotaur lab deploy mesosmaster -e bdoss-dev -d test -r us-east-1 -z us-east-1a Creating new stack 'mesos-master-test-bdoss-dev-us-east-1-us-east-1a'... Stack deployed. This will spin up a mesos master node in the “testing” deployment. awsinfo Supervisor has a built-in “awsinfo” command, which relies on the AWS API and provides brief info about running machines. It is also capable of searching through that info. Usage example awsinfo – will display brief info about all nodes running in AWS: root@supervisor:/deploy# awsinfo Cloud: bdoss/us-east-1 Name Instance ID Instance Type Instance State Private IP Public IP ---- ----------- ------------- -------------- ---------- --------- nat.bdoss-dev i-c46a0b2a m1.small running 10.0.2.94 54.86.153.142 bastion.bdoss-dev i-3faa69de m1.small running 10.0.0.207 None mesos-master.test.bdoss-dev i-e80ddc09 m1.small terminated None None mesos-slave.test.bdoss-dev i-e00ddc01 m1.small terminated None None awsinfo mesos-master – will display info about all mesos-master nodes running in AWS: root@supervisor:/deploy/labs# awsinfo mesos-master Cloud: bdoss/us-east-1 Name Instance ID Instance Type Instance State Private IP Public IP ---- ----------- ------------- -------------- ---------- --------- mesos-master.test.bdoss-dev i-e80ddc09 m1.small terminated None None awsinfo 10.0.2 – match a private/public subnet: root@supervisor:/deploy/labs# awsinfo 10.0.2 Cloud: bdoss/us-east-1 Name Instance ID Instance Type Instance State Private IP Public IP ---- ----------- ------------- -------------- ---------- --------- nat.bdoss-dev i-c46a0b2a m1.small running 10.0.2.94 54.86.153.142 mesos-master.test.bdoss-dev i-e96ebd08 m1.small running 10.0.2.170 54.172.160.254 Vagrant If you can’t use Docker directly for some reason, there’s a Vagrant wrapper VM for it.
Before doing anything with Vagrant, complete the above steps for Docker, but don’t execute the up.sh script. Just cd into the vagrant directory and exec vagrant up, then vagrant ssh (nothing special here yet). When you exec vagrant ssh, the docker container build process will spawn immediately, so wait a bit and let it complete. Now you’re inside a Docker container nested in a Vagrant VM and can proceed with deployment in the same manner as described for docker. All the following vagrant ssh‘s will spawn the Docker container almost immediately. Once you are inside of the supervisor image, the minotaur.py script may be used to provision an environment and labs. The rest of this readme assumes that the script is executed from within the supervisor container. Minotaur Commands List Infrastructure Components root@supervisor:/deploy# ./minotaur.py infrastructure list Available deployments are: ['bastion', 'iampolicies', 'iamusertogroupadditions', 'nat', 'sns', 'subnet', 'vpc'] Print Infrastructure Component Usage root@supervisor:/deploy# ./minotaur.py infrastructure deploy bastion -h usage: minotaur.py infrastructure deploy bastion [-h] -e ENVIRONMENT -r REGION -z AVAILABILITY_ZONE [-i INSTANCE_TYPE] optional arguments: -h, --help show this help message and exit -e ENVIRONMENT, --environment ENVIRONMENT CloudFormation environment to deploy to -r REGION, --region REGION Geographic area to deploy to -z AVAILABILITY_ZONE, --availability-zone AVAILABILITY_ZONE Isolated location to deploy to -i INSTANCE_TYPE, --instance-type INSTANCE_TYPE AWS EC2 instance type to deploy Deploy Infrastructure Component In this example, the bdoss-dev bastion already existed, so the CloudFormation stack was updated with the current template. root@supervisor:/deploy# ./minotaur.py infrastructure deploy bastion -e bdoss-dev -r us-east-1 -z us-east-1a Template successfully validated. Updating existing 'bastion-bdoss-dev-us-east-1-us-east-1a' stack... Stack updated. List Labs List all supported labs.
root@supervisor:/deploy# ./minotaur.py lab list Available deployments are: ['clouderahadoop', 'gokafkaconsumer', 'gokafkaproducer', 'kafka', 'mesosmaster', 'mesosslave', 'zookeeper'] Print Lab Usage Print the kafka lab usage. root@supervisor:/deploy# ./minotaur.py lab deploy kafka -h usage: minotaur.py lab deploy kafka [-h] -e ENVIRONMENT -d DEPLOYMENT -r REGION -z AVAILABILITY_ZONE [-n NUM_NODES] [-i INSTANCE_TYPE] [-v ZK_VERSION] [-k KAFKA_URL] optional arguments: -h, --help show this help message and exit -e ENVIRONMENT, --environment ENVIRONMENT CloudFormation environment to deploy to -d DEPLOYMENT, --deployment DEPLOYMENT Unique name for the deployment -r REGION, --region REGION Geographic area to deploy to -z AVAILABILITY_ZONE, --availability-zone AVAILABILITY_ZONE Isolated location to deploy to -n NUM_NODES, --num-nodes NUM_NODES Number of instances to deploy -i INSTANCE_TYPE, --instance-type INSTANCE_TYPE AWS EC2 instance type to deploy -v ZK_VERSION, --zk-version ZK_VERSION The Zookeeper version to deploy -k KAFKA_URL, --kafka-url KAFKA_URL The Kafka URL Deploy Lab Deploy a 3-broker Kafka cluster. root@supervisor:/deploy# ./minotaur.py lab deploy kafka -e bdoss-dev -d kafka-example -r us-east-1 -z us-east-1a -n 3 -i m1.small Template successfully validated. Creating new 'kafka-bdoss-dev-kafka-example-us-east-1-us-east-1a' stack... Stack deployed. Reference: Open Source Cloud Formation with Minotaur for Mesos, Kafka and Hadoop from our JCG partner Joe Stein at the All Things Hadoop blog....
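The help output above suggests a nested sub-command layout (component type, then action, then a name plus common options). A minimal argparse sketch of that structure is below; it is an assumed reconstruction for illustration, not the real minotaur.py, which also wires each deploy to a CloudFormation template.

```python
import argparse

def build_parser():
    """Sketch of a minotaur.py-style CLI: <component> {list, deploy <name>} -e -r -z."""
    parser = argparse.ArgumentParser(prog="minotaur.py")
    top = parser.add_subparsers(dest="component", required=True)
    for component in ("infrastructure", "lab"):
        sub = top.add_parser(component).add_subparsers(dest="action", required=True)
        sub.add_parser("list")  # e.g. ./minotaur.py lab list
        deploy = sub.add_parser("deploy")
        deploy.add_argument("name")  # e.g. bastion, kafka, mesosmaster
        deploy.add_argument("-e", "--environment", required=True,
                            help="CloudFormation environment to deploy to")
        deploy.add_argument("-r", "--region", required=True,
                            help="Geographic area to deploy to")
        deploy.add_argument("-z", "--availability-zone", required=True,
                            help="Isolated location to deploy to")
        deploy.add_argument("-i", "--instance-type", default="m1.small",
                            help="AWS EC2 instance type to deploy")
    return parser

# Parse the kind of invocation shown in the transcripts above.
args = build_parser().parse_args(
    ["lab", "deploy", "kafka", "-e", "bdoss-dev", "-r", "us-east-1", "-z", "us-east-1a"]
)
```

The appeal of this layout is that every lab and infrastructure component shares the same -e/-r/-z vocabulary, so stack names like 'kafka-bdoss-dev-kafka-example-us-east-1-us-east-1a' can be derived mechanically from the parsed options.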

Working With Legacy Code, What does it Really Mean

At the end of January I am going to talk at Agile Practitioners 2015 TLV. I’ll be talking about Legacy Code and how to approach it. As the convention’s name implies, we’re talking practical stuff. So what is practical in working with legacy code? Is it how to extract a method? Or maybe it’s how to introduce a setter for a static singleton? Break a dependency? There are so many actions to take while working on legacy code. But I want to stop for a minute and think. What does it mean to work on legacy code? How do we want the code to be after the changes? Why do we need to change it? Do we really need to change it? Definition Let’s start with the definition of Legacy Code. If you search the web you will see definitions such as “…Legacy code refers to an application system source code type that is no longer supported…” (from: techopedia) People may think that legacy code is old, patched. The definitions above are correct (old, patched, un-maintained, etc.), but I think that the definition coined by Michael Feathers (Working Effectively with Legacy Code) is better. He defined legacy code as: Code Without Tests I like to add that legacy code is usually Code that cannot be tested. So basically, if I wrote code 10 minutes ago that is not tested, and not testable, then it’s already Legacy Code. Questioning the Code When approaching code (any code), I think we should ask ourselves the following questions constantly: What’s wrong with this code? How do we want the code to be? How can I test this piece of code? What should I test? Am I afraid to change this part of code? Why Testable Code? Why do we want to test our code? Tests are the harness of the code. It’s the safety net. Imagine a circus show with a trapeze. There’s a safety net below (or a mattress). The athletes can perform, knowing that nothing harmful will happen if they fall (well, maybe their pride). Recently I went to an indie circus show. The band was playing and a girl came to do some tricks on a high rope.
But before she even started, she fixed a mattress below. And this is what working with legacy code is all about: Put a mattress in place before you start doing tricks… Or, in our words, add tests before you work on / change the legacy code. Think about it: the list of questions above can be answered (or thought of) just by understanding that we need to write tests for our code. Once you put up your safety net, you’re not afraid to jump ⇒ once you write tests, you can add features, fix bugs, refactor. Conclusion In this post I summarized what it means to work with legacy code. It’s simple: working with legacy code is knowing how to write tests for untested code. The crucial thing is understanding that we need to do that. Understanding that we need to invest the time to write those tests. I think that this is as important as knowing the techniques themselves. In following post(s) I will give some example techniques. Reference: Working With Legacy Code, What does it Really Mean from our JCG partner Eyal Golan at the Learning and Improving as a Craftsman Developer blog....
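The "mattress first" advice is essentially what Feathers calls characterization testing: before changing untested code, pin down what it currently does, right or wrong, so refactoring has a safety net. A minimal Python sketch, where the "legacy" function is made up purely for illustration:

```python
import unittest

# A made-up "legacy" function: in production, no tests, intent only guessable.
def price_after_discount(price, customer_type):
    if customer_type == "gold":
        return price - price * 0.2
    if customer_type == "silver":
        return price - price * 0.1
    return price

# Characterization tests: they record the code's *current* behaviour, observed
# by running it, not the behaviour we wish it had. This is the mattress.
class TestPriceAfterDiscount(unittest.TestCase):
    def test_gold_customer_gets_20_percent_off(self):
        self.assertAlmostEqual(price_after_discount(100.0, "gold"), 80.0)

    def test_silver_customer_gets_10_percent_off(self):
        self.assertAlmostEqual(price_after_discount(100.0, "silver"), 90.0)

    def test_unknown_customer_type_pays_full_price(self):
        self.assertEqual(price_after_discount(100.0, "unknown"), 100.0)
```

Only once these tests are green do you start changing the function; if a change breaks one, you either introduced a bug or consciously changed behaviour, and the tests make that decision visible.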

Outsourcing, Do It Right

Most of the time, outsourcing is a nightmare. Companies outsource the activities that are not their core activity but are nevertheless needed for the business, to get the job done as cheaply as possible. They look at it as a necessary evil, something they would rather not know all about: forget the details and have it done. IT is often outsourced with this mindset, and that causes disaster. The problem does not lie in the fact that IT itself is outsourced. It is the mindset. IT can be outsourced, and in many cases this is rational to do. The problem is that management often does not realize that only the T (technology) can be outsourced, but not the I (information). When you outsource the Information Technology running your business, outsource only Technology but never Information! Note that in the expression “information technology” the word “information” is an adjective to “technology”; therefore when we speak of IT outsourcing we speak about the outsourcing of the technology, not the handling of the information. If you are a manufacturer, IT is not your core business. If you are an insurance company, or a bank, IT is not your core business. If you are doing anything except IT, probably IT is not your core business. You have some system that does the bookkeeping, resource planning, customer relationship management and so on. This is the technology. Your business can run fine even if you do not own the knowledge that runs these systems. Does it help you to be a better manufacturer, bank or whatever you are if you own the information technology? Probably not. It does not give you a business advantage. If your bookkeeping is as good as your competitor’s then this is OK as a foundation to compete on other grounds. The information, however (how you run your business, what makes you a better manufacturer, bank, or whatever than the others), is important to own. That you should not outsource.
How you produce, what the best business processes in your company are, and how you can adapt to market changes are core business. If you do not care about that and you outsource these core activities to a software technology company, they most probably will not provide the best fitting solution for you, and costs may increase, profitability may drop, and your general competitiveness weakens. This task needs business knowledge, and companies can outsource it only to companies that have the knowledge and skills. Some software companies do, but in this case these companies are not only software companies but rather consulting-and-software companies. Sometimes you cannot tell where the border is. You can hire a consulting company, or you can hire experts as employees. The difference is paperwork, and sometimes taxation. You need the knowledge and you should get it to serve your business goals. Having all that said, let’s focus on technology outsourcing, assuming we know where business information ends and technology starts. The major reason to outsource IT is to save cost. The outsourcing company can do it cheaper. I have seen many times that huge companies in decline cut off the IT department, formed a new company, then sold and contracted this company for IT outsourcing. The next Monday everybody was sitting at the same desk, doing the same work. How could it be cheaper, other than through some nasty taxation tricks, fewer benefits from the new company to the employees and similar effects? In some cases such IT companies, cut off from the body of a huge corporation, can survive, but I have never seen any flourish. They inherit the company culture of the corporation, which I do not say is wrong, but it may just not be the best and most competitive for a small IT company. They stay alive for many years, but I know of none that has stayed alive for more than a decade and is a success story. Whenever you see such success stories remember that: To have a success story you need success and story. One of them is optional.
Really successful IT companies, as I see it, start small and grow big. They learn, as they go, how to do IT in a professional way. Focusing on the core and working for multiple clients gives the real advantages. The solution at the end of the day can be cheaper: using the resources more effectively, having better distribution and better skill matching. If your problem needs 7.5 people, you cannot hire that amount. People are not really effective when chopped in half. You should hire at least 8 people and find some occupation for the “half” person. An IT company can find that occupation more effectively. They can also easily allocate 15 people, each half-time, if that may be more effective. If somebody does not fit the task, you need not fire the person; the outsourcing company will find the person that fits best, and a task for the person that fits his/her skills. These are the possibilities when outsourcing is done right. If the people of the IT company work only for you and they never submerge into the culture of the mother company, then you lose the advantage that comes from the knowledge gathered. If the outsourcing company just hires people and sells them out right away, they are simply slave traders. That is not the way. If outsourcing is “Let’s get others to do the work that does not mean much to us as cheaply as possible.”, then it is wrong. If it does not mean much to “us”, we should not do it. We should not outsource it. It should rather be eliminated. If it cannot be eliminated, then it does mean much. Just as much as it means for the outsourcing company. Your business depends on it. Reference: Outsourcing, Do It Right from our JCG partner Peter Verhas at the Java Deep blog....

The Product Owner’s Checklist for the First Sprint

Summary Scrum is a popular agile framework for developing a product with the right features and the right technologies. Unfortunately, it does not state the prerequisites for kicking off a Scrum project and for starting the first sprint. As a consequence, I find it not uncommon that product managers and product owners are unsure about the work they should do prior to the first sprint. This post offers a checklist to help you do the right upfront product management work. The Essential Upfront Product Management Work Scrum is agnostic about the work that has to be done before you can create an initial product backlog and run the first sprint, be it for a brand-new product or for an existing one that you have just taken on. In fact, it doesn’t make any recommendations. But this does not mean, of course, that you don’t have to or should not do any work before you move into Scrum and kick off the first sprint. As the product owner, you should do just enough work to have the following artefacts in place and be able to answer the following questions:

- Shared Vision: What are the purpose and the motivation for developing, marketing, and selling the product? What is the positive change the product should create?
- Valid Product Strategy: What market or market segment does the product serve? Who are the customers and the users? What problem does it solve, or which benefit does it provide? Why would people choose it over a competing offer? What makes it special or desirable? What are the business goals? Why should your company invest in the product?
- Valid Business Model: How does the product generate the desired business benefits? How is it monetised? What are the cost factors for developing, marketing, and selling the product? What marketing and sales channels are used?
- Realistic Product Roadmap: How is the product going to be delivered over the next 12 months? What are the release dates? What goals or benefits do the individual releases provide? What metrics are used to determine success?
- Personas: What characteristics, attitudes, behaviours, and goals do the customers and users have? Who is the primary persona?

The items in the list above form a checklist to help you assess if you are ready to apply Scrum. You may have to tailor the checklist to your specific needs. For an in-house product, a valid business model is typically less applicable than for a commercial one, for instance. Similarly, if you build a product for a client then the strategy should address the client’s business goals rather than yours. You can choose from a range of tools to capture the answers to the questions above. For instance, my Product Vision Board describes the vision and the strategy, Alexander Osterwalder’s Business Model Canvas defines the business model, my GO Roadmap captures the product roadmap, and my persona template helps you describe the personas. The point is not to use a specific tool but to ask the right questions and to answer them effectively. To put it differently, if you struggle with the questions then a tool alone is unlikely to help you. Vision and Strategy Take Priority Of the four artefacts listed above, the vision and the product strategy are particularly crucial. You should hence pay particular attention to them and create them first. Here is why: If you don’t have a shared vision, then you lack an overarching goal and a reason for creating the product. You will consequently struggle to motivate, guide, and align the stakeholders. If you don’t know who the customers and the users are and why they would buy and use your product, then you cannot make informed decisions about the user experience and the product features. Imagine writing a user story without knowing who the user is. You would have to speculate and dream up the story. What’s more, collecting meaningful feedback becomes virtually impossible, as you are likely to ask the wrong people and receive unhelpful feedback.
This would cause you to draw the wrong conclusions and make the wrong changes to your product; or you would conclude that the users don’t have a clue, that you should ignore their input, and that you know what’s best for them anyway. Neither approach maximises the chances of creating a successful product. Finally, if you don’t know what the business goals are and why your company should invest in your product, you don’t understand the value that the product should create for your business. This will make it difficult to attract funding and to get the right people. Business Model, Product Roadmap, and Personas Come Second Having a vision and product strategy is great but not always enough to start the first sprint effectively. For commercial software products it is also important to understand how you can meet the business goals and how you can monetise your product. Otherwise you won’t be able to create a financial forecast, and your company is unlikely to be in a position to make an informed investment decision. Similarly, a product roadmap details the product strategy and states how it will be implemented. Without a roadmap, you haven’t made explicit when major releases will happen, what benefits they should provide, and how you are going to determine their success. This will make it difficult to align the stakeholders including marketing, sales, and support, to staff the development team, and to show that you have done a great job and deserve a pay rise or a bonus. Without personas I find it difficult to discover the right user interaction, the right user interface design, and the right functionality. Who do the user stories serve and why do they add value for the users? And without a primary persona, I find it hard to prioritise and decide, for instance, if a story or scenario should make it into the release or not and if it can be postponed or only partially provided. This makes managing the product backlog very challenging. 
Do Just Enough Upfront Work
While you don’t want to rush into the first sprint, you don’t want to spend more time and effort than absolutely necessary to answer the questions in my checklist above. A great way to carry out the upfront work is to employ a Lean Startup-based approach, as the picture below illustrates.
The diagram above shows two key steps, problem validation and product development. The first step iteratively creates a shared vision, a valid product strategy and business model, a realistic product roadmap, and helpful personas. This is likely to require some research and validation work, for instance, direct observation, problem interviews, and developing minimum viable products (MVPs). The second step leverages Scrum and builds the actual product, a product with the right features and the right technologies. It starts with creating an initial product backlog and finishes with the launch of the new release. I describe the two steps in more detail in my post “New Product Development with Lean Startup and Scrum”. How much upfront work is needed depends on how many risks your strategy and your business model contain. The more risks there are, the more time and effort you typically have to invest. The risks in turn are related to your growth strategy and to the technologies used to build the product. Developing a new product for a new market carries significantly more risk than updating an existing product for an existing market and therefore tends to require more upfront work, for instance. The amount of time you may have to spend therefore varies. It can range from a few days for a straightforward product update to a few months for a diversification effort. As the product owner, you should lead the problem validation effort and you should shape the vision, the strategy, the business model, the roadmap, and the personas.
If that’s not the case, then make sure that the necessary prep work has been carried out, that you know the answers to the questions stated in the checklist above, and that the appropriate artefacts are available. If that’s still not the case, you should consider delaying the start of the first sprint and carrying out more research and validation work.
Reference: The Product Owner’s Checklist for the First Sprint from our JCG partner Roman Pichler at Roman Pichler’s blog.

Sacrilege – a Custom SWT Scrollbar

SWT is a thin abstraction layer on top of native OS widgets, which is a very good thing if you want your applications to integrate well with the OS look and feel. As a trade-off, however, this approach limits styling capabilities significantly. In particular, I often find the native SWT scrollbar disruptive in more subtle view layouts. Coming across this problem recently, I gave a custom SWT scrollbar widget a try. This post introduces the outcome – a simple slider control, usable as an SWT Slider replacement or Scrollbar overlay.
SWT Scrollbar
The OS scrollbar abstraction of SWT has two manifestations: org.eclipse.swt.widgets.ScrollBar and org.eclipse.swt.widgets.Slider. The differences between the two widgets are explained in the following JavaDoc passage: ‘Scroll bars are not Controls. On some platforms, scroll bars that appear as part of some standard controls such as a text or list have no operating system resources and are not children of the control. For this reason, scroll bars are treated specially. To create a control that looks like a scroll bar but has operating system resources, use Slider.’ This means the Slider provides at least a minimum of programmatic influence, like setting its bounds. But derivatives of org.eclipse.swt.widgets.Scrollable (the superclass of all controls that have standard scroll bars) just provide the read-only abstraction ScrollBar. This is still very useful, for example to react to scroll events, but it leaves practically no room for look-and-feel adjustments. And the application range of sliders is usually limited to custom components that – for whatever reason – cannot use the scrollbars provided by the Composite super class.
FlatScrollBar
Although there were some cross-platform obstacles to overcome, creating a custom slider was straightforward.
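At its core, such a custom slider is little more than thumb-geometry arithmetic: mapping the scroll range onto the track length in pixels. A minimal sketch follows; the constant and method names are my own for illustration, not the FlatScrollBar API:

```java
// Pure arithmetic behind a custom scrollbar thumb (illustrative sketch;
// the real FlatScrollBar may use different values and rounding).
final class ThumbGeometry {

    // Hypothetical minimum thumb length in pixels, so the thumb stays
    // grabbable even for very large scroll ranges.
    static final int MIN_THUMB = 17;

    // Thumb length: the visible fraction (thumb / range) of the track.
    static int thumbLength(int track, int thumb, int minimum, int maximum) {
        int range = maximum - minimum;
        return Math.max(MIN_THUMB, track * thumb / range);
    }

    // Thumb position: map the current selection onto the pixels that are
    // left over once the thumb itself is subtracted from the track.
    static int thumbOffset(int track, int thumb, int minimum, int maximum, int selection) {
        int length = thumbLength(track, thumb, minimum, maximum);
        int scrollRange = maximum - minimum - thumb; // selectable range
        if (scrollRange <= 0) {
            return 0; // content fits entirely, nothing to scroll
        }
        return (track - length) * (selection - minimum) / scrollRange;
    }

    private ThumbGeometry() {}
}
```

For a 200px track showing 25 of 100 units, this yields a 50px thumb that travels from offset 0 (selection at minimum) to 150 (selection at maximum minus thumb), which is the same contract Slider and ScrollBar expose via minimum, maximum, thumb, and selection.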
The following picture shows the native slider on the left shell in comparison to the FlatScrollBar control used on the right shell (OS: Windows 7).
It is noteworthy that the custom slider expands on mouse-over, as shown by the vertical bar. The horizontal bar depicts the compact base appearance with a discreet thumb and selection indicator. In general, the FlatScrollBar essentially mimics the behavior, semantics, and API of a Slider/ScrollBar.
Obviously I decided to omit the arrow up and down buttons, but this is just an optical adjustment. While not configurable yet, the arrow buttons can be revived by changing a single constant value in the source code.
ScrollableAdapter
But what about the scrollbars of Scrollable derivatives like text, tree, or table controls? Being part of the OS control itself, as stated above, they are simply not replaceable. Theoretically, one could deactivate scrolling and use some kind of custom scrolled composite to simulate scrolling behavior. But this has several downsides; I gave this approach a try and the results were not satisfying. However, wrapping a scrollable into an overlay adapter-composite seems more promising. So far I was able to adapt successfully to Tree and Table controls.
And this is how adapter creation looks:

new FlatScrollBarTable( parent, ( adapter ) -> new Table( adapter, SWT.NONE ) );

Easy enough, isn’t it? The second parameter is a generic factory (ScrollableFactory<T extends Scrollable>) that allows adapting to various scrollable types. But as a generic overlay implementation is not possible, for now only tree and table adapters are available. The adapter provides access to the table instance via the method FlatScrollBarTable#getTable(). This also allows adapting to JFace tree and table viewers without a problem. As native scrollbars on Mac OS look acceptable out of the box, the adapter refrains from custom overlays on that platform; only GTK and MS Windows platforms are affected.
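The wiring behind such a factory-driven adapter can be sketched with plain-Java stand-ins for the SWT types. All classes below are simplified illustrations, not the actual SWT or Xiliary API (the real Table constructor also takes style bits, for instance); only the factory pattern itself is taken from the post:

```java
// Simplified stand-ins for the SWT widget hierarchy (illustration only).
class Composite {
    final Composite parent;
    Composite(Composite parent) { this.parent = parent; }
}
class Scrollable extends Composite {
    Scrollable(Composite parent) { super(parent); }
}
class Table extends Scrollable {
    Table(Composite parent) { super(parent); }
}

// Mirrors the ScrollableFactory<T extends Scrollable> idea: the adapter
// hands itself to the factory, so the scrollable is created as a child of
// the adapter, which can then paint custom scrollbars on top as an overlay.
interface ScrollableFactory<T extends Scrollable> {
    T create(Composite adapter);
}

class FlatScrollBarTable extends Composite {
    private final Table table;

    FlatScrollBarTable(Composite parent, ScrollableFactory<Table> factory) {
        super(parent);
        this.table = factory.create(this); // table becomes a child of the adapter
    }

    Table getTable() { return table; }
}
```

Usage then matches the snippet from the post: `new FlatScrollBarTable(parent, adapter -> new Table(adapter))`. The key design point is that client code never parents the Table directly; the adapter interposes itself, which is what makes the overlay possible without touching the native control.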
Hence there is no Mac screenshot in the title image. However, the FlatScrollBar control itself works well on OS X too.
Conclusion
Using the FlatScrollBar and the ScrollableAdapter in one of our projects looks promising so far. Of course, the code base is pretty new and might contain some undetected issues or flaws. However, I found it worthwhile to introduce these controls to an outside audience, which might help to reveal such flaws or lead to further requirements. I am curious to see how sustainable this approach will be and whether it is possible to also adapt to text and/or styled-text controls. If you want to check out the controls, they are part of the com.codeaffine.eclipse.swt feature of the Xiliary P2 repository available at http://fappel.github.io/xiliary
In case you want to have a look at the code or file an issue, you might also have a look at the Xiliary GitHub project. Look for FlatScrollbarDemo, FlatScrollBarTreeDemo and FlatScrollBarTableDemo for usage examples: https://github.com/fappel/xiliary
Reference: Sacrilege – a Custom SWT Scrollbar from our JCG partner Frank Appel at the Code Affine blog.
Java Code Geeks and all content copyright © 2010-2015, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.